It was given yesterday, but it started earlier; it has been going on for a little while now and some people are just not happy about it all. We see this (at https://www.theguardian.com/technology/2018/may/25/facebook-google-gdpr-complaints-eu-consumer-rights), with the headline ‘Facebook and Google targeted as first GDPR complaints filed‘; they are among the first companies hit. It is a surprise that Microsoft didn’t make the first two in all this, so they will likely get a legal awakening come Monday. When we see “Users have been forced into agreeing new terms of service, says EU consumer rights body”, under such a setting it is even more surprising that Microsoft did not make the cut (for now). So when we see: “the companies have forced users into agreeing to new terms of service; in breach of the requirement in the law that such consent should be freely given. Max Schrems, the chair of Noyb, said: “Facebook has even blocked accounts of users who have not given consent. In the end users only had the choice to delete the account or hit the agree button – that’s not a free choice, it more reminds of a North Korean election process.”“, which is one way of putting it. The GDPR is a monster comprised of well over 55,000 words, roughly 90 pages. The New York Times (at https://www.nytimes.com/2018/05/15/opinion/gdpr-europe-data-protection.html) stated it best almost two weeks ago when they gave us “The G.D.P.R. will give Europeans the right to data portability (allowing people, for example, to take their data from one social network to another) and the right not to be subject to decisions based on automated data processing (prohibiting, for example, the use of an algorithm to reject applicants for jobs or loans). Advocates seem to believe that the new law could replace a corporate-controlled internet with a digital democracy. There’s just one problem: No one understands the G.D.P.R.”
That is not a good setting; it tends to allow for ambiguity on a much higher level, and in light of privacy that has never been a good thing. So when we see “I learned that many scientists and data managers who will be subject to the law find it incomprehensible. They doubted that absolute compliance was even possible” we are introduced to the notion that our goose is truly cooked. The info is at https://www.eugdpr.org/key-changes.html, and when we dig deeper we get small issues like “GDPR makes its applicability very clear – it will apply to the processing of personal data by controllers and processors in the EU, regardless of whether the processing takes place in the EU or not“. And when we see “Consent must be clear and distinguishable from other matters and provided in an intelligible and easily accessible form, using clear and plain language. It must be as easy to withdraw consent as it is to give it” we tend to expect progress and a positive wave. So when we consider Article 21 paragraph 6, where we see: “Where personal data are processed for scientific or historical research purposes or statistical purposes pursuant to Article 89(1), the data subject, on grounds relating to his or her particular situation, shall have the right to object to processing of personal data concerning him or her, unless the processing is necessary for the performance of a task carried out for reasons of public interest“, which reflects on Article 89 paragraph 1, now we have ourselves a ballgame.
You see, there is plenty of media that falls in that category; there is plenty of ‘Public Interest‘. Yet when we take a look at that Article 89, we see: “Processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes, shall be subject to appropriate safeguards, in accordance with this Regulation, for the rights and freedoms of the data subject.“ So what exactly are ‘appropriate safeguards‘, who monitors them, and who decides on what an appropriate safeguard is? We also see “those safeguards shall ensure that technical and organisational measures are in place in particular in order to ensure respect for the principle of data minimisation“; you merely have to look at market research and data manipulation to see that not happening any day soon. Merely setting out demographics and their statistics makes minimisation an issue often enough. We get a partial answer in the final setting: “Those measures may include pseudonymisation provided that those purposes can be fulfilled in that manner. Where those purposes can be fulfilled by further processing which does not permit or no longer permits the identification of data subjects, those purposes shall be fulfilled in that manner.” Yet pseudonymisation is not all it is cracked up to be. Consider the simple example of the NHS (see the image at http://theconversation.com/gdpr-ground-zero-for-a-more-trusted-secure-internet-95951): as a patient is admitted to more than one hospital over a time period, the research is no longer reliable, as the same person would end up with multiple pseudonym numbers, making the process a lot less accurate. OK, I admit ‘a lot less‘ is overstated in this case, yet is that still the case when it is on another subject, like home-to-office travel analyses? What happens when we add loyalty cards, membership cards and student cards to the mix?
At that point, their anonymity is a lot less guaranteed. More importantly, we can accept that those firms will bend over backwards to do the right thing, yet at what stage is anonymisation expected and what is the minimum degree here? Certainly not before the final reports are done, and at that point, what happens when the computer gets hacked? What exactly was an adequate safeguard at that point?
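To make the linkage problem concrete, here is a minimal sketch of per-site pseudonymisation; this is my own illustration, not any actual NHS scheme, and the patient number and salts are invented. When each hospital derives pseudonyms with its own secret salt, the same patient becomes two unlinkable records, which is exactly what breaks cross-site research:

```python
import hashlib

def pseudonymise(patient_id: str, site_salt: str) -> str:
    """Derive a pseudonym by hashing the identifier with a per-site salt.
    Illustrative only; a real deployment would use a keyed HMAC with
    proper key management, not a bare salted hash."""
    digest = hashlib.sha256((site_salt + patient_id).encode()).hexdigest()
    return digest[:12]

# The same (hypothetical) patient admitted to two hospitals:
p1 = pseudonymise("943-476-5919", "hospital-A-salt")
p2 = pseudonymise("943-476-5919", "hospital-B-salt")
print(p1 == p2)  # False: two unlinkable pseudonyms for one person
```

The flip side is the point made above: if the salts were shared so the records could be linked, the pseudonymisation would be that much weaker as a safeguard.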
Article 22 is even more fun to consider in light of banks. So when we see: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her“, consider a person applying for a bank loan: the person interacts and enters the data, the banker gets the results, and we no longer see an approved/denied but a scale, and the banker states ‘Under these conditions I do not see a loan to be a viable option for you, I am so sorry to give you this bad news‘. So at what point was it a solely automated decision? Telling the story, or giving the story based on a credit score: where is it automated, and can that be proven?
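The banker scenario can be sketched as follows; the scoring rule, the bands and the cut-offs are all invented for illustration and do not resemble any real bank's model. The point is that the human only relays a band while the cut-off logic remains automated, which is what makes ‘solely automated’ so hard to pin down:

```python
def credit_score(income: float, debts: float, defaults: int) -> float:
    """Toy scoring rule (hypothetical), clamped to a familiar 300-850 range."""
    score = 600 + 0.002 * (income - debts) - 75 * defaults
    return max(300.0, min(850.0, score))

def banker_view(score: float) -> str:
    # The human only ever sees a band; the thresholds were still automated.
    if score >= 670:
        return "likely viable"
    if score >= 580:
        return "needs discussion"
    return "not viable at this time"

s = credit_score(income=55000, debts=20000, defaults=1)
print(banker_view(s))
```

Nothing in the banker's phrasing reveals whether the outcome was a human judgment or a threshold applied by the machine, which is exactly the evidentiary problem raised above.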
But fear not, paragraph 2 gives us “is necessary for entering into, or performance of, a contract between the data subject and a data controller;”, like applying for a bank loan, for example. So when is it an issue? When you are being profiled for a job? When exactly can it be proven that this was done to you? And at what point will we see all companies reverting to the Apple approach? You no longer get a rejection, no! You merely are not the best fit at the present time.
Paragraph 2c of that article is even funnier. When I see the exception “is based on the data subject’s explicit consent“, I picture the conversation: ‘We cannot offer you the job until you have passed certain requirements that force us to make a few checks; to proceed with the job application, you will have to give your explicit consent. Are you willing to do that at this time?’ When it is about a job, how many people will say no? I reckon the one extreme case is Dopey the dwarf not explicitly consenting to drug testing, for all the imaginable reasons.
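For what that explicit-consent exception would at least require in practice, here is a minimal consent-register sketch; the class and method names are my own invention. The GDPR's demand that withdrawing consent be as easy as giving it translates, in code terms, into two operations of exactly the same shape, recorded per purpose:

```python
from datetime import datetime, timezone

class ConsentRegister:
    """Toy record of explicit consent, kept per (subject, purpose) pair."""

    def __init__(self):
        self._records = {}  # (subject, purpose) -> (granted, timestamp)

    def give(self, subject: str, purpose: str) -> None:
        self._records[(subject, purpose)] = (True, datetime.now(timezone.utc))

    def withdraw(self, subject: str, purpose: str) -> None:
        # Deliberately the same one-call shape as give(): no extra hoops.
        self._records[(subject, purpose)] = (False, datetime.now(timezone.utc))

    def has_consent(self, subject: str, purpose: str) -> bool:
        record = self._records.get((subject, purpose))
        return record is not None and record[0]

reg = ConsentRegister()
reg.give("applicant-42", "pre-employment-checks")
print(reg.has_consent("applicant-42", "pre-employment-checks"))  # True
reg.withdraw("applicant-42", "pre-employment-checks")
print(reg.has_consent("applicant-42", "pre-employment-checks"))  # False
```

Of course, the code says nothing about whether the consent was freely given, which is the whole problem with the job-application scenario above.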
And in all this, the NY Times is on my side, as we see “the regulation is intentionally ambiguous, representing a series of compromises. It promises to ease restrictions on data flows while allowing citizens to control their personal data, and to spur European economic growth while protecting the right to privacy. It skirts over possible differences between current and future technologies by using broad principles“. I do see a positive point: when this collapses (read: ‘falls over’ might be a better term), and as we see the EU having more and more issues trying to achieve global growth, the data restrictions could potentially set a level of discrimination between those inside and outside the EU, making it no longer an issue. What do you think happens when EU people get a massive boost of options under LinkedIn and this setting is not allowed on a global scale? How long until we see another channel that remains open and non-ambiguous? I do not know the answer; I am merely posing the question. I don’t think that the GDPR is a bad thing; I merely think that clarity should have been at the core of it all, and that is the part that is missing. In the end the NY Times gives us a golden setting with “we need more research that looks carefully at how personal data is collected and by whom, and how those people make decisions about data protection. Policymakers should use such studies as a basis for developing empirically grounded, practical rules“. That makes perfect sense, and in that we could see the start: there is every chance that we will see a GDPRv2 no later than early 2019, before 5G hits the ground. At that point the GDPR could end up being a charter that is globally accepted, which makes up for all the flaws we see, or the flaws we think we see, at present.
The final part we see in Fortune (at http://fortune.com/2018/05/25/ai-machine-learning-privacy-gdpr/). You see, even as we think we have cornered it with ‘AI Has a Big Privacy Problem and Europe’s New Data Protection Law Is About to Expose It‘, we need to take one step back: it is not about the AI, it is about machine learning, which is not the same thing. With machine learning it is about big data. So when we realise that “Big data challenges purpose limitation, data minimization and data retention–most people never get rid of it with big data,” said Edwards. “It challenges transparency and the notion of consent, since you can’t consent lawfully without knowing to what purposes you’re consenting… Algorithmic transparency means you can see how the decision is reached, but you can’t with [machine-learning] systems because it’s not rule-based software“, we get the first whiff of “When they collect personal data, companies have to say what it will be used for, and not use it for anything else“. The criminal will not allow us to keep their personal data, so the system cannot act to create a profile to trap the fraud-driven individual, as there is no data to learn from when fraud is being committed; a real win for organised crime, even if I say so myself. In addition, the statement “If personal data is used to make automated decisions about people, companies must be able to explain the logic behind the decision-making process“ comes close to a near impossibility. In an age where AI development relies on machine learning to get there, the EU just pushed itself out of the race, as it will not have any data to progress with. How is that for a Monday morning wakeup call?
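The explainability gap Edwards describes can be shown with a toy comparison; the fraud rule, the synthetic data and the model are all invented for illustration. A rule-based check can state its reason in plain language, while even a trivially small learned model only hands back numbers:

```python
def rule_based(amount: float, country_match: bool):
    """Rule-based fraud check: the decision comes with a human-readable reason."""
    if amount > 10000 and not country_match:
        return True, "amount above 10000 from a non-matching country"
    return False, "no rule triggered"

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Tiny perceptron trained on synthetic data; it 'explains' nothing."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Synthetic "transactions": [scaled amount, country-mismatch flag] -> fraud?
X = [(0.1, 0), (0.2, 0), (1.5, 1), (2.0, 1)]
y = [0, 0, 1, 1]
w, b = train_perceptron(X, y)

print(rule_based(12000, False)[1])  # a reason a human can read
print(w, b)                         # learned weights: just numbers, no reason
```

A perceptron is still the easy case; with a deep model or gradient-boosted ensemble, the “logic behind the decision-making process” is scattered across thousands of parameters, which is why the Fortune piece calls the explanation requirement such a problem.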