The tables are starting to turn

This is a setting I always saw coming. It wasn’t magic or predestination, it was simple presumption. Presumption is speculation based on evidence, on facts. The BBC has published a near-perfect article (at https://www.bbc.co.uk/news/technology-67986611) titled ‘What happens when you think AI is lying about you?’ There are several brilliant sides to it, so it is best to read it for yourself. But I will use a few parts of it, because there is a larger playing field in consideration. The first thing to realise is that AI does not exist, not yet.

As such, when we see “Illegal content… means that the content must amount to a criminal offence, so it doesn’t cover civil wrongs like defamation. A person would have to follow civil procedures to take action,” the upshot is that the journalist would essentially need a lawyer. There are a handful of ongoing legal cases around the world, but no precedent as yet.

This is actually a much larger setting than people realise. You see, “AI algorithms are only as objective as the data they are trained on, and if that data is biased or incomplete, the algorithm will reflect those biases”. Yet the larger truth is that AI does not exist; it is Machine Learning at best, and as such it took a programmer, and a programmer implies corporate liability. That is what corporations fear, and that is why everything is kept as muddled as possible. I reckon that Google, Microsoft and all the others making AI claims are afraid. Consider: “The second told me I was in “unchartered territory” in England and Wales. She confirmed that what had happened to me could be considered defamation, because I was identifiable and the list had been published. But she also said the onus would be on me to prove the content was harmful. I’d have to demonstrate that being a journalist accused of spreading misinformation was bad news for me.” I believe it is a little less simple than that. An algorithm implies programming, and as such the victim has a right to demand that the algorithm be put before a court for scrutiny. The lines that resulted in defamation should be open to scrutiny, and that is what big tech fears at present, because AI does not exist. It is all based on collected data, and that data should be verifiable by the victim’s legal team, and that stops everything for the revenue-hungry corporations.

In addition, I would like to add another article, also by the BBC (at https://www.bbc.co.uk/news/technology-68025677), called ‘DPD error caused chatbot to swear at customer’. It clearly implies that a programmer was involved. If language skills involve swearing, who put the swear words there? When did your youngest one start to swear? They all do at some point. So what triggered this? Now consider that machine learning requires data, so where is that swear data coming from? Who inclined or instituted it to be used? So when you see “An error occurred after a system update yesterday. The AI element was immediately disabled and is currently being updated.” Before the change could be made, however, word of the mix-up spread across social media after being spotted by a customer. One particular post was viewed 800,000 times in 24 hours, as people gleefully shared the latest botched attempt by a company to incorporate AI into its business.” Consider that AI does not exist, consider that swear words are somehow part of that library, then consider that a programmer made a booboo (that is always allowed to happen) and that they are ‘updating’ this. A system is being updated to use a word library.

Now consider the two separate events as one and see how much danger the revenue-hungry corporations have placed themselves in. When you go by ‘trust but verify’ we can make all kinds of assumptions, but data sits at the centre of that core, with two circles forming a Venn diagram: one circle is data, the other is programming. Now watch how big tech worries, because when this goes wrong, it goes wrong in a big way, and they would be accountable for billions in payouts. It will not be a small amount and it will be almost everywhere. The case of the defamed journalist is one, and in this day and age not the smallest setting. The second is that these systems will address customers. Some will take offence, and some will take these companies to court.
So how much money did they think they could save with these systems? All to save on a dozen employees? It is a setting that will decide the fate of a lot of companies, and that is what some fear, until the media and several other dodos start realising that AI doesn’t yet exist. At that point the court cases will explode. It will be about a firm, its programmer and the wrong implementation of data. I reckon that within 2-3 years there will be an explosion of defamation cases all over the world. Places relying on Common Law will probably get more of them, and sooner, than Civil Law nations, but both will face a harsh reality. It is all gravy whilst the revenue-hungry sales people are involved; when the court cases come shining through, those firms will have to face harsh internal actions. That is speculation on my side, but based on the data I see at present it seems a clear case of precise presumption, which is what the BBC is in part showing us, no matter how unready the courts are. In torts there are cases, and this is a setting staged on programmers and data; there is no mystery there, and that is the cost those hiding behind AI are facing. It is merely my point of view, but I feel that I am closer to the truth than many others evangelising whatever they call AI.

Enjoy the weekend.
