I was a little baffled today. The article that I saw on Al Jazeera (at https://www.aljazeera.com/economy/2023/2/8/google-shares-tank-8-as-ai-chatbot-bard-flubs-answer-in-ad) had me hooked. The headline read ‘Google shares tank 8% as AI chatbot Bard flubs answer in ad’. So I started reading and saw “Shares of Google’s parent company lost more than $100bn after its Bard chatbot advertisement showed inaccurate information”. Now there are a few issues here, and one of them I have mentioned before, but for the people of massively less intelligence, let’s go over it again.
AI does not exist
Yes, it sounds funny but that is the short and sweet of it. AI does not exist. There is machine learning and there is deeper machine learning, and these two are AWESOME, but they are merely an aspect of an actual AI. We have the theory of one element, the Ypsilon particle, proposed by a Dutch physicist. You see, we are still in the binary age, and when the Ypsilon particle is applied to computer science it all changes. We are users of binary technology: zero and one, No and Yes, False and True and so on. The Ypsilon particle allows for a new technology. It will allow for No, Yes, Both and Neither. That is a very different kind of chocolate, my friends. The second part we need and are missing for now is shallow circuits. IBM has that technology and, as far as I know, they are the only ones with their quantum computer. These two elements would allow for an ACTUAL AI to become a reality.
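As an aside, the No, Yes, Both and Neither scheme is at least expressible today: it resembles Belnap's four-valued logic, which an ordinary binary machine can already simulate by using two bits per value. A minimal Python sketch (the names `ENC`, `v4_and` and so on are my own, purely illustrative, and this is of course simulation on binary hardware, not the new technology itself):

```python
# Belnap-style four-valued logic: each value is encoded as a pair of
# binary flags (told_true, told_false), so "both" and "neither" fit
# alongside the usual yes/no on ordinary binary hardware.
ENC = {
    "yes":     (True,  False),
    "no":      (False, True),
    "both":    (True,  True),
    "neither": (False, False),
}
DEC = {bits: name for name, bits in ENC.items()}

def v4_not(a):
    # Negation swaps the evidence-for and evidence-against flags.
    t, f = ENC[a]
    return DEC[(f, t)]

def v4_and(a, b):
    # "And" needs truth from both sides, but falsity from either side.
    (t1, f1), (t2, f2) = ENC[a], ENC[b]
    return DEC[(t1 and t2, f1 or f2)]

def v4_or(a, b):
    # "Or" needs truth from either side, but falsity from both sides.
    (t1, f1), (t2, f2) = ENC[a], ENC[b]
    return DEC[(t1 or t2, f1 and f2)]
```

Note that `v4_not("both")` stays `"both"` and `v4_and("both", "neither")` comes out as `"no"`, behaviour no two-valued logic can express.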
I found an image once that might give a better view; the image below is a collection of elements that an AI needs to have. Do you think that this is the case? Now consider that the Ypsilon particle is not a reality yet and quantum computers are only in their infancy at present.
Then we get to the next part. Here we see “The tech giant posted a short GIF video of Bard in action via Twitter, describing the chatbot as a “launchpad for curiosity” that would help simplify complex topics, but it delivered an inaccurate answer that was spotted just hours before the launch event for Bard in Paris.” This is a different kind of candy. Before we get to any event we test, and we test again and again, and Google is no different. Google is not stupid, so what gives? Then we get the mother of all events: “Google’s event came one day after Microsoft unveiled plans to integrate its rival AI chatbot ChatGPT into its Bing search engine and other products in a major challenge to Google, which for years has outpaced Microsoft in search and browser technology”. Well, apart from the small part that I intensely dislike Microsoft, these AI claims rest on massive amounts of data and Bing doesn’t have that; it lacks data, and in some events it was merely copying other people’s data, which I dislike even further. To be honest, even if Bing came with a blowjob by either Laura Vandervoort or Olivia Wilde, no way will I touch Bing. Beside that point, I do not trust Microsoft; no amount of ‘additions’ will rectify that. It sounds a bit personal, but Microsoft is done for, and their choice of ChatGPT is on them, but that does not mean I will trust them. Oh, and the final part: there is no AI!
But it is about the error: what on earth was Google doing without thoroughly testing something? How did this get to the advertisement stage? At present machine learning requires massive amounts of data, and Google has it; Microsoft does not, as far as I know, so the knee-jerk reaction is weird to say the least. So when we read “Bard is given the prompt, “What new discoveries from the James Webb Space Telescope (JWST) can I tell my nine-year-old about?” Bard responds with a number of answers, including one suggesting the JWST was used to take the very first pictures of a planet outside the Earth’s solar system, or exoplanets. This is inaccurate, as the first pictures of exoplanets were taken by the European Southern Observatory’s Very Large Telescope (VLT) in 2004, as confirmed by NASA”, this is a data error; this is the consequence of people handing over flawed data to a machine (the flaw is in the data, not the machine). That is the flaw, and that should have been tested for in a stage that lasts months. I can only guess how it happened here, but I can give you a nice example.
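The data-error point can be shown in a few lines: a system that answers purely from its data will faithfully repeat whatever flaw the data carries. A hypothetical sketch (the fact table and `answer` function are mine, purely illustrative, nothing to do with how Bard actually works):

```python
# A toy fact store standing in for a trained system's knowledge.
# The first entry is deliberately wrong, mirroring the Bard slip:
# the first exoplanet image came from ESO's VLT in 2004, not the JWST.
FACTS = {
    "first exoplanet image": "JWST",   # flawed data, kept on purpose
    "deepest infrared view of the universe": "JWST",
}

def answer(question):
    # The machine is not broken; it simply returns what it was given.
    return FACTS.get(question, "I don't know")
```

Ask it `answer("first exoplanet image")` and it dutifully returns the flawed `"JWST"`; months of testing against trusted sources is the only way to catch entries like that before an advertisement does.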
In 1992 I went for a job interview. During the interview I got a question on deviation; what I did not know was that statistics also has deviation. I came from a shipping world, and in the Netherlands declination is called deviation. So I responded ‘deviation is the difference between true and magnetic north’, which for me was correct, yet the interviewer saw my answer as wrong. But the interviewer had the ability to extrapolate from my answer (as well as my resume) that I came from a shipping environment. I got that job in the end and I stayed there for well over 10 years.
Anyway, the article has me baffled to some degree. Google gets better and more accurate all the time, so this setting makes no sense to me. And as I read “A Google spokesperson told Reuters, “This highlights the importance of a rigorous testing process, something that we’re kicking off this week with our Trusted Tester programme.”” Yes, but it tends to be important to have rigorous testing processes in place BEFORE you have a public release. It tends to make matters better, and in this case you do not lose $100,000,000,000, which is 2,000 times the amount I wanted for my solution to sell well over 50,000,000 Stadia consoles, a solution no one had thought of and which is now solely an option for Amazon. Go figure, and Google cancelled the Stadia, go figure again.
The third bungle I expect to see in the near future is that they fired the wrong 12,000 people, but there is time for that news as well. Yes, Wednesday is a weird day this time around, but not to worry. I get to keep my sanity playing Hogwarts Legacy, which is awesome in many ways. And that I did not have to test; it was seemingly properly tested before I got the game (I have not spotted any bugs after well over 20 hours of gameplay, apart from perhaps one glitch).