Data has dangers, and I think more by accident than by intent CBC exposed one (at https://www.cbc.ca/news/canada/british-columbia/whistle-buoy-brewing-ai-beer-robo-1.6755943) where we were given ‘This Vancouver Island brewery hopped onto ChatGPT for marketing material. Then it asked for a beer recipe’. You see, there is a massive issue, it has been around from the beginning of the event, but AI does not exist, it really does not. What marketing did to make easy money was take a term and transform it into something bankable. They were willing to betray Alan Turing at the drop of a hat, why not? The man was dead anyway and cash is king.
So they took advanced machine learning and data repositories, added a few items, and called it AI. Now we have a new show. And as CBC gives us, “let’s see what happens if we ask it to give us a beer recipe,” he told CBC’s Rohit Joseph. They asked for “a fluffy, tropical hazy pale ale” and we see the recipe below.
Now I have two simple questions. The first is: is this a registered recipe, making this IP theft, or is this a random guess from established parameters, which is possibly even worse? Random assignment of elements is dangerous on a few levels and it is not up to the program to do this, but it is here, so here you have it, and it is a dangerous step to take. But I am more taken with option one, the program had THAT data somewhere. So in a setting where we acquired classified data through clandestine means and the program allowed for this, that is a direct danger. So what happens when that program gets to assess classified data? The skip between machine learning, deeper machine learning, data assessment and AI is a skip that is a lot wider than the Grand Canyon.
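To make that first question a little more concrete, here is a minimal sketch (in Python, entirely my own illustration, nothing the brewery or CBC ran) of how you could test whether a generated recipe is a near-copy of something already on record: compare its ingredient set against a corpus of known recipes. The recipe data and the 0.8 threshold are made up for illustration.

```python
# Hypothetical check: is a generated recipe a near-duplicate of a known one?
# All recipe data below is invented purely to illustrate the idea.

def jaccard(a: set, b: set) -> float:
    """Overlap between two ingredient sets, 0.0 (nothing shared) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

generated = {"pale malt", "flaked oats", "citra hops", "mosaic hops", "london ale iii yeast"}

known_recipes = {  # hypothetical registered recipes
    "Hazy Daze IPA": {"pale malt", "flaked oats", "citra hops", "mosaic hops", "london ale iii yeast"},
    "Classic Pilsner": {"pilsner malt", "saaz hops", "lager yeast"},
}

for name, ingredients in known_recipes.items():
    score = jaccard(generated, ingredients)
    if score > 0.8:  # arbitrary threshold for 'suspiciously similar'
        print(f"Possible copy of '{name}' (overlap {score:.2f})")
```

If the overlap is that high, the recipe came from somewhere; if it matches nothing, it was assembled from parameters, and that carries its own risks.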
But there is another side, we see this with “CBC tech columnist and digital media expert Mohit Rajhans says while some people are hesitant about programs like ChatGPT, AI is already here, and it’s all around us. Health-care, finance, transportation and energy are just a few of the sectors using the technology in its programs”. People are reacting to AI as if it existed and it does not, and more importantly, when ACTUAL AI is introduced, how will the people manage it then? And the added legal implications aren’t even considered at present. So what happens when I improve the stage of a patent and make it an innovative patent? The beer example implies that this is possible, and when patents are hijacked by innovative patents, what kind of a mess will we face then? It does not matter whether it is Microsoft with their ChatGPT or Google with their Bard, or was that the Bard’s Tale? There is a larger stage that is about to hit the shelves and we, the law and others are not ready for what some of the big tech firms are about to unleash on us. And no one is asking the real questions, because there is no real documented stage of what constitutes a real AI and what rules are imposed on that. I reckon Alan Turing would be ashamed of what scientists are letting happen at this point. But that is merely my view on the matter.
I was a little baffled today. The article that I saw in Al Jazeera (at https://www.aljazeera.com/economy/2023/2/8/google-shares-tank-8-as-ai-chatbot-bard-flubs-answer-in-ad) had me going. I saw the headline ‘Google shares tank 8% as AI chatbot Bard flubs answer in ad’. So I got to read, and I saw “Shares of Google’s parent company lost more than $100bn after its Bard chatbot advertisement showed inaccurate information”. Now there are a few issues here, and one of them I mentioned before, but for the people of massively less intelligence, let’s go over it again.
AI does not exist. Yes, it sounds funny, but that is the short and sweet of it. AI does not exist. There is machine learning and there is deeper machine learning and these two are AWESOME, but they are merely an aspect of an actual AI. We have the theory of one element, discovered by a Dutch physicist: the Ypsilon particle. You see, we are still in the binary age and when the Ypsilon particle is applied to computer science it all changes. You see, we are users of binary technology, zero and one. No and Yes, False and True and so on. The Ypsilon particle allows for a new technology. It will allow for No, Yes, Both and Neither. That is a very different kind of chocolate, my friends. The second part we need and are missing for now is shallow circuits. IBM has that technology and as far as I know they are the only ones, with their quantum computer. These two elements allow for an ACTUAL AI to become a reality.
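To show what that fourth-state idea looks like next to plain binary, here is a small sketch, my own illustration only, of a four-valued answer (No, Yes, Both, Neither) along the lines of Belnap-style four-valued logic. It has nothing to do with any particle or any existing quantum hardware; it merely shows how different the bookkeeping becomes once ‘Both’ and ‘Neither’ are legal answers.

```python
# A toy four-valued answer type: No, Yes, Both, Neither.
# Illustration only; not tied to any real hardware or physics.
from enum import Enum

class V4(Enum):
    NO = "no"
    YES = "yes"
    BOTH = "both"        # supported and contradicted at the same time
    NEITHER = "neither"  # no information either way

def combine(a: V4, b: V4) -> V4:
    """Merge two answers to the same question with a simple 'consensus' rule."""
    if a == b:
        return a
    if V4.NEITHER in (a, b):
        return a if b == V4.NEITHER else b   # 'no information' defers to the other answer
    return V4.BOTH                           # conflicting answers -> Both

print(combine(V4.YES, V4.NEITHER))  # V4.YES
print(combine(V4.YES, V4.NO))       # V4.BOTH
```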
I found an image once that might give a better view; the image below is a collection of elements that an AI needs to have, do you think that this is the case? Now consider that the Ypsilon particle is not a reality yet and quantum computers are only in their infancy at present.
Then we get to the next part. Here we see “The tech giant posted a short GIF video of Bard in action via Twitter, describing the chatbot as a “launchpad for curiosity” that would help simplify complex topics, but it delivered an inaccurate answer that was spotted just hours before the launch event for Bard in Paris.” This is a different kind of candy. Before we get to any event, we test and we test again and again, and Google is no different. Google is not stupid, so what gives? Then we get the mother of all events: “Google’s event came one day after Microsoft unveiled plans to integrate its rival AI chatbot ChatGPT into its Bing search engine and other products in a major challenge to Google, which for years has outpaced Microsoft in search and browser technology”. Well, apart from the small part that I intensely dislike Microsoft, these AI claims are set on massive amounts of data and Bing doesn’t have that. It lacks data and in some events it was merely copying other people’s data, which I dislike even further, and to be honest, even if Bing comes with a blowjob by either Laura Vandervoort or Olivia Wilde, no way will I touch Bing. And beside that point, I do not trust Microsoft; no amount of ‘additions’ will rectify that. It sounds a bit personal, but Microsoft is done for and for them to choose ChatGPT is on them, but it does not mean I will trust them. Oh, and the final part: there is no AI!
But it is about the error. What on earth was Google doing without thoroughly testing something? How did this get to some advertisement stage? At present machine learning requires massive amounts of data and Google has it, Microsoft does not as far as I know, so the knee-jerk reaction is weird to say the least. So when we read “Bard is given the prompt, “What new discoveries from the James Webb Space Telescope (JWST) can I tell my nine-year-old about?” Bard responds with a number of answers, including one suggesting the JWST was used to take the very first pictures of a planet outside the Earth’s solar system, or exoplanets. This is inaccurate, as the first pictures of exoplanets were taken by the European Southern Observatory’s Very Large Telescope (VLT) in 2004, as confirmed by NASA”, this is a data error, this is the consequence of people handing over data to a machine that is flawed (the data, not the machine). That is the flaw and that should have been tested for over a stage lasting months. I can only guess how it happened here, but I can give you a nice example.
In 1992 I went for a job interview. During the interview I got a question on deviation; what I did not know was that statistics had deviation. I came from a shipping world and in the Netherlands declination is called deviation. So I responded ‘deviation is the difference between true and magnetic north’, which for me was correct, but the interviewer saw my answer as wrong. Yet the interviewer had the ability to extrapolate from my answer (as well as my resume) that I came from a shipping environment. I got that job in the end and I stayed there for well over 10 years.
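For those who only know one of the two meanings: the deviation the interviewer was after is the statistical one, the spread of values around their mean. A quick sketch using Python’s standard library (the sample numbers are made up):

```python
# The statistical 'deviation' the interviewer meant, versus the nautical one.
import statistics

samples = [4.8, 5.1, 5.0, 4.9, 5.2]
print(statistics.mean(samples))    # 5.0
print(statistics.stdev(samples))   # sample standard deviation: spread around the mean

# The nautical 'deviation' described above is a completely different quantity:
# the difference between true and magnetic north.
```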
Anyway, the article has me baffled to some degree. Google gets better and more accurate all of the time, so this setting makes no sense to me. And as I read “A Google spokesperson told Reuters, “This highlights the importance of a rigorous testing process, something that we’re kicking off this week with our Trusted Tester programme.”” Yes, but it tends to be important to have rigorous testing processes in place BEFORE you have a public release. It tends to make matters better and in this case you do not lose $100,000,000,000, which is 2,000 times the amount I wanted for my solution to sell well over 50,000,000 Stadia consoles, a solution no one had thought of, which is now solely an option for Amazon, go figure, and Google cancelled the Stadia, go figure again.
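What a ‘rigorous testing process’ could have looked like here is not complicated. Below is a minimal sketch, my own invention and certainly not Google’s pipeline, of a pre-publication check that compares a generated factual claim against a small curated reference table; the reference entries and the matching rule are purely illustrative.

```python
# Hypothetical pre-publication fact check for generated claims.
# The reference table and matching rule are invented for illustration only.

reference_facts = {
    "first image of an exoplanet": "European Southern Observatory's Very Large Telescope (VLT), 2004",
}

def check_claim(topic: str, claimed_answer: str) -> str:
    """Flag a claim whose topic is in the reference table but whose answer disagrees."""
    expected = reference_facts.get(topic)
    if expected is None:
        return "UNKNOWN: no reference entry, needs a human reviewer"
    if claimed_answer.lower() in expected.lower():
        return "OK: matches the reference"
    return f"MISMATCH: reference says '{expected}'"

# The claim Bard produced, paraphrased from the article:
print(check_claim("first image of an exoplanet", "James Webb Space Telescope"))
# -> MISMATCH: reference says 'European Southern Observatory's Very Large Telescope (VLT), 2004'
```

A check this crude would still have caught the flub hours before it went into an advert, which is rather the point.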
The third bungle I expect to see in the near future is that they fired the wrong 12,000 people, but there is time for that news as well. Yes, Wednesday is a weird day this time around, but not to worry. I get to keep my sanity playing Hogwarts Legacy, which is awesome in many ways. And that I did not have to test; it was seemingly properly tested before I got the game (I have not spotted any bugs after well over 20 hours of gameplay, optionally merely one glitch).
There is a stage that is coming. I have stated it before and I am stating it again: I believe that the end of Microsoft is near. I myself am banking on 2026. They did this to themselves, it is all on them. They pushed for borders they had no business being on and they got beat three times over. Yes, I saw the news, they are buying more (in this case ChatGPT) and they will pay billions over several years, but that is not what is killing them (it is not aiding them either). The stupid people (aka their board of directors) don’t seem to learn and it is about to end the existence of Microsoft, and my personal view is ‘And so it should!’

You see, I have seen this before. A place called Infotheek in the 90’s, growth through acquisition. It did not end well for those wannabes. And that was in the 90’s when there was no real competition. It was the start of Asus, it was the start of a lot of things. China was nowhere near where it is now in IT; today it is a powerhouse. There are a few powerhouses and a lot of them are not American. So as Microsoft spends a billion here and there, it is now starting to add up to real money. They are in the process of firing 10,000 people, so there will be a brain drain and players like Tencent are waiting for that to happen. And the added parts are merely clogging everything and bringing instability. Before the end of the year we get a speech on how ChatGPT will be everywhere and the massive bugs and holes in security will merely double or more.

So after they got slapped in the tablet market with their Surface joke (by Apple with the iPad), after they got slapped in the data market with their Azure (by Amazon with their AWS) and after they got slapped in the console market with their Xbox Series X (by Sony with their PS5), they are about to get beat on over 20% of their cornerstone market as Adobe gets to move in soon and show Microsoft and their PowerPoint how inferior they have become (which I presume will happen after Meta launches their new Meta). Microsoft will have been beaten four times over and I am now trying to find a way to get another idea to the Amazon Luna people.
This all started today as I remembered something I told a blogger and that turned into an idea, and here I am committing this to a setting that is for the eyes of Amazon Luna only. No prying Microsoft eyes. I have been searching mind and systems and I cannot find anywhere where this has been done before, a novel idea, and in gaming these are rare, very rare. When adding the parts that I did write about before, I get a new stage, one that shows Microsoft the folly of buying billions’ worth of game designers, and none of them have what I am about to hand Microsoft. If I have to lend a little hand to make 2026 the year of doom for Microsoft, I will. I am simply that kind of a guy. They did this all to themselves. I was a simple guy, merely awaiting the next game, the next dose of fun, and Microsoft decided to buy Bethesda, which was their right. So there I was, designing and thinking through new ways to bring them down, and that was before I found the 50 million new accounts for the Amazon Luna (with the reservation that they can run Unreal Engine 5) and that idea grew a hell of a lot more. All stations that Microsoft could never buy; they needed committed people, committed people who can dream new solutions, not the ideas that get purchased. You see, I am certain that the existence of ChatGPT relied on a few people who are no longer there. That is no one’s fault, these things happen everywhere. Yet when you decide to push it into existing software and existing cloud solutions, the shortcomings will start showing ever so slowly. A little here and a little there, and they will overcome these issues, they really will, but they will leave a little hole in place and that is where others will find a way to have some fun. I expect that the issue with SolarWinds started in similar ways. In that instance hackers targeted SolarWinds by deploying malicious code into its Orion IT monitoring and management software. What are the chances that the Orion IT monitoring part had a similar issue? It is highly speculative, I will say that upfront, but am I right? Could I be right?
That is the question, and Microsoft has made a gamble and invested more and more billions in other solutions whilst they are firing 10,000 employees. At some point these issues start working in unison, making life especially hard for a lot of the remaining employees at Microsoft; time will tell. I have time, do they?