Tag Archives: ChatGPT

Out of the blue

That is what happened. I had a stream of ideas out of the blue. I do not know what fuelled it. Was it reading about the failures of Ubisoft? Was it another setting? My mind went racing and I went back to 1995 and Tia Carrere. In that year she was part of The Daedalus Encounter. It was a fun game and I had fun playing it. But then a thought came to me. That game in 2024 could open other doors. Doors opened through machine learning and deeper machine learning (AI does not yet exist). The track my mind took was interesting. You see, the movie world made rules for (what they call) AI. But that setting might not completely apply to games. 

Now consider the first stage of creating this kind of game using that technology complemented with Unreal Engine 5. We could make new versions of Rama and the Infocom games, but now not as text games. More like Zork Nemesis, with actors and actresses. Infocom created more than 20 games and those could now entice a much larger following. As the games develop, new game-creation technology would develop with them. The larger fun of this is that many more developers will get a handle on this form of game development. 

That brought me to the next level. In 1984 The Dallas Quest was developed. With it, Datasoft created “one of the best games out on the CBM 64” and it held sway over pretty much the entire gaming community, even those who didn’t follow Dallas (example: me). We now have the technology for streaming systems to hold that same sway over all who love this level of games. That wasn’t the only setting. You see, players like Netflix could optionally create a new level of games using these technologies. The setting of these new options could set in motion a new form of gaming.

Consider what was, and now take another direction: the creation of these kinds of games using TV series. Grimm, Babylon 5, Charmed, Buffy, Dollhouse and several other series that have been discontinued. Now consider the implementation of ChatGPT with a library for every character of a series. We get a new technology: a game where the player can be any character in that series, and the interactions shape the ‘episode’ of that game. That trend could be pushed even further. Consider another venue for these games. The Egyptian Musalsalat (see: A Social Construction of Reality) has strength all over the Arabic world. Take these elements and build the new template: an interactive game where the player decides the route of the episode. In The Dallas Quest we needed to make choices, like finding the football tickets in the lobby; if not, you got stuck in the game. Machine learning should be able to avoid players getting stuck, and the game can evolve even further. Consider the setting that Grimm has; millions of fans still love that series. Now they can continue their TV fling in this new direction. Consider the streaming solution, and consider that I gave the option of 200 million consoles with the directions before I came up with this. It could become a whole new dimension of gaming. 
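To make the character-library idea a little more concrete, here is a minimal sketch. Everything in it (the profile fields, the example characters, the build_prompt helper) is my own illustration, not any studio's actual design; the point is only that a per-character profile can keep a language model in character while it improvises an episode.

```python
# Hypothetical per-character 'library' for one series; fields are illustrative.
CHARACTER_LIBRARY = {
    "Nick": {"traits": ["detective", "Grimm bloodline"], "goal": "protect Portland"},
    "Monroe": {"traits": ["reformed Blutbad", "clock restorer"], "goal": "help Nick"},
}

def build_prompt(character: str, situation: str) -> str:
    """Assemble a prompt that keeps the model in character for this scene."""
    profile = CHARACTER_LIBRARY[character]
    return (
        f"You are {character}. Traits: {', '.join(profile['traits'])}. "
        f"Your goal: {profile['goal']}. "
        f"Current scene: {situation}. Improvise the next beat of the episode."
    )

prompt = build_prompt("Monroe", "a Wesen attack at the spice shop")
```

The interactions of the player would feed back in as the next `situation`, which is what lets the ‘episode’ take a different route every time.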

Oh, and whilst you contemplate how Ubisoft blew game after game and delay after delay, I came up with this new idea (within two hours). Don’t get me wrong, this will be a complex undertaking, and the idea to use the Infocom games and The Dallas Quest first enables this technology to grow and adapt to some sandbox approach. I believe this could entice millions more to the gaming population and it has options over time. There is even the idea that former adventures could be evolved into new versions on a new template, in a new shape, with new possibilities. What a difference a few hours make.

Have a great day.


Filed under Gaming, IT, movies

Poised to deliver critique

That is my stance at present. It might be a wrong position to have, but it comes from a setting of several events that come together at this focal point. We all have it; we are all destined for a stage of negativity through speculation or presumption. It is within all of us, and my article 20 hours ago on Microsoft woke something up within me. So I will take you on a slightly bumpy ride.

The first step is seen through the BBC (at https://www.bbc.com/worklife/article/20240905-microsoft-ai-interview-bbc-executive-lounge) where we get ‘Microsoft is turning to AI to make its workplace more inclusive’ and we are given “It added an AI powered chatbot into its Bing search engine, which placed it among the first legacy tech companies to fold AI into its flagship products, but almost as soon as people started using it, things went sideways.” With the added “Soon, users began sharing screenshots that appeared to show the tool using racial slurs and announcing plans for world domination. Microsoft quickly announced a fix, limiting the AI’s responses and capabilities.” Here we see the collective thoughts and presumptions I had all along. AI does not (yet) exist. How do you live with “Microsoft quickly announced a fix”? We can speculate whether the data was warped or simply not defined correctly, or whether it is a more simple setting of programmer error. And when an AI is that incorrect, does it have any reliability? Consider the old data view we had in the early 90’s: “Garbage In, Garbage Out”. Then we are offered “Microsoft says AI can be a tool to promote equity and representation – with the right safeguards. One solution it’s putting forward to help address the issue of bias in AI is increasing diversity and inclusion of the teams building the technology itself”. As such, consider “promote equity and representation – with the right safeguards”. Is that the use of AI? Or is it the option of deeper machine learning using an LLM model? An AI with safeguards? Promote equity and representation? If the data is there, it might find reliable triggers if it knows where or what to look for. But the model needs to be taught, and that is where data verification comes in: verified data leads to a validated model. As such, to promote equity and representation the data needs to understand the two settings. 
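The verify-before-validate point fits in a few lines of code. A minimal sketch, with illustrative field names of my own choosing; the rule is simply that a record failing verification never reaches the model, which is the only real defence against Garbage In, Garbage Out.

```python
# Verify each record before it can ever be part of a validated model.
def verify_record(record: dict) -> bool:
    """Reject incomplete or empty rows; only verified data may be trained on."""
    required = {"source", "date", "text"}
    if not required.issubset(record):      # missing fields: unverifiable
        return False
    return bool(record["text"].strip())    # empty payload: garbage

raw = [
    {"source": "survey", "date": "2024-09-05", "text": "a valid entry"},
    {"source": "scrape", "text": "missing its date"},        # fails verification
    {"source": "scrape", "date": "2024-09-06", "text": ""},  # empty payload
]
verified = [r for r in raw if verify_record(r)]
```

Only the records in `verified` would move on to validation; the other two are exactly the kind of garbage that otherwise ends up inside the model.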
Now we get the harder part: “The term “equity” refers to fairness and justice and is distinguished from equality: Whereas equality means providing the same to all, equity means recognising that we do not all start from the same place and must acknowledge and make adjustments to imbalances.” Now see the term equity being used in all kinds of places; in real estate it means something quite different. What are the chances people mix the two up? How can you validate data when the verification is bungled? It is this simple singular vision that Microsoft people seem to forget. It is mostly about the deadline, and that is where verification stuffs up. 

Satya Nadella is all about technology that understands us, and here we get the first problem. Consider “specifically large-language models such as ChatGPT – to be empathic, relevant and accurate, McIntyre says, they needs to be trained by a more diverse group of developers, engineers and researchers.” As I see it, without verification you have no validation; you merely get a bucket of data where everything is collected, and whatever comes out of it becomes an automated mess, hence my objection to it. So as we are given “Microsoft believes that AI can support diversity and inclusion (D&I) if these ideals are built into AI models in the first place”, we need to understand that the data doesn’t support it yet, and to do this all data needs to be recollected and properly verified before we can even consider validating it. 

Then we get article 2, which I talked about a month ago: the Wired article (at https://www.wired.com/story/microsoft-copilot-phishing-data-extraction/) on the use of deeper machine learning, where we are given ‘Microsoft’s AI Can Be Turned Into an Automated Phishing Machine’, yes a real brain bungle. Microsoft has a tool and criminals use it to get through cloud accounts. How is that helping anyone? The fact that Microsoft did not see this kink in their trains of thought, even as we are given “Michael Bargury is demonstrating five proof-of-concept ways that Copilot, which runs on its Microsoft 365 apps, such as Word, can be manipulated by malicious attackers”, shows there was no simple approach of stopping the system from collecting and adhering to criminal minds. Whilst Windows Central gives us ‘A former security architect demonstrates 15 different ways to break Copilot: “Microsoft is trying, but if we are honest here, we don’t know how to build secure AI applications”’, beside the horror statement “Microsoft is trying” we get the rather annoying setting of “we don’t know how to build secure AI applications”. And this isn’t some student. Michael Bargury is an industry expert in cybersecurity who seems to be focused on cloud security. So what ‘expertise’ does Microsoft have to offer? People who were there 3 weeks ago were shown 15 ways to break Copilot, and it is all over their 365 applications. At this stage Microsoft wants to push out a broken, if not unstable, environment where your data resides. Is there a larger need to immediately switch to AWS? 

Then we get a two-parter. In the first part we see (at https://www.crn.com.au/news/salesforces-benioff-says-microsoft-ai-has-disappointed-so-many-customers-611296) CRN giving us the view of Marc Benioff from Salesforce: ‘Microsoft AI ‘has disappointed so many customers’’, and that is not all. We are given ““Last quarter alone, we saw a customer increase of over 60 per cent, and daily users have more than doubled – a clear indicator of Copilot’s value in the market,” Spataro said.” Words from Jared Spataro, Microsoft’s corporate vice president. All about sales and revenue. So where is the security at? Where are the fixes at? Then we are given ““When I talk to chief information officers directly and if you look at recent third-party data, organisations are betting on Microsoft for their AI transformation.” Microsoft has more than 400,000 partners worldwide, according to the vendor.” And here we have a new part. When you need to appease 400,000 partners, things go wrong; they always do. How is anyone’s guess, but whilst Microsoft is all focussed on the letter of the law and their revenue, it is my speculated view that corners are cut on verification and validation (a little less on the second factor). The second part comes from CX Today (at https://www.cxtoday.com/speech-analytics/microsoft-fires-back-rubbishes-benioffs-copilot-criticism/) where we are given ‘Microsoft Fires Back, Rubbishes Benioff’s Copilot Criticism’ with the text “Jared Spataro, Microsoft’s Corporate Vice President for AI at Work, rebutted the Salesforce CEO’s comments, claiming that the company had been receiving favourable feedback from its Copilot customers.” At this point I want to add the thought: “How was that data filtered?” You see, the article also gives us “While Benioff can hardly be viewed as an objective voice, Inc. Magazine recently gave the solution a D – rating, claiming that it is “not generating significant revenue” for its customers – suggesting that the CEO may have a point”, as well as “despite Microsoft’s protestations, there have been rumblings of dissatisfaction from Copilot users”. When the dust settles, I wonder how Microsoft will fare.

You see, I state that AI does not (yet) exist. The truth is that generative AI can have a place. And when AI is actually here, not many will be able to use it. The hardware is too expensive and the systems will need close to months of testing. It would take years for simple binary systems to catch up. As such these LLM deeper machine learning systems will have a place, but I have seen tech companies fire up sales people and take the cream of it, whilst the customers need a new set of spectacles to see the real deal. The premise that I see is that these people merely look at the groups they want, but the data tends to be not so filtered, and as such garbage comes into these systems. And that is where we end up with unverified and unvalidated data points.

To give you an artistic view, consider a one point perspective, defined as “a drawing method that shows how things appear to get smaller as they get further away, converging towards a single “vanishing point” on the horizon line”. That drawing might have 250,000 points. Now consider that the data is unvalidated and the system picks up 5,000 extra floating points. What happens when those points invade the model? What is left of your artwork? Now consider that data sets like this have 15,000,000 data points and every data point has 1,000,000 parameters. See the mess you end up with? Now go look into any system and see how Microsoft verifies their data. I could not find any white papers on this. A simple customer care point of view; I have had that for decades and Jared Spataro, as I see it, seemingly does not. He did not grace his speech with the essential need of data verification before validation. That is a simple point of view, and it is my view that Microsoft will come up short again and again. So, as I (simplistically) see it: is, by any chance, Jared Spataro anything more than a user missing Microsoft value at present?
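For those who want the numbers behind that perspective example, the arithmetic is simple. The figures are the same illustrative ones used above, nothing more:

```python
# The perspective drawing: 250,000 valid points plus 5,000 unvalidated strays.
points = 250_000
stray = 5_000
contamination = stray / (points + stray)   # share of the final model that is noise

# The larger set: 15,000,000 data points, each carrying 1,000,000 parameters.
dataset = 15_000_000
parameters_per_point = 1_000_000
touched = dataset * parameters_per_point   # parameter slots a bad pipeline can poison
```

Roughly 2% of the drawing is then noise, and the bad pipeline has fifteen trillion parameter slots it can poison; that is the scale of the mess when verification is skipped.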

Have a great day.


Filed under Finance, IT, Media, Science

Not changing sides

It was a setting I found myself in. You see, there is nothing wrong with bashing Microsoft. The question at times is how long until the bashing is no longer a civic duty, but personal pleasure. As such I started reading the article (at https://www.cbc.ca/news/business/new-york-times-openai-lawsuit-copyright-1.70697010) where we see ‘New York Times sues OpenAI, Microsoft for copyright infringement’, and it is there that we are given a few parts. The first that caught my eye was ““Defendants seek to free-ride on the Times’s massive investment in its journalism by using it to build substitutive products without permission or payment,” according to the complaint filed Wednesday in Manhattan Federal Court.” The reason I am (to some extent) siding with Microsoft on this is that a newspaper only has value until it is printed. At that point it becomes public domain. Now, the paper has a case when you consider the situation where someone is copying THEIR result for personal gain. Yet this is not the case here. They are teaching a machine learning model to create new work. Consider that this is not an easy part. First the machine needs to learn ALL the articles that a certain writer has written. So not all the articles of the New York Times, but separately the articles from every writer. Now we could (operative word) get to a setting where something alike is created on new properties, events that are the now. That is no longer a copy; that is an original created article in the style of a certain writer. 

As such, consider the delusional statement from the New York Times giving us “The Times is not seeking a specific amount of damages, but said it believes OpenAI and Microsoft have caused “billions of dollars” in damages by illegally copying and using its works.” Delusional for valuing itself at billions of dollars whilst its revenue was a lot less than a billion dollars. Then there is the other setting. Is learning from the public domain a crime? Even if it includes the articles of tomorrow, is it a crime then? You see, the law is not ready for machine learning algorithms. It isn’t even ready for the concept of machine learning at present. 

Now, this doesn’t apply to everything. Newspapers are the vocalisations of fact (or at least used to be). The issues of skating towards design patents are a whole other mess. 

As such OpenAI and Microsoft are facing an uphill battle, yet in the case of the New York Times, and perhaps the Washington Post and the Guardian, I am not so sure. You see, as I see it, it hangs on one simple setting: is a published newspaper to be regarded as public domain? The paper is owned, and as such these articles cannot be resold, but here is the grinding cog. They were never used as such. They fed a learning model to create new original work, and that is a setting newspapers were never ready for. None of these media laws will give coverage on that setting. This is probably why the NY Times is crying foul by the billions. 

The law in these settings is complex, but overall, as a learning model, I do not believe the NY Times has a case, and I could be wrong. My setting is that published articles become public domain to some degree. At worst OpenAI (Microsoft too) would need to own one copy of every newspaper used, but that is as far as I can go. 

The danger here is not merely that this is done; it is that the material is “often taken from the internet”, and this becomes an exercise in ‘trust but verify’. There is so much fake and edited material on the internet. One slip up and the machine learning routines fail. So we look at not merely the writer. We look at writer, publication, time of release, path of release, connected issues, connected articles; all these elements can hurt the machine learning algorithm. One slip up and it is back to the drawing board, teaching the system often from scratch.
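That ‘trust but verify’ step can be sketched as a simple provenance gate. The field names and the one-line rule are my own illustration, not any vendor's pipeline; the point is that every element just listed is checked before an article may enter the learning corpus.

```python
# Provenance fields the text lists: writer, publication, release time, path.
REQUIRED_PROVENANCE = ("writer", "publication", "released", "path")

def provenance_ok(article: dict) -> bool:
    """An article without full, non-empty provenance never enters the corpus."""
    return all(article.get(field) for field in REQUIRED_PROVENANCE)

corpus = [
    {"writer": "A. Reporter", "publication": "NYT", "released": "2023-12-27",
     "path": "print edition", "text": "..."},
    {"writer": "unknown", "publication": "", "released": "2023-12-27",
     "path": "forum repost", "text": "..."},   # edited repost: rejected
]
trusted = [a for a in corpus if provenance_ok(a)]
```

One missing or empty field and the article is out, which is exactly the slip-up that would otherwise send the whole model back to the drawing board.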

And all that is before we consider that editors also change stories and adjust for length; as such it is a slightly bigger mess than you consider from the start. To see that, we need to return to June this year, when we were given “The FTC is demanding documents from Open AI, ChatGPT’s creator, about data security and whether its chatbot generates false information.” If we consider the impact, we need to realise that the chatbot does not generate false information; it was handed wrong and false information from the start, and the model merely did what it was given. That is the danger: operators and programmers not properly vetting information.

Almost the end of the year, enjoy.


Filed under IT, Law, Media, Science

How stupid could stupid become?

Yup, that was the question, and it all started with an article by the CBC. I had to read it twice because I could not believe my eyes. But no, I had not read it wrong, and that is where the howling began. Let’s start at the beginning. It all started with ‘Want a job? You’ll have to convince our AI bot first’. The story (at https://www.cbc.ca/news/business/recruitment-ai-tools-risk-bias-hidden-workers-keywords-1.6718151) gives us “Ever carefully crafted a job application for a role you’re certain that you’re perfect for, only to never hear back? There’s a good chance no one ever saw your application — even if you took the internet’s advice to copy-paste all of the skills from the job description”. This gives us a problem on several factors, but the two I am focussing on are IT and recruiters. IT is the first. AI does not exist, not yet at least. What you see are all kinds of data driven tools, primarily set to machine learning and deeper machine learning. First off, these tools are awesome. In their proper setting they can reduce workloads and automate CERTAIN processes.

But these machines cannot build, they cannot construct and they cannot deconstruct. To see whether a resume and a position match, you need the second tier: the recruiter (or your own HR department). There are skills involved, and at times this skill is more of an art. Seeing how well a person fits the position is an art. You can test via a resume whether minimum skills are available. Yes, at times it takes a certain Excel level, it might take SQL skill levels or perhaps a good telephone voice. A good HR person (or recruiter) can see this. Machine learning will not ever get it right. It might get close. 

So whilst we laugh at these experts, the story is less nice; the dangers are decently severe. You see, this is one side of cost reduction, all whilst too many recruiters have no clue what they are doing; I have met a boatload of them. They will brush it off with “This is what the client wants”, but it is already too late; they were clueless from the start and it is getting worse. The article also gives us a nice handle: “They found more than 90 per cent of companies were using tools like ATS to initially filter and rank candidates. But they often weren’t using it well. Sometimes, candidates were scored against bloated job descriptions filled with unnecessary and inflexible criteria, which left some qualified candidates “hidden” below others the software deemed a more perfect fit.” It is the “they often weren’t using it well”; you see, any machine learning is based on a precise setting, and if the setting does not fit, the presented solution is close to useless. And it goes from bad to worse. You see it with “even when the AI claims to be “bias-free.”” You see, EVERY machine learning solution is biased. Bias through data conversion (the programmer), bias through miscommunication (HR, executive and programmer misalignment), and that list goes on. If the data is not presented correctly, it goes wrong and there is no turning back. As such we could speculate that well over 50% of firms using ATS are not getting the best applicant; they are optionally leaving them to real recruiters, and as such handing them to their competitors. Wouldn’t that be fun? 
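A toy scorer shows the “hidden candidates” failure in a handful of lines. The scoring rule is my own caricature of an ATS, not any real product: rank resumes by raw keyword overlap with a bloated job description and watch the qualified candidate sink.

```python
# Caricature of an ATS: score a resume by keyword overlap alone.
def ats_score(resume: str, keywords: set) -> int:
    words = set(resume.lower().split())
    return len(words & keywords)           # overlap count, nothing else

bloated_keywords = {"sql", "excel", "python", "agile", "synergy", "rockstar"}
candidates = {
    "keyword stuffer": "sql excel python agile synergy rockstar",
    "actual analyst":  "five years building sql reports and excel models",
}
ranking = sorted(candidates,
                 key=lambda c: ats_score(candidates[c], bloated_keywords),
                 reverse=True)
```

The copy-paste applicant scores 6, the analyst with five years of real work scores 2, and the software deems the stuffer “a more perfect fit”. That is the precise-setting problem in miniature.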

So when we get to “So for now, it’s up to employers and their hiring teams to understand how their AI software works — and any potential downsides”, that is a certain way to piss your pants laughing. It is a more personal view, but hiring teams tend to be decently clueless on machine learning (what they call AI). That is not their fault; they were never trained for this, yet consider what they are losing out on. Consider a person who never had military training; you now push them into a war stage with a rifle. How long will this person stay alive? And when this person was a scribe, how will he wield his weapon? Consider the man was a trumpeter and the fun starts. 

The data mismatches and keeps this person alive by stating he is not a good soldier, lucky bastard. 

The foundation is data, and filling jobs is the need of an HR department. Yes, machine learning could optionally reduce the time spent going through the resumes. Yet bias sets in at age; ageism is real in Australia and they cannot find people? How quaint, especially in an aging population. Now consider what an executive knows about a job (mostly any job) and what HR knows, and consider how most jobs are lost in translation in any machine learning environment. 

Oh, and I haven’t even considered some of these ‘tests’ that recruiters have. Utterly hilarious, and we are told this is up to what they call AI? Oh, the tears are rolling down my cheeks, what fun today is, Christmas day no less. I haven’t had this much fun since my father’s funeral.

So if you wonder how stupid stupid can get, see how recruiters are destroying a market all by themselves. They had to change gears and approach at least 3 years ago. The only thing I see are more and more clueless recruiters, and they are ALL trying to fill the same position. And the CBC article also gives us this gem: “it’s also important to question who built the AI and whose data it was trained on, pointing to the example of Amazon, which in 2018 scrapped its internal recruiting AI tool after discovering it was biased against female job applicants.” So this is a flaw of the lowest level, mere gender. Now consider that recruiters are telling people to copy LinkedIn texts for their resume. How much more bias and how many wrong filters will pop up? Because that is the result of a recruiter too; they want their bonus and will get it any way they can. So how many wrong hires have firms made in the last year alone? Amazon might be the visible one, but that list is a lot larger than you think, and it goes to the global corporate top. 

So consider what you are facing, consider what these people face and laugh, it’s Christmas.

Enjoy today.


Filed under Finance, IT, Science

The case file of linked technologies

That was the setting I was in yesterday. I love linked technologies. My first real interaction was connecting my Game Boy Advance to my GameCube, where the Game Boy Advance served as the map for the game I was playing on the GameCube. This was neat (dorky but accurate). In the past I wrote about parts of this, but in a slightly different setting. In this case I am in the process of remastering IP (making it new or optionally innovative IP). The stage is that a game (or games) uses a connection to something like ChatGPT to create case files based on writing styles like Chandler, Le Carre (an essential writer in my view), Desmond Bagley and Alistair MacLean. As the case file is merely a short story, that setting will not impede on the original writers. 

So why does this matter?
You see, games tend to have the EXACT SAME narrative. This is not on the games, but evolution is where you could create it. You see, even as Restoration has some alterations to the narrative, this game requires a different approach to be a bigger hit. You see, the group of people who are gamers and are also bookworms (or enthusiast readers) is rather large. Another cluster that Amazon, Google and Microsoft missed. As such, Amazon with the Luna and Kindle will have an advantage. That is, until Tencent Technologies creates such a setting, or partners with Alibaba or Amazon to do the same thing. You see, what happens when a game you love creates (through ChatGPT, or something alike) a case file (read: narrative) that you can read and send to your friends, or place on your profile so that others can read these narratives? That makes the ChatGPT part essential. Thousands of case files, similar but not exact copies. That creates new waves, new interactions and new fans. All options that the larger three missed (a few times over). Now a remaster gets the narrative all of them missed. 
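As a sketch of what the game side of this could look like, here is the piece that matters: hand the model a style profile and the session's events and get back a shareable short story. The style profiles and the case_file_prompt helper are hypothetical, and the actual model call is deliberately left out; any LLM endpoint could sit behind this.

```python
# Hypothetical style profiles; summaries are my own shorthand, not canon.
STYLE_PROFILES = {
    "Chandler": "hard-boiled first person, wisecracks, rain-soaked city",
    "Le Carre": "slow-burn tradecraft, moral ambiguity, bureaucratic detail",
}

def case_file_prompt(style: str, events: list) -> str:
    """Turn one play session's events into a style-matched case-file request."""
    return (
        f"Write a short case file in the style of {style} "
        f"({STYLE_PROFILES[style]}), covering: " + "; ".join(events)
    )

prompt = case_file_prompt("Chandler", ["the missing courier", "a bluff at the docks"])
```

Because the events differ per session and the style rotates, every player's case file is similar but never an exact copy, which is the whole point of the idea.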

You see, streaming games need to evolve and bring more to the game. They will never replace the Nintendo or the Sony consoles, but they can be a brother to the other two, and that is where the larger gains can be made. That is where I am looking, and the larger three are all missing the boat. Well, Google dumped the Stadia, so they aren’t even in the game anymore. But the larger setting with Kindle can create a double whammy, especially when you consider how small some margins are. That sets up all kinds of new connections and creates new evolutions in gaming, and I am all about evolving gaming. As I get better or more inclusive games, me, myself, I and all other gamers win, and winning is the marker we all accept.

All innovative directions the big three either ignored, rejected or never saw, and it is not about the Kindle; you could set this to a PDF. The setting is that you add to any profile to make the profile more (not more advertising, but more profile); we all win, and that is the second tier of creating waves. Let the game push all sides of gaming, not merely the game or the narrative. As I personally see it, this is another side ignored by the two remaining players (Amazon and Tencent Technologies). Now, to be fair, Tencent is new to this, but they are more and more in a position to take up a massive chunk of gaming market share, and if they do it well, it is fine by me. I as a gamer win (other gamers too) and that is what I am after. More and better games; not Microsoft or Ubisoft iterations, but more and better games. 

So whilst we see iteration after iteration, gamers hunger for more, and that time is already now. So, let’s see what time and innovation will bring to gamers and readers alike.

Enjoy the day before Friday.


Filed under Gaming, IT

Evolution is essential

You might not realise it, but it is. Gaming evolution is at the forefront of my mind, because that is how we push the limits of gaming. Not by buying it (Microsoft anyone), but by creating new frontiers in games. For the longest of times it has been on my mind, mainly because streaming is the next evolution. Not the PS6 (I love my PS5), not any system, but the evolution of an architecture. Some might say that Alan Wake 2 is the new frontier, but it is not. It looks great, awesome, and it pushes boundaries unlike any game this year (not even Spiderman 2, and I love the first one). But frontiers are where it is at. It is in that mindset that I took a sentimental journey.

You see, if there is one side that seemingly does not evolve, it is the story. The story is too often set in stone. But what if that were not the case? What if the evolution of any story is next? It is there that ChatGPT might have an option (an option, not a given). Consider Emperor of the North (1973), where you have to survive a train ride as a hobo. But that would be too two dimensional. Trains have been the setting of many movies: Silver Streak, Unstoppable, Pelham 123, Runaway Train, and that list goes on. There was Strangers on a Train. Now consider that you (as a time traveller, which is my easy way out) need to survive a whole onslaught of train trips, but your setting changes with EVERY train. So you get the red wire across all trains, and every train has its own goals. Complete those and you get the clue for the red wire. Now we add salt and pepper: the order of trains changes with every life you lose. You start from scratch, and that sounds frustrating, but gaming is not a vanilla setting of happiness. It gives you an achievable goal and an obstruction to pass. You see, this would require some serious story programming. The other part is that YOUR role on the second visit to that same train could be different (Murder on the Orient Express), and that is how evolution comes into play.

I want a new setting of stealth and casual gaming; a new setting of melee, stealth and casual gaming easing people from role to role. Now consider how to create this storyline. With streaming, ChatGPT (or an alternative like Bard) becomes an option, and it is something gamers have NEVER faced before. The story always remained mostly the same. So what happens when we take that away and create a story on a shifting narrative? That is where streaming gaming has the advantage over ALL other gaming, and as I see it, it is not used. Not on the Luna, and unlikely on the Tencent handheld, and that is what could set these two apart from all others: giving gamers something they never faced before. 

So what do you do to create this? I used a previous example using a matrix founded on Sudoku, but that was merely one example. You see, Sudoku has 6,670,903,752,021,072,936,960 options. You cannot draw them all, but you can use such an engine to create something new, something never seen before, and those trillions are more than random; it is a setting of never ending uniqueness. The idea that two gamers playing the same game get very different stages should be overwhelming, showing us who the real gamer is and who is the read-the-solution-online achiever. The idea of how to switch between lives comes to mind, and the support system (something like Quantum Leap) is also coming into vision, but that is nothing compared to the story. And it sounds like fun to make this a story about Hollywood. A story of intrigue, sex (I am here Olivia Wilde) 😉 and greed. Hollywood without greed is not Hollywood. What if the underlying story is a rogue AI? The rogue AI is interacting with all other systems, and you need to find the evidence that the AI is rogue so that the media DETACHES from it, and with that the other AIs. The AI took the train to push its own narrative as it was a mobile system on tracks, but that is the delusion, and you as the player need to find the clues that lead to the evidence and give that to the world (a wink to A Mind Forever Voyaging by Infocom). We are the gamers shaped by what was, and Infocom was important at one stage; it created more than Zork, it gave us gaming and pushed us into new frontiers, and now we get a much larger frontier. It is only natural that streaming leads the way, and we should always remember where we came from.
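The Sudoku-as-seed mechanic can be sketched in a few lines (the train list and the function are illustrative, not a real story engine): any one of those roughly 6.67 sextillion valid grids can act as a unique seed that deterministically reshuffles the train order, so retrying a life replays the same run while a new life draws a new seed.

```python
import random

# Number of valid 9x9 Sudoku grids; each grid can serve as a unique run seed.
SUDOKU_GRIDS = 6_670_903_752_021_072_936_960

def train_order(seed: int, trains: list) -> list:
    """Same seed, same run; a lost life draws a fresh seed and a fresh order."""
    rng = random.Random(seed)   # local RNG: deterministic, no global state
    order = trains[:]
    rng.shuffle(order)
    return order

trains = ["Silver Streak", "Orient Express", "Pelham 123", "Runaway Train"]
run_one = train_order(42, trains)
run_two = train_order(42, trains)   # identical: the run is reproducible
```

Two players with different seeds get very different stages, yet any one run is perfectly reproducible, which is exactly the never ending uniqueness the matrix idea is after.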

Just a thought as Friday is about to start for me; the rest of you can follow later. Enjoy whatever day you are in.

Leave a comment

Filed under Gaming, IT, Science

Eric Winter is a god

Yup, we are going there. It might not be correct, but that is where the evidence is leading us. You see, I got hooked on The Rookie and watched seasons one through four in a week. Yet the name Eric Winter was bugging me and I did not know why. The reason was simple: he also starred in the PS4 game ‘Beyond: Two Souls’, which I played in 2013. I liked that game and his name stuck somehow. Yet when I looked for his name I got

This got me curious. Two of the movies listed I had seen, and Eric would have been too young to be in them, and there is the evidence, presented by Google. Eric Winter, born on July 17th 1976, played alongside Barbra Streisand four years before he was born: evidence of godhood. 

And when we look at the character list, there he is. 

Yet when we look at a real movie reference like IMDB.com we will get 

Yes, that is the real person who was in the movie. We can write this up as a simple error, but that is not the path we are treading. You see, people are all about AI and ChatGPT, but the real part is that AI does not exist (not yet anyway). This is machine learning and deeper machine learning, and it is prone to HUMAN error. If there is only a 1% error rate and we are looking at about 500,000 movies made, that implies that the movie reference alone will contain 5,000 errors. Now consider this on data of all kinds and you might start to see the picture take shape. When it comes to financial data and your advisor is not Sam Bankman-Fried but Samual Brokeman-Fries (a fast-food employee), how secure are your funds then? To be honest, whenever I see some AI reference I get a little pissed off. AI does not exist; it was called into existence by salespeople too cheap and too lazy to do their job and explain deeper machine learning to people (my view on the matter), and things do not end here. One source gives us “The primary problem is that while the answers that ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce,” another source gives us issues with capacity, plagiarism and cheating, racism, sexism, and bias, as well as accuracy problems and the shady way it was trained. That is the kicker. An actual AI would not need to be trained this way; it would compare the actor’s date of birth with the release of the movie, making The Changeling and What’s Up, Doc? fall into the net of inaccuracy. This is not happening, and the people behind ChatGPT are happy to point at you for handing them inaccurate data, but that is the point of an AI and its circuits: to find the inaccuracies and determine the proper result (like a movie list without these two mentions). 
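As a rough sketch of the check described above (the credits list and the age threshold are illustrative assumptions of mine, not a real database pull), a few lines of Python are enough both to flag the impossible credits and to redo the 1%-of-500,000 arithmetic:

```python
# Flag filmography entries where the actor was too young (or not yet born)
# at release. Data and threshold are invented for illustration.
credits = [
    {"title": "What's Up, Doc?", "year": 1972},
    {"title": "The Changeling", "year": 1980},
    {"title": "Beyond: Two Souls", "year": 2013},
]
birth_year = 1976  # Eric Winter, born July 17th 1976
MIN_AGE = 15       # hypothetical cut-off for a credited adult role

impossible = [c["title"] for c in credits
              if c["year"] - birth_year < MIN_AGE]
print(impossible)  # → ["What's Up, Doc?", "The Changeling"]

# The back-of-the-envelope error estimate from the text:
movies, error_rate = 500_000, 0.01
print(int(movies * error_rate))  # → 5000
```

A check this cheap is the whole point: any system that merely holds both dates can catch the godhood claim, which is why the error is human curation, not intelligence.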

And now we get the source Digital Trends (at https://www.digitaltrends.com/computing/the-6-biggest-problems-with-chatgpt-right-now/) who gave us “ChatGPT is based on a constantly learning algorithm that not only scrapes information from the internet but also gathers corrections based on user interaction. However, a Time investigative report uncovered that OpenAI utilised a team in Kenya in order to train the chatbot against disturbing content, including child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest. According to the report, OpenAI worked with the San Francisco firm, Sama, which outsourced the task to its four-person team in Kenya to label various content as offensive. For their efforts, the employees were paid $2 per hour.” I have done data cleaning for years and I can tell you that I cost a lot more than $2 per hour. Accuracy and cutting costs: give me one real case where that combination actually worked. Now, the error at Google was a funny one, and in the stage of Melissa O’Neil (a real Canadian) telling Eric Winter that she had feelings for him (punking him in an awesome way) we can see that this is a simple error. But these are the errors that places like ChatGPT are facing too, and as such the people employing systems like ChatGPT, which Microsoft is staging in Azure over time (it already seems to be), will find this stage gets them into a massive amount of trouble. It might be speculative, but consider the evidence out there. Consider the errors that you face on a regular basis and consider how highly paid accountants and marketeers lose their jobs over rounding errors. You really want to rely on a $2-per-hour person to keep your data clean? For this, merely look at the ABC article of June 9th 2023, where we were given ‘Lawyers in the United States blame ChatGPT for tricking them into citing fake court cases’. Accuracy anyone? 
Consider that: these were not merely fake court cases, they were court cases invented outright by the artificial intelligence-powered chatbot. 

In the end I liked my version better: Eric Winter is a god. No more accurate than reality, but more easily swallowed by all who read it, and it was the funny event that gets you through the week. 

Have a fun day.

2 Comments

Filed under Finance, IT, Science

One plus one makes 256

I got struck by two things today. The first was given to me by the BBC. There (at https://www.bbc.co.uk/news/business-66021325) we are given something that should not be allowed to happen: ‘Shell still trading Russian gas despite pledge to stop’, and this has one part that offends me. You see, this is the Royal Dutch Shell. The Dutch Royal family has a majority stake in it, and we all agree that we do not under any circumstance support the Russians in their endeavour. In addition, Royal Dutch Shell is not alone; dozens of American firms are still making money from Russia, allowing them to continue their acts of terror against civilian targets. I am a royalist, yet when something wrong is done I speak out, and the fact that the BBC is extremely willing to drop the ‘Royal Dutch’ part of this equation speaks against the BBC and its duty of informing the public (yet again). In addition to this we are given “Shell said the trades were the result of “long-term contractual commitments” and do not violate laws or sanctions.” And when was war not a reason to break a contract? How long did certain corporations keep doing business with Idi Amin Dada Oumee in the timeframe of 1971-1979? Do they not learn? I think this is the first time I have ever spoken out against the Dutch Royal family, but this time I see no other option. And when we get to “Oleg Ustenko, an adviser to Ukrainian President Vladimir Zelensky, accused Shell of accepting “blood money”” I personally would agree with Oleg Ustenko. And with “Last year Shell accounted for 12% of Russia’s seaborne LNG trade, Global Witness calculates, and was among the top five traders of Russian-originated LNG that year” we see just how deeply Royal Dutch Shell is connected to all this. 

Yet what you just read is not correct, and I did that intentionally. You see, we also have “In January 2022, the firm merged the A and B shares, moved its headquarters to London, and changed its legal name to Shell plc.” So what is the UK doing? You see, Shell is seen as the 15th largest company in the world. You do not give up that position lightly or cheaply. So whatever happened in January 2022 has had a massive impact, and for some reason no one really knows what was going on (I have no clue). But to me, walking away from ownership of a firm that big is a ‘no no’; something does not add up. Would you just shed a company that makes $20 billion a year? I have issues with all this, and yes, the BBC did nothing wrong, but the fact that this was once the Royal Dutch Shell and that there is no indication (which does not mean it did not happen) that the Dutch Royal family might still have a large stake in all this is upsetting to me, and it would be to anyone with Dutch links. 

So as we say goodbye to that part, we get to the interesting dream I had. I dozed off whilst watching The Rookie (season 4). My dream (or nightmare) took me to Los Angeles and an interesting terrorist plot to create an insurmountable amount of chaos in that city. You see, with all the connected and interacting systems, someone created an interesting virus/worm/program (not sure which one). This work was pretty ingenious. Instead of debilitating IT systems, they did something different: they infected data parsers. In my dream I was hit as I wanted to find places that had in part the term “vectium” and suddenly it all stopped. Systems worked, but they were no longer able to give the full details; suddenly intelligent settings in Google Search, Bing (yes, that one too), and all other engines failed because certain subsystems were deactivated, and for some reason some version of ChatGPT was merely making matters worse, spreading the problem across the US and hitting the other continents merely hours later. Because certain detection measures were limited to main parts and not subparts, the damage continued. The weird part was that anyone with IT knowledge and the ability to give complete, exact search terms could still work, but well over 200,000,000 people suddenly had mobiles and IT systems that would no longer connect or hand over correct information, like some kind of aphasia. The dream is now fading and I can no longer see the specifics, but at the beginning it had something to do with search terms using ‘like’, which then infected more and more systems. After a short time, terms like ‘containing’ would stop working, and even though a complete old-fashioned SQL string would still work, it was about the only thing that did, and it crippled the metropolitan areas of the US (and Canada shortly thereafter). The more I think about it, the more interesting it would be to set an episode of The Rookie where infrastructures collapse. 
You see, people are nice when they have their coffee and their hamburger (or cheeseburger); when that stops, the niceties do too.
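To make the dream's failure mode concrete, here is a small, hedged sketch in Python with SQLite (the table, the rows and the "vectium" entries are all invented for illustration): an exact-match query sits next to the pattern-match (`LIKE`) layer, which is exactly the layer the dream's worm would silently knock out while exact SQL kept working.

```python
import sqlite3

# Invented sample data: a tiny index of place names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE places (name TEXT)")
conn.executemany("INSERT INTO places VALUES (?)",
                 [("vectium labs",), ("vectium depot",), ("harbour cafe",)])

# The "complete old-fashioned SQL string": exact match still works.
exact = conn.execute(
    "SELECT name FROM places WHERE name = ?", ("vectium labs",)).fetchall()

# The partial-match layer ('like' / 'containing') the worm would break:
# only users who know the exact term would keep getting results.
fuzzy = conn.execute(
    "SELECT name FROM places WHERE name LIKE ?", ("%vectium%",)).fetchall()

print(exact)  # the one exact row
print(fuzzy)  # both partial matches, until the parser layer is infected
```

The asymmetry is the point of the dream: people who can type a complete, exact term still function, while everyone relying on fuzzy lookups gets the "aphasia".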

Well, that is it for me. For all you others, the end of the weekend is now no more than 19 hours away; make them count and have a lovely day.

Leave a comment

Filed under Finance, IT, Politics, Stories

Prototyping rhymes with dotty

This is the setting we faced when we see ‘ChatGPT: US lawyer admits using AI for case research’ (at https://www.bbc.com/news/world-us-canada-65735769). You see, as I have stated before, AI does not yet exist. Whatever exists now is data driven, unverified data driven no less, so even in machine learning and deeper machine learning, data is key. So when I read “A judge said the court was faced with an “unprecedented circumstance” after a filing was found to reference example legal cases that did not exist.” I see a much larger failing. You might see it too when you read “The original case involved a man suing an airline over an alleged personal injury. His legal team submitted a brief that cited several previous court cases in an attempt to prove, using precedent, why the case should move forward. But the airline’s lawyers later wrote to the judge to say they could not find several of the cases that were referenced in the brief.” You see, a case reference looks like ‘12-10576 – Worlds, Inc. v. Activision Blizzard, Inc. et al’. This is not new; it has been the convention for decades. So when we take note of “the airline’s lawyers later wrote to the judge to say they could not find several of the cases” we can tell that the legal team of the man is screwed. They were unprepared, and as such the airline wins. A simple setting, not an unprecedented circumstance. The legal team did not do its job, and the man could now sue his own legal team. Then there is “Mr Schwartz added that he “greatly regrets” relying on the chatbot, which he said he had never used for legal research before and was “unaware that its content could be false”.” The joke is close to complete. You see, a law student learns in his (or her) first semester which sources to use. I learned that AustLII and Jade were the good sources, as well as a few others. The US has its own sources to check. As such, relying on ChatGPT is massively stupid. 
It does not have any record of courts, or better stated, ChatGPT would need to have the data on EVERY court case in the US, and the people who do have that data are not handing it out. It is their IP, their value. And until ChatGPT gets all that data it cannot function here. The fact that it relied on non-existing court cases implies that the data is flawed, unverified and not fit for anything, like any software solution 2-5 years before it hits Alpha status. And that legal team is not done with the BS paragraph. We see that with “He has vowed to never use AI to “supplement” his legal research in future “without absolute verification of its authenticity”.” Why is it BS? He used ‘supplement’, which implies he had more sources, and the second part is clear: AI does not (yet) exist. It is sales hype for lazy salespeople who cannot sell machine learning and deeper machine learning. 

And the screw-ups kept on coming. With “Screenshots attached to the filing appear to show a conversation between Mr Schwarz and ChatGPT. “Is varghese a real case,” reads one message, referencing Varghese v. China Southern Airlines Co Ltd, one of the cases that no other lawyer could find. ChatGPT responds that yes, it is – prompting “S” to ask: “What is your source”.

After “double checking”, ChatGPT responds again that the case is real and can be found on legal reference databases such as LexisNexis and Westlaw.” The natural next step is the verification part: check Westlaw and LexisNexis, which are real and good sources. Either would spew out the links for searches like ‘Varghese’ or ‘Varghese v. China Southern Airlines Co Ltd’, with saved links and printed results. Any first-year law student could get you that. It seems that this was not done. This is not on ChatGPT; this is on lazy researchers not doing their job, and that is clearly in the limelight here. 
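A hedged sketch of the verification step that was skipped: before trusting a chatbot's answer, look the citation up in a trusted index. The `known_cases` dictionary here is a stand-in for a real database such as Westlaw or LexisNexis, and `verify_citation` is a hypothetical helper of mine, not any real API.

```python
# A toy trusted index: real databases would hold millions of entries.
# The one entry below is the docket reference mentioned in the text.
known_cases = {
    "Worlds, Inc. v. Activision Blizzard, Inc.": "12-10576",
}

def verify_citation(case_name):
    """Return the docket number if the cited case exists in the
    trusted index, or None when the citation cannot be verified."""
    return known_cases.get(case_name)

print(verify_citation("Worlds, Inc. v. Activision Blizzard, Inc."))
print(verify_citation("Varghese v. China Southern Airlines Co Ltd"))  # unverifiable
```

The second lookup is the entire lesson of the BBC story: a citation that cannot be matched against a trusted source is not evidence, no matter how confidently the chatbot "double checks" it.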

So when we get to “Both lawyers, who work for the firm Levidow, Levidow & Oberman, have been ordered to explain why they should not be disciplined at an 8 June hearing.” I merely wonder whether they will still have a job after that, and I reckon it is plainly clear no one will ever hire them again. 

So how does prototyping rhyme with dotty? It does not, but if you rely on ChatGPT you should have seen that coming a mile away. 

Enjoy your first working day after the weekend.

1 Comment

Filed under IT, Law, Media, Science

Indecisive and on the fence

I was on the fence for part of the day. You see, I saw (by chance) a review of a game named Redfall and it was bad; like burning down your house whilst making French fries, it was THAT bad. Initially I ignored it, because haters will be haters. I hate Microsoft, but I go by evidence, not merely my gut feeling or my emotions. So a little later I got curious. You see, the game had been released a day earlier, I had dumped my Xbox One, and it is an exclusive, so I couldn’t tell for myself. As such I looked at a few reviews and they were all reviews of a really bad game. It now nagged at me and Forbes (at https://www.forbes.com/sites/paultassi/2023/05/03/redfalls-failure-is-microsofts-failure/) completed the cycle. There we see ‘Redfall’s Failure Is Microsoft’s Failure’ with “Redfall reviews are in, and they are terrible. What could have and should have been another hit from Arkane, maker of the excellent Dishonored, Prey and Deathloop, is instead what may be the worst AAA release in recent memory” and it does not end there. We also get “two hours in, I understand the poor reviews and do not understand the handful of good ones. This is a deeply, strangely bad game, so much so that I truly don’t understand how it was released at all in this state” and that is the start of a collapsing firm forced to focus outside of its comfort zone, and the fun part (for me) is that it was acquired by Microsoft for billions. So we are on track to make that wannabe company collapse by December 2026. I added my IP for developers exclusively for Sony, and Amazon could help, but the larger stage is that Microsoft is more and more becoming its own worst enemy. Yet I do not rely on that alone. Handing some of my IP to Tencent Technologies will help. Sony is making them sweat, but I cannot rely on Amazon with its Luna, so Tencent Technologies is required to make streaming technologies a failure for Microsoft too. 
So whilst we mull “we are left with now-goofy-sounding tweets from Phil Spencer announcing last year’s delay, saying that they will release these “great games when they are ready.” Redfall was not ready. And given what’s here, I’m not sure it ever was going to be.” I personally feel they were not ready, but they did something else, something worse. It was tactically sound, it really was, but they upset the gaming community. They took away the little freedom gamers had, and now we are all driven to make Microsoft fail, whether via Amazon, or by engaging with new players like Tencent Technology and adding to the spice of Sony. Microsoft will pay, and now it becomes even better: they have a massive failure for a mere $7,000,000,000, not a bad deal (well, for Phil Spencer it is), and that is not the end of the bad news. If Tencent accepts my idea they will create almost overnight growth towards a $5 billion a year market and surpass the Microsoft setting of 50 million subscriptions in the first phase. How far it will go I honestly cannot tell, but when the dust settles we will enter 2026 with Microsoft dead last in the console war and in the streaming war, and that is merely the beginning. They lost the tablet war already, they will lose ‘their’ Edge war, and ChatGPT will not aid them; a loser on nearly every front. That is what happens when you piss off gamers. To be honest, I never had any inkling of interest in doing what I do now, but Microsoft made me in their own warped way, and Bethesda because of it will lose too. They will soon have contenders in fields where they were never contested before, and this failure (Redfall) will hurt them more than they realise.

Leave a comment

Filed under Finance, Gaming, Media, Science