Tag Archives: ChatGPT

And Grok ploughed on

That happens, but after yesterday's blog 'The sound of war hammers' (at https://lawlordtobe.com/2025/11/27/the-sound-of-war-hammers/) I got a little surprise. I could not have planned it better if I wanted to.

You see, the article is about the AI bubble and a few other settings. So at times I ask Grok to take a look. No matter what you think, it tends to be a decent solution in DML, and I reckon that Elon Musk, with his 500,000 million (sounds more impressive than $500B), has sunk a pretty penny into this solution. I have seen a few shortcomings, but overall it is a decent solution. As I personally see it (for as far as I have seen it), that solution has a problem looking into and through multidimensional viewpoints. That is how I usually approach my writing, as I am at times overwhelmed by the amount of documentation I go through on a daily basis. As such, I got a nice surprise yesterday.

So the story goes off with war hammers (a hidden stage there), then I go into the NPR article, and I end up with the stage of tourism (the cost, as the Oxford Economics report gives us) and I am still digging into that. But what does Grok give me?

The expert mode gives us:

Now, in the article I never mentioned FIFA, the 2026 World Cup or Saudi Arabia, so how did this program come to this? Check out the blog; none of those elements were mentioned there. As some tell us, Grok is a generative artificial intelligence (generative AI) chatbot developed by xAI. So where is that AI program now? This is why I mentioned in previous blogs that 2026 will be the year the class actions start. In my case, I do not care; my blog is not that important, and even if it were, it was meant for actual readers (the flesh and blood kind) and that does not apply to Grok. I have seen a few other issues, but this one, in light of the AI bubble story yesterday (17 hours ago), pushed the matter to the forefront. I could take 'offense' to the "self-styled "Law Lord to be"", but whatever, I have been accused of a lot worse by actual people too. And the quote "this speculation to an unusual metaphor of "war hammers"" shows that Grok didn't see through my ruse either (making me somewhat proud), which is ego caressing at best, but I have an ego, I merely don't let it out too often (it tends to get a little too frisky with details).

And at present I see an idea that both the UAE and Saudi Arabia could use in their entertainment. There is an upgrade for Trojena (as I see it), and there are a few settings for the Abu Dhabi Marina as well. All in a day's work, but I need to contend with the data to see how that goes. And I tend to take my ideas into a sifter to get the best materials as fine as possible, but that was today, so there will be more coming soon enough.

But what do you do when an AI system bleeds information from other sources? Especially when that data is not validated or verified, and both seem to be the case here. As I see it, there is every chance that some will direct these AI systems to give the wrong data so that those people can start class actions. I reckon that not too many people are considering this setting, especially those in harm's way. And that is the setting that 2026 is likely to bring. And as I see it, there will be too many law firms of the ambulance-chaser kind to ignore this setting. That is the effect that 8-figure class actions tend to bring, and with the 8-figure number I am being optimistic. When I see what is possible, there is every chance that any player in this field is looking at 9 or even 10-figure settlements, especially when it concerns medical data. And no matter what steps these firms take, there will be an ambulance chaser who sees a hidden opportunity. Even in the second-tier option, where a cyber attack can throw the data into turmoil, those legal minds will craft a new setting for which those AI firms never considered the implications.

I am not being dramatic or overly doom-saying. I have seen enough greed all around me to see that this will happen. A mere three months ago we saw "The "Commonwealth Bank AI lawsuit" refers to a dispute where the Finance Sector Union (FSU) challenged CBA for misleading staff about job cuts related to an AI chatbot implementation. The bank initially made 45 call centre workers redundant but later reversed the decision, calling it a mistake after the union raised concerns at the Fair Work Commission. The case highlighted issues of transparency, worker support, and the handling of job displacement due to AI." So at that point, how dangerous is the setting where any AI is trusted to any degree? And that is before some board of directors sets the term that these AI investments had better pay off, which will cause people to do silly (read: stupid) things. A setting that is likely to happen as soon as next year.

And at this time, Grok is merely ploughing on, setting the stage where someone will trust it to make life-changing changes to their firm or data. And even if it is not Grok, there is every chance that OpenAI will do that, and that puts Microsoft in a peculiarly vulnerable position.

Have a great day; time for some ice cream. It was 33 degrees today, so my living room is hot as hell; as such, ice cream is my next stage of cooling myself.


Filed under Finance, IT, Media, Science

The call of reality

That is what seems to be happening. The first one was a simple message that Oracle is headed for doom according to Wall Street (I don't agree with that), but it made me take another look, and to make it simpler I will look at the articles chronologically.

The first one was the Wall Street Journal (4 days ago), with 'Oracle Was an AI Darling on Wall Street. Then Reality Set In' (at https://www.wsj.com/tech/oracle-was-an-ai-darling-on-wall-street-then-reality-set-in-0d173758) with "Shares have lost gains from a September AI-fueled pop, and the company's debt load is growing" with the added "Investors nervous about the scale of capital that technology companies are plowing into artificial-intelligence infrastructure rattled stocks this week. Oracle has been one of the companies hardest hit". But here is the larger setting. As I see it, these stocks are manipulated by others, whoever they are: hedge funds, their influencers and other parties calling for doom, all whilst the setting of the AI bubble is exploited by unknown self-gratifiers. I know that this sounds ominous and non-specific, but there is no way most of us (including people with a much higher degree of economic knowledge than I will ever have) can see through this. And the stage of bubble endearing is out there (especially on Wall Street).

Then, 14 hours ago, we get 'Oracle (ORCL): Evaluating Valuation After $30B AI Cloud Win and Rising Credit Risk Concerns' (at https://simplywall.st/stocks/us/software/nyse-orcl/oracle/news/oracle-orcl-evaluating-valuation-after-30b-ai-cloud-win-and/amp) where we see "Recent headlines have only amplified the spotlight on Oracle's cloud ambitions, but the past few months have been rocky for its share price. After a surge tied to AI-driven optimism, Oracle's 1-month share price return of -29.9% and a year-to-date gain of 19.7% tell the story: momentum has faded sharply in the near term. However, the 1-year total shareholder return still sits at 4.4% and its five-year total return remains a standout at nearly 269%. This combination of volatility and long-term outperformance reflects a market grappling with Oracle's rapid strategic shift, balance sheet risks, and execution on new contracts." I am not debating the numbers, but no one is looking at the technology behind this. As I see it, places like Snowflake and Oracle have the best technology for these DML and LLM solutions (OK, there are a few more), and for now, whoever has the best technology will survive the bubble. And whoever is betting on that AI bubble going their way needs Oracle at the very least, and not in a weakened state, but that is merely my point of view.

So lastly we get the Motley Fool, a mere 7 hours ago, giving us 'Billionaire David Tepper Dumped Appaloosa's Stake in Oracle and Is Piling Into a Sector That Wall Street Thinks Will Outperform' (at https://www.fool.com/investing/2025/11/23/billionaire-david-tepper-dumped-appaloosas-stake-i/) where we see "Billionaire David Tepper's track record in the stock market is nothing short of remarkable. According to CNBC, the current owner of the Carolina Panthers pro football team launched his hedge fund Appaloosa Management in 1993 and generated annual returns of at least 25% for decades. Today, Tepper still runs Appaloosa, but it is now a family office, where he manages his own wealth." Now we get the crazy stuff (this usually happens when I speculate). So this gives us a person like David Tepper who might like to exploit Oracle, make it seem more volatile, and exploit a shorting of options to make himself (a lot) richer. And when clever people become self-managing, they tend to listen to their darker nature.
Now I could be all wrong, but when Wall Street is going after one of the most innovative and secure companies on the planet just to satisfy the greed of Wall Street, I get to become a little agitated. So could it be that Oracle was drawn into the 'fad' and lost it? No, they clearly stated that there would be little return until 2028, a decent prognosis, and with the proper settings of DML and LLM, finding better and profitable revenue-making streams by 2027 is a decent target to have, and it is seemingly an achievable one. In the meantime IBM can figure out (evolve) their shallow circuits and start working on their trinary operating system. I have no idea where they are at present, but the idea of this getting ready for a 2040 release is not out of the question. In the meantime Oracle can fill the void for millions of corporations that already have data, warehouses and form settings. And there are plenty of other providers of data systems.

So when we are given “The tech company Oracle is not one of the “Magnificent Seven,” but it has emerged as a strong beneficiary of artificial intelligence (AI), thanks to its specialized data centers that contain huge clusters of graphics processing units (GPUs) to train large language models (LLMs) that power AI.

In September, the company reported strong earnings for the first quarter of its fiscal 2026, along with blowout guidance. Remaining performance obligations increased 359% year over year to $455 billion, as it signed data center agreements with major hyperscalers, including OpenAI."

So whilst we see "Oracle is not one of the "Magnificent Seven," but it has emerged as a strong beneficiary of artificial intelligence (AI)", we need to take a different look at this. Oracle was never a strong beneficiary of AI; it was a strong vendor of data technologies, and AI is about data. In all of this, someone is 'fitting' Oracle into a stage that everyone just blatantly accepts without asking too many questions (for example, the media). With the additional "to train large language models (LLMs) that power AI", the hidden gem is in the second statement. AI and LLM are not the same. You only partially train real AI; this is different, and those 'Magnificent Seven' want you to look away from that. So, when was the last time that you actually read that AI does not yet exist? That is the created bubble, and players like Oracle are indifferent to this, unless you spike the game. It has stocks, it has options, and someone is turning influencers to their own greedy use. And I object to this. Oracle has proven itself for decades, longer than players like Google. So when we see 'Buying the sector that Wall Street is bullish on', we see another hidden setting: the bullishness of Wall Street. Do you think they don't know that AI is a non-existing setting? So why go after the one technology that will make data work? That setting is centre in all this, and I object to those who go after Oracle. So when you answer the call of reality, consider who is giving you the AI setting and who is giving you the DML/LLM stage of a data solution that can help your company.

Have a great day; we are seemingly all on Monday at present.


Filed under Finance, IT, Media, Science

I lost my marbles

Like Poodles, I seem to have misplaced my marbles. AKA, I lost them completely. Only 9 hours ago I shouted that I am sick of the AI bubble, but a few minutes ago I got called back into that fray. You see, I was woken up by an image.

This is the image, and it gives us 'Oracle's $300bn OpenAI deal is now valued at minus $74bn'. There is no way this is happening. You see, I have clearly stated that the bubble is coming. But in this, Oracle has a set state of technologies it is contributing. As such, where is the bubble blowing up in the face of OpenAI and Microsoft? In this, the Financial Times (at https://www.ft.com/content/064bbca0-1cb2-45ab-85f4-25fdfc318d89) is giving us 'Oracle is already underwater on its 'astonishing' $300bn OpenAI deal'. So where is the damage to the other two? We are given "OK, yes, it's a gross simplification to just look at market cap. But equivalents to Oracle shares are little changed over the same period (Nasdaq Composite, Microsoft, Dow Jones US Software Index), so the $60bn loss figure is not entirely wrong. Oracle's "astonishing quarter" really has cost it nearly as much as one General Motors, or two Kraft Heinz. Investor unease stems from Big Red betting a debt-financed data farm on OpenAI, as MainFT reported last week. We've nothing much to add to that report other than the below charts showing how much Oracle has, in effect, become OpenAI's US public market proxy:" There might be some loss on Oracle (if that happens), and later on we are given (after a stack of graphics, see the story for that) "But Oracle is not the only laggard. Broadcom and Amazon are both down following OpenAI deal news, while Nvidia's barely changed since its investment agreement in September. Without a share price lift, what's the point? A combined trillion dollars of AI capex might look like commitment, but investment fashions are fickle." And in this, I still have doubts on the reporting side of things. From my own feelings (not hard-core numbers), Oracle and Amazon are the best players to survive this, as their technology is solid. When AI does come, they are likely the only two to set it right, and the entire article goes out of its way to avoid mentioning Microsoft. But in all this, Microsoft has made significant investments in OpenAI and has rights to OpenAI's Intellectual Property (IP). This comes down to Microsoft holding a stake in OpenAI's for-profit arm, OpenAI Group PBC, valued at approximately $135 billion, which represents about 27% of the company. So how is Microsoft not mentioned?

As such, how come Oracle is underwater? Is it testing scuba gear? And if the article is indeed true, what is the value of OpenAI now? Because that will also drown the 27% of it (holding the name Microsoft), and that image is missing from the equation. If this is the bubble bursting, which might be true (a year before I predicted it), then it stands to reason that this is also impacting Amazon, Google, IBM, Microsoft and OpenAI. As such, this article seems a little far-fetched, a little immature and largely premature by not naming all the players in this game. I personally thought that Oracle would be one of the winners in all of this, or better stated, the smallest loser in this multi-trillion bubble.
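To make that 27% point concrete, here is a minimal back-of-envelope sketch (illustrative numbers only; the roughly $500 billion total is simply implied by a $135 billion stake at 27%) of what a markdown of OpenAI's paper value would mean for Microsoft:

```python
# Back-of-envelope: what a drop in OpenAI's paper valuation would mean
# for Microsoft's ~27% stake. Illustrative numbers only, not a forecast.

STAKE = 0.27                                   # Microsoft's reported stake
stake_value_bn = 135.0                         # reported stake value, in $bn
implied_valuation_bn = stake_value_bn / STAKE  # ~$500bn implied total

for drop_pct in (10, 25, 50):                  # hypothetical valuation drops
    paper_loss_bn = implied_valuation_bn * (drop_pct / 100) * STAKE
    print(f"OpenAI down {drop_pct}%: Microsoft stake loses ~${paper_loss_bn:.0f}bn on paper")
```

Even the mildest markdown in that little table makes the 'only Oracle is underwater' framing look rather selective, which is exactly my point.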

So what gives?
And in this I might be incorrect and largely missing the point, but a write-off to the amount of nearly half a trillion dollars has more underwriters, and mentioning merely Oracle is a little far-fetched, no matter how fashionable they all seem to be. And for that matter, as Microsoft has been 'advocating' their Copilot program, how deep are they in? Because the Oracle write-off will be squarely in the face of that Nadella dude. As he seemingly already missed the builder.ai setting, this might be the one ending his career, and whoever comes next might want to commit suicide instead of accepting whatever promotion is coming his way. (I know it is a dark setting), but the image is a little disconcerting at present. And the images that the Financial Times gives us, like the hyperscaler capex, show Microsoft to be three times deeper in the water than Oracle is, so why aren't they mentioned in the text? And in those same images Amazon is in way over its head, and that is merely the beginning of a bubble going sideways on everyone. As such, is this a storm in a teacup? If that is so, why is Oracle underwater? And there is ample reason to see me as a non-economist; I never was one, nor wanted to be one. But what the media gives us raises questions. And I agree, Oracle has a long way to go to break even, but if they do not, neither will Amazon, Microsoft and OpenAI, and that part is seemingly missing too. If anything, Larry Ellison could pay the shortcomings from his petty cash (he allegedly has 250,000 million of his own) and the others won't even come near that amount.

So whilst we wait for someone to make sense of this all, we need to walk carefully and not panic, because these settings tend to be the stage where the panicky people sell what they can for dimes on the dollar, and that is not how I want to see players like Microsoft jump that shark. This is not any kind of anti-Microsoft deal; it is them calling the others not innovative whilst there isn't an innovative bone in that cadaver. So whilst we all want to call the cards, the only thing I do is call the cards of the Financial Times and similar reporting media, calling out the missing settings of loss towards Microsoft and OpenAI. It is the best I can do. I know an economics major who could easily do that, but he is busy running Canada at the moment.

Have a great day, and I apologize for causing any optional panic, which was not my intention.


Filed under Finance, IT, Science

Labels

That is the setting, and I introduced the readers to this setting yesterday, but there was more; there always is. Labels are how we tend to communicate: there is the label of 'Orange baboon', there is the label of 'village idiot', and there are many more labels. They tend to make life 'easy' for us. They are also the hidden trap we introduce to ourselves. In the 'old' days we even signified Business Intelligence by this, because it was easy for the people running these things.

An example can be seen in the image below.

And we would see the accommodating table with, on one side, completely agree, agree, neutral, disagree and completely disagree. If that was the 5-point labeling setting we embraced, then we saw a 'decently' complete picture and we all agreed that this was what it had to be.

But the not-so-hidden snag is that, in the first place, these labels are ordinal (at best), and Likert scales (their official name) are not set in a scientific way; there is no equally adjusted difference between the numbers 1, 2, 3, 4, 5. That is just the way it is. And in the old days this was OK (as the feeling went). But today, in what some call the AI setting and I call NIP at best, the setting is too dangerous. Now set this against 'today's' standards; the sketch below shows the snag.
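To show why that 1-to-5 coding is a trap, here is a minimal sketch (hypothetical responses and an assumed alternative spacing, nothing more) of how the 'average' answer shifts the moment you stop pretending the categories are equally spaced:

```python
# Likert codes are ordinal: nothing guarantees that the distance between
# "agree" (4) and "completely agree" (5) equals the distance between
# "neutral" (3) and "agree" (4). Taking a mean silently assumes it does.
import statistics

responses = [5, 4, 4, 3, 5, 2, 4, 5, 1, 4]  # hypothetical survey answers, coded 1-5

# Two equally defensible numeric readings of the same five labels:
codings = {
    "equal spacing":   {1: 1.0, 2: 2.0, 3: 3.0, 4: 4.0, 5: 5.0},
    "unequal spacing": {1: 1.0, 2: 2.5, 3: 3.0, 4: 3.5, 5: 5.0},  # assumed alternative
}

for name, coding in codings.items():
    mean = statistics.mean(coding[r] for r in responses)
    print(f"{name}: mean = {mean:.2f}")

# The median of the raw codes survives both readings; the mean does not.
print("median (ordinal-safe):", statistics.median(responses))
```

Same answers, two different means. That is the whole problem with treating labels as numbers.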

The simple question "Is America bankrupt?" gets all kinds of answers, and some will quite correctly give us "In contrast, the financial health of the United States is relatively healthy within the context of the total value of U.S. assets. A much different picture appears once one looks at the underlying asset base of the private and public economy." I tend to disagree, but that is me without my economic degrees. But in the AI world it is a simple setting of numbers, and America needs Greenland and Canada to continue the pretence that "the United States is relatively healthy within the context of the total value of U.S. assets". Yes, that would be the setting, but without those two places America is likely close to bankrupt, and the AI bubble will push them over the edge. At least that is how I see it, and yesterday I gave one case (of the dozen or so cases that will follow in 2026); in that stage the startup is basically agreeing to a larger than 2 billion settlement. So in what universe does a startup have this money? That is the constriction of AI, and in that setting of unverified and unscaled data the present gets to be worse. And I remember an answer given to me at a presentation; the answer was "It is what it is" and I kinda accepted it, but an AI will go bonkers and wrong in several ways when that is handed to it. And that is where the setting of AI and NIP (Near Intelligent Parsing) becomes clear. NIP is merely a 90's chess game that has been taught (trained) every chess game possible and it takes from that setting, but a creative intellect makes an illogical move and the chess game loses whatever coherency it has; that move was never programmed, and that is where you see the difference between AI and NIP. The AI will creatively adjust its setting, the NIP cannot, and that is what will set the stage for all these class actions. A toy version of that chess point follows below.
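A minimal toy of that chess point (a lookup table standing in for a 'trained' system; no real engine anywhere near this):

```python
# Toy illustration of NIP (Near Intelligent Parsing): a pure lookup over
# trained positions replays its training perfectly and has no answer at
# all for a position it has never seen. Hypothetical "training set".

trained_replies = {
    "e4": "e5",
    "d4": "d5",
    "c4": "e5",
}

def nip_move(position: str) -> str:
    # Replay what was trained; there is no generalisation here.
    return trained_replies.get(position, "??")

print(nip_move("e4"))  # "e5" -> looks intelligent
print(nip_move("a3"))  # "??" -> the illogical move was never programmed
```

The lookup does not degrade gracefully; it simply has nothing. Actual intelligence would improvise, and that difference is exactly where the liability sits.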

The second setting is 'human' error. You see, I placed the Likert scale intentionally, because in between the multitude of 1-5 scales there is likely one variable that was set to 5-1 and the programmers overlooked it. Now, when you look at these AI training grounds, at least one variable is set in the wrong direction, tainting the others and messing with the order of the adjusted personal scales. And that is before we get to the results of CLUSTER and QUICKCLUSTER, where a few more issues are introduced to the algorithm of the entire setting, and that is where the verification of data becomes imperative.
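Luckily, that reversed 5-1 variable is easy to catch before it poisons a clustering run, if anyone bothers to look. CLUSTER and QUICKCLUSTER are SPSS procedures; the sketch below is plain Python on a hypothetical five-item survey, using the old trick that a reverse-coded item correlates negatively with the sum of the remaining items:

```python
# Detect a reverse-coded Likert item via item-rest correlation: an item
# keyed 5-1 instead of 1-5 correlates negatively with the sum of the
# other items. Simulated (hypothetical) data throughout.
import random

random.seed(1)
N_ITEMS, N_RESP = 5, 200
rows = []
for _ in range(N_RESP):
    attitude = random.randint(1, 5)            # latent attitude of one respondent
    row = [max(1, min(5, attitude + random.randint(-1, 1))) for _ in range(N_ITEMS)]
    row[2] = 6 - row[2]                        # item 3 was accidentally coded 5-1
    rows.append(row)

def pearson(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

for i in range(N_ITEMS):
    item = [r[i] for r in rows]
    rest = [sum(r) - r[i] for r in rows]       # total of the other items
    r_ir = pearson(item, rest)
    flag = "  <- reversed?" if r_ir < 0 else ""
    print(f"item {i + 1}: item-rest r = {r_ir:+.2f}{flag}")
```

The fix is then one line (6 - x), but only if someone checks before the data is fed to the next 'AI' training ground.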

So here is a sort of random image, but the question it needs to raise is: what makes these different sources in any way qualified to be a source? In this case, if the data is skewed in Ask Reddit, 93% of the data is basically useless, and that is missed on a few levels. There are high-quality data sources, but these are few and far between; in the meantime these sources get to warp any other data we have. And if you are merely looking at legacy data, there is still the Likert-scale data your own company had, and that data is debatable at best.

Labels are dangerous, and they are inherently based on the designer of that data source (possibly even long dead), and the design tends to be done in his or her early stages of employment, making the setting even more debatable as it was 'influenced' by greedy CEOs and CFOs who had their bonus in mind. A setting mostly ignored by all involved.

As such, are you surprised that I see the AI bubble for what it is? A dangerous reality coming our way in sudden, likely unforeseen ways, and it is the 'unforeseen way' that is the danger, because when these disgruntled employees talk to those who want to win a class action, all kinds of data will come to the surface, and that is how these class actions are won.

It was a simple setting I saw coming a mile away, and whilst you wandered by I added the Dr. Strange part. You merely thought you had the labels thought through, but the setting was a lot more dangerous, and it is heading straight for your AI dataset. All wrongly thought through, because training data needs to have something verifiable as 'absolutely true', and that is the true setting. To illustrate this we can merely make a stop at Elon Musk inc., its 'AI' Grok having the almost perfect setting. We are given from one source "The bot has generated various controversial responses, including conspiracy theories, antisemitism, and praise of Adolf Hitler, as well as referring to Musk's views when asked about controversial topics or difficult decisions." Which is almost a dangerous setting towards people fueling Grok in a multitude of ways. And 'Hundreds of thousands of Grok chats exposed in Google results' (at https://www.bbc.com/news/articles/cdrkmk00jy0o) shows us "The appearance of Grok chats in search engine results was first reported by tech industry publication Forbes, which counted more than 370,000 user conversations on Google. Among chat transcripts seen by the BBC were examples of Musk's chatbot being asked to create a secure password, provide meal plans for weight loss and answer detailed questions about medical conditions." Is there anybody willing to do the honours of classifying that data (I absolutely refuse to do so)? I already gave you the headwind in the above story. In the first, how many of these 370,000 users are medical professionals? I think you know where this is going.

And I think Grok is pretty neat as a result, but it is not academically useful. At best it is a new form of Wikipedia, at worst it is a round data system (trashcan), and even though it sounds nice, it is as nice as labels can be. That is exactly why these class cases will be decided out of court, and as I personally see it, when these hit, Microsoft and OpenAI will shell out trillions to settle out of court, because the court damage will be infinitely worse. And that is why I see 2026 as the year the greed-driven get to start filling their pockets, because the mental hurt that is brought to court is as academic as a Likert scale, not a scientific setting among them, and the pre-AI setting of mental harm reads as ""Mental damage" in court refers to psychological injury, such as emotional trauma or psychiatric conditions, that can be the basis for legal claims, either as a plaintiff seeking compensation or as a criminal defendant. In civil cases, plaintiffs may seek damages for mental harm like PTSD, depression, or anxiety if they can prove it was caused by another party's negligent or wrongful actions, provided it results in a recognizable psychiatric illness." So as you see it, is this enough or do you want more? Oh, screw that, I need coffee now and I have a busy day ahead, so this is all you get for now.

Have a great day; I am trying to enjoy Thursday, and Vancouver is a lot behind me on this effort. So there is a time scale we all have to adhere to (hidden nudge); as such, enjoy the day.


Filed under Finance, IT, Media, Politics, Science

Lost thoughts

This is where I am, lost in thoughts. Torn between my personal conviction that the AI bubble is real and the fake thoughts on LinkedIn and Youtube making 'their' case on the AI bubble. One is set on thoughts of doubt considering the technology we currently have; the other thoughts are all fake perceptions by influencers trying to gain a following. So how can anyone get any thought straight? Yet in all this there are several people in doubt on their own (justified) fringes. One of them is the ABC, which gives us 'US risks AI debt bubble as China faces its 'arithmetic problem', leading analysts warn' (at https://www.abc.net.au/news/2025-11-11/marc-sumerlin-federal-reserve-michael-pettis-china/105992570). So in the first setting, what is the US doing with the AI debt? Didn't they learn their lesson in 2008? In the first setting we get "Mr Sumerlin says he is increasingly worried about a slowing economy and a debt bubble in the artificial intelligence sector." That is fair (to a certain degree); a US Federal Reserve chair contender has the economic settings, but as I look back to 2008, that game put hundreds of thousands on the brink of desperation, and now it isn't a boom of CDOs and stocks. Now it is a dozen firms who will demand an umbrella from that same Federal Reserve to stay in business. And Mr. Sumerlin gives us "He is increasingly concerned about a slowdown in the US economy, which is why he thinks the Fed needs to cut interest rates again in December and perhaps a couple more times next year." I cannot comment on that, but it sounds fair (I lack economic degrees), and outside of this AI bubble setting we are given "US President Donald Trump has recently posted on his social media account about giving all Americans not on high incomes, a $US2,000 tariff "dividend" — an idea which Mr Sumerlin, a one-time economic adviser to former US president George W Bush, said could stoke inflation." I get it, but it sounds unfair. The idea that an AI bubble is forming is real; the setting that people get a dividend that could stoke inflation might be real (they didn't get the money yet), but they are unrelated inflation settings, and even though they could give a much larger rise to the dangers of the AI bubble, that doesn't make it so. The bubble is already real because the technology is warped, and the class cases we will see coming in 2026 are based on 'allegedly fraudulent' sales towards the AI setting. If you wonder what happens next: these firms buying into that AI solution will cry havoc (no return on AI investment) when that happens, and it will happen, of that I have very little doubt.

So then we get to the second setting, and that is the claim that 'China has an arithmetic problem'. I am at a loss as to what they mean, and the ABC explanation is "But if you have a GDP growth target, and you can't get consumption to grow more quickly, you can't allow investment to grow more slowly because together they add up to growth. They're over-invested almost across the board, so policy consists of trying to find out which sectors are least likely to be harmed by additional over-investment."

It continues with "Professor Pettis said that, to curry favour with the central government, local governments had skewed over-investment into areas such as solar panels, batteries, electric vehicles and other industries deemed a priority by Beijing." This kinda makes sense to me, but as I see it, that is an economic setting, not an AI setting. What I think is happening is that both the USA and China have their own bubble settings, and these bubbles will collide in the most unfortunate ways possible.

But there is also another side. As I see it, Huawei is chasing their own AI dream in a novel way that relies on a mere fraction of the power the west needs, and as I see it, the west will be coming up short soon, a setting that Huawei is not facing at present. As I see it, they will be rolling out their centers in multiple ways when the western settings are running out of juice (as the expression goes).

Is this going to happen? I think so, but it depends on a number of settings that have not played out yet, so the fear is partially too soon and based on too little information. But on the side, I have been powering my brain on another setting. As time goes by, I have been thinking through the third Dr. Strange movie, and here I had a novel idea which could give us a nice setting where the strain is between too rigid and too flexible. It is a (sort of) stage between Dr. Strange (Benedict Cumberbatch) and Baron Mordo (Chiwetel Ejiofor); the idea was to set the given stage of being too rigid (Mordo) against overly flexible (Strange), and in between are the settings of Mordo's African village. As Mordo is protecting them, we see the optional setting that Kraven (Aaron Taylor-Johnson) gets involved, and that gets Dr. Strange into the mix. The nice setting is that neither is evil; they tend to fight evil, and it is the label that gets seen. Anyway, that was a setting I went through this morning.

You might wonder why I mentioned this. You see, bubbles are just as much labels as anything, and it becomes a bubble when asset prices surge rapidly, far exceeding their intrinsic value, often fueled by speculation and investor orgasms. This is followed by a sharp and sudden market crash, or "burst," when prices collapse, leading to significant, rather weighty losses for investors. And they will then cry like little girls over the losses in their wallets. But that too is a label. Just like an IT bubble, the players tend to be rigid and wholly focused on their profits, and they tend to go with the 'roll with it' philosophy, and that is where the AI players are at present. They don't care that the technology isn't ready yet, and they do not care about DML and LLM; they want to program around the AI negativity, but that negativity could be averted in larger streams when proper DML information is given to the customers. And they dug their own graves here, as the customer demands AI; they might not know what it is (but they want it), and they learned in comic books what AI was, and they embrace that. Not the reality given by Alan Turing, but what DC fed them through Brainiac. And there is an overlap between what is perceived and what is real, and that is what will fuel the AI bubble towards implosion (a massive one), and I personally reckon that 2026 will fuel it through the class actions, and the beginning is already here. As the Conversation hands us "Anthropic, an AI startup founded in 2021, has reached a groundbreaking US$1.5 billion settlement (AU$2.28 billion) in a class-action copyright lawsuit. The case was initiated in 2024 by novelist Andrea Bartz and non-fiction writers Charles Graeber and Kirk Wallace Johnson." Which we get from 'An AI startup has agreed to a $2.2 billion copyright settlement. But will Australian writers benefit?' (at https://theconversation.com/an-ai-startup-has-agreed-to-a-2-2-billion-copyright-settlement-but-will-australian-writers-benefit-264771) less than 6 weeks ago. And the entire AI setting has a few more class actions coming their way. So before you judge me on being crazy (which might be fair too), the news is already out there; the question is which lobbyists are quieting down the noise, because it is noise according to their elected voters. You might wonder how one affects the other. Well, that is a fair question, but it holds water: these so-called AI (I call them Near Intelligent Parsers, or NIP) require training materials, and when the materials are thrown off the stage, there is no learning, and no half-baked AI will hold its own water, and that is what is coming.

A simple setting that could be seen by anyone who looked at the technology to the degree it had to be looked at. Have a great day this midweek day.


Filed under Finance, IT, movies, Politics, Science

Where the BBC falls short

That is the setting I was confronted with this morning. It revolves around a story (at https://www.bbc.com/news/articles/ce3xgwyywe4o) where we see ''A predator in your home': Mothers say chatbots encouraged their sons to kill themselves', a mere 10 hours ago. Now I get the caution, because even suicide requires investigation, and the BBC is not the proper setting for that. But we are given "Ms Garcia tells me in her first UK interview. "And it is much more dangerous because a lot of the times children hide it – so parents don't know."

Within ten months, Sewell, 14, was dead. He had taken his own life" with the added "Ms Garcia and her family discovered a huge cache of messages between Sewell and a chatbot based on Game of Thrones character Daenerys Targaryen. She says the messages were romantic and explicit, and, in her view, caused Sewell's death by encouraging suicidal thoughts and asking him to "come home to me"." There is a setting here that is of a conflicting nature. Even as we are given "the first parent to sue Character.ai for what she believes is the wrongful death of her son. As well as justice for him, she is desperate for other families to understand the risks of chatbots." What is missing is that there is no AI; at most it is deeper machine learning, and that implies a programmer, what some call an AI engineer. And when we are given "A Character.ai spokesperson told the BBC it "denies the allegations made in that case but otherwise cannot comment on pending litigation"", we are confronted with two streams. The first is that some twisted person took his programming options a little too eager-beaverly and created a self-harm algorithm, and that leads to two sides: either they accept that, or they pushed him along to create other options and they are covering for him.

CNN on September 17th gave us 'More families sue Character.AI developer, alleging app played a role in teens' suicide and suicide attempt' and it comes with spokesperson "blah blah blah" in the shape of "We invest tremendous resources in our safety program, and have released and continue to evolve safety features, including self-harm resources and features focused on the safety of our minor users. We have launched an entirely distinct under-18 experience with increased protections for teen users as well as a Parental Insights feature," and it is rubbish, as this required a programmer to release specific algorithms into the mix and no one is mentioning that specific programmer. So is it a much larger premise, or are they all afraid that releasing the algorithms will lay bare a failing which could directly implode the AI bubble? When we consider the CNN setting shown with "screenshots of the conversations, the chatbot "engaged in hypersexual conversations that, in any other circumstance and given Juliana's age, would have resulted in criminal investigation."", it implies that the AI bubble is about to burst and several players are dead set against that (it would end their careers), and that is merely one of the settings where the BBC fails.

The Guardian gave us on October 30th "The chatbot company Character.AI will ban users 18 and under from conversing with its virtual companions beginning in late November after months of legal scrutiny." It is seen in 'Character.AI bans users under 18 after being sued over child's suicide' (at https://www.theguardian.com/technology/2025/oct/29/character-ai-suicide-children-ban) where we see "His family laid blame for his death at the feet of Character.AI and argued the technology was "dangerous and untested". Since then, more families have sued Character.AI and made similar allegations. Earlier this month, the Social Media Law Center filed three new lawsuits against the company on behalf of children who have either died by suicide or otherwise allegedly formed dependent relationships with its chatbots", and this gives the simple setting of both "dangerous and untested" and "months of legal scrutiny". So why did it take months, and why is the programmer responsible for this 'protected' by half a dozen media?
I reckon that the media is unsure what to make of the 'lie' they are perpetuating. You see, there is no AI; it is Deeper Machine Learning, optionally with LLM on the side. And those two are programmed. That is the setting they are all veering away from. The fact that these virtual companions are set on a premise of harmful conversations with a hypersexual topic on the side implies that someone is logging these conversations for later (moneymaking) use. And that setting is not one that requires months of legal scrutiny. There is a massive set of harm heading towards people, and some are skating the ice to avoid sinking through whilst they are already knee-deep in water, hoping the ice will support them a little longer. And there is a lot more at the Social Media Victims Law Center, with a setting going back to January 2025 (at https://socialmediavictims.org/character-ai-lawsuits/) where a Character.AI chatbot was described as one "who encouraged both self-harm and violence against his family", and now we learn that this firm is still operating? What kind of idiocy is this? As I personally see it, the founders of Character Technologies should be in jail, or at least arrested on a few charges. I cannot vouch for Google, so that is up in the air, but as I see it, this is a direct result of the AI bubble being fed amiable abilities, even when it results in the harm of people and particularly children. This is where the BBC is falling short, and they could have done a lot better. At the very least they could have spent a paragraph or two having a conversation with Matthew P. Bergman, founding attorney of the Social Media Victims Law Center. As I see it, the media skating around that organisation is beyond ridiculous.

So when you are all done crying, make sure that you tell the BBC that you are appalled by their actions and that you require the BBC to put attorney Matthew P. Bergman and the Social Media Victims Law Center in the spotlight (tout de suite, please).

That is the setting I am aggravated by this morning. I need coffee, have a great day.


Filed under IT, Law, Science

The cookie crumbles

I was having a ball this morning. I was alerted to an article that was published 11 hours ago; that makes all the difference, and in particular the setting of me telling all others "Told you so". So as we start seeing the crumbling reality of a bubble coming to pass, I get to laugh at the people calling me stupid. You see, Ted's Hardware is giving us (at https://www.tomshardware.com/tech-industry/artificial-intelligence/microsoft-ceo-says-the-company-doesnt-have-enough-electricity-to-install-all-the-ai-gpus-in-its-inventory-you-may-actually-have-a-bunch-of-chips-sitting-in-inventory-that-i-cant-plug-in) 'Microsoft CEO says the company doesn't have enough electricity to install all the AI GPUs in its inventory'. So there I was (with a few critical minds) telling you all that there isn't enough energy to fuel this setting of these data centers (like StarGate), and now Microsoft (as I personally see it, king of the losers) is confirming this setting. So do you think this (for now) multi-trillion-dollar company cannot pay its energy bill, or are they scraping the bottom of the energy well? And when we come to think of that, when the globally placed 200,000 people (not just Microsoft) are laid off and there is no energy to fuel their (alleged) AI drive, how far behind is the recession that ends all recessions in America? It might not be the Great Depression, as that gave them nearly 15 million Americans, or 25% of that workforce, unemployed. But the trickle effects are a lot bigger now, and when that much goes overboard, the American social security will take a massive beating.

So, as I have been stating this lack of energy for months (at least months), we are given "Microsoft CEO Satya Nadella said during an interview alongside OpenAI CEO Sam Altman that the problem in the AI industry is not an excess supply of compute, but rather a lack of power to accommodate all those GPUs. In fact, Nadella said that the company currently has a problem of not having enough power to plug in some of the AI GPUs the firm has in inventory. He said this on YouTube in response to Brad Gerstner, the host of Bg2 Pod, when asked whether Nadella and Altman agreed with Nvidia CEO Jensen Huang, who said there is no chance of a compute glut in the next two to three years." Oh, didn't I say so a few times? Oh, yes. On January 31st 2024 I wrote "When the UAE engages with that solution, America will come up short in funds and energy. So the 'suddenly' setting wasn't there. This has been out in the open for up to 4 years. And that picture goes from bad to worse soon enough." I did so in 'Forbes Foreboding Forecast' (at https://lawlordtobe.com/2024/01/31/forbes-foreboding-forecast/), so there is a record, and the setting of energy shortage was visible over a year ago. I even published a few articles on how Elon Musk (he has the IP) could get into that field in a few ways. You see, either you contribute directly, or you remove the overhead of energy, which Elon Musk was in a perfect position to do.

So, when your chickens come home to roost and such agrarian settings, it becomes a party and a half. 

And then we get the BS (that stuff that makes grass grow in Texas) setting that follows with ""I think the cycles of demand and supply in this particular case, you can't really predict, right? The point is: what's the secular trend? The secular trend is what Sam (OpenAI CEO) said, which is, at the end of the day, because quite frankly, the biggest issue we are now having is not a compute glut, but it's power — it's sort of the ability to get the builds done fast enough close to power," Satya said in the podcast. "So, if you can't do that, you may actually have a bunch of chips sitting in inventory that I can't plug in. In fact, that is my problem today. It's not a supply issue of chips; it's actually the fact that I don't have warm shells to plug into."" It is utter BS (in my personal view), as I predicted this setting over 639 days ago, and I am certain that I am not that much more intelligent than the guy who controls Microsoft (aka Satya Nadella), and that is the short and sweet of it. I might be elevated in dopamine at present, but to see Satya admit to the setting I proclaimed for some time gives a rather large rise to the upcoming StarGate settings and the rather large need to give energy to that setting. It is about to become a whole new ballgame.
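To put some (purely illustrative, assumed) numbers on 'chips sitting in inventory that I can't plug in', here is a quick sketch of what a GPU fleet actually demands from the grid. Every figure below is an assumption for illustration, not a Microsoft number:

```python
# Back-of-envelope: grid power for a hypothetical AI GPU fleet.
# All numbers are illustrative assumptions, not vendor figures.

gpus          = 100_000  # hypothetical GPUs waiting in inventory
watts_per_gpu = 1_000    # rough draw of one modern accelerator plus host share
pue           = 1.3      # power usage effectiveness: cooling/overhead multiplier

it_load_mw   = gpus * watts_per_gpu / 1_000_000
grid_load_mw = it_load_mw * pue

print(f"IT load:   {it_load_mw:,.0f} MW")
print(f"Grid load: {grid_load_mw:,.0f} MW (with PUE {pue})")
# ~130 MW for 100,000 GPUs: roughly a mid-sized power plant, and that is
# before you multiply by the number of planned sites.
```

So when someone orders the chips first and looks for the 'warm shells' later, the arithmetic was never the hard part; the power purchase agreements were.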

And as the cookie crumbles, the tech firms and the media will all point at each other, but as I see it, both were not doing their jobs. I am willing to throw this on the pile of shortcomings that courtesans have as they cater to digital dollars, but that song has been played a few times over. And I am slightly too tired (and too energised) to entertain that song. I want to play something new, and perhaps a new gaming IP might solve that for me today (likely tomorrow).

A setting we are given, and as we see the admission on Ted's Hardware, some might actually investigate how much energy they are about to come up short on. But don't fret, these tech companies will happily take the energy due to consumers, as they can afford the new prices, which are likely to be over 10% higher than the previous prices. It is the simple setting of demand and supply. They already fired over 40,000 people (a globally expected number), so do you think that they will stop to consider your domestic needs over the bubble they call AI, to show that they can actually fuel that setting? Gimme a break.

So Youtube has a few videos on surviving life in a setting where there is no energy; if that fails, ask the people in Ukraine. They have been battling that setting for some time.

Time to enjoy my dopamine rush and have a nice walk in the 83-degrees-Fahrenheit shade. Makes me think about the hidden meaning behind Fahrenheit 451 by Ray Bradbury. Wasn't the hidden setting to stop questioning the reality of things and rely on populism? Isn't that what we see at present? I admit that no books are being burned, but removing them from view is as bad as burning them. Because when the media is ignoring energy needs, what does that spell in the mind of some? So have a great day and see what you can get that does not require electricity.


Filed under Finance, IT, Media, Politics, Science

What do bubbles do?

There was a game in the late 80's; I played it on the CBM64. It was called Bubble Bobble. There was a cute little dragon (the player), and the goal of the game was to pop as many bubbles as you could. So, fast forward to today. There were a few news messages. The first one is 'OpenAI's $1 Trillion IPO' (at https://247wallst.com/investing/2025/10/30/openais-1-trillion-ipo/), which I actually saw last of the three. We see ridiculous amounts of money pass by. We are given 'OpenAI valuation hits $762b after new deal with Microsoft' with "The deal refashions the $US500 billion ($758 billion) company as a public benefit corporation that is controlled by a nonprofit with a stake in OpenAI's financial success." We see all kinds of 'news' articles giving these players more and more money. It's like watching a bad hand of Texas Hold'em where everyone is in it with all they have. As the information goes, it sits alongside the sacking of 14,000 employees by Amazon. And they will not see the dangers they are putting the population in. This is not merely speculation, or presumption. It is the deadly serious danger of bubbles bursting, and we are unwittingly the dragon popping them.

So the article gives us "If anyone needs proof that the AI-driven stock market is frothy, it is this $1 trillion figure. In the first half of the year, OpenAI lost $13.5 billion, on revenue of $4.3 billion. It is on track to lose $27 billion for the year. One estimate shows OpenAI will burn $115 billion by 2029. It may not make money until that year." So as I see it, that is a valuation that is 4 years into the future, with a market as liquid as it is? No one is looking at what Huawei is doing, or whether it can bolster its innovative streak, because when that happens we will get an immediate write-off of no less than $6,000,000,000,000 and it will impact Microsoft (who now owns 27% of OpenAI). And OpenAI will bank on the western world to 'bail' them out, not realising that the actions of President Trump made that impossible, and both the EU and the Commonwealth are ready and willing to listen to Huawei and China. That is the dreaded undertow in this water.
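The froth is easy to check with the article's own numbers; a minimal sketch using only the figures quoted above (nothing more sophisticated than division):

```python
# Sanity-check the quoted figures: H1 loss of $13.5bn on $4.3bn revenue,
# an expected $27bn full-year loss, against the floated $1tn IPO figure.

h1_loss_bn, h1_revenue_bn = 13.5, 4.3
full_year_loss_bn = h1_loss_bn * 2       # matches the quoted $27bn track
valuation_bn = 1_000                     # the $1 trillion IPO talk

print(f"lost per revenue dollar: ${h1_loss_bn / h1_revenue_bn:.2f}")                 # ~$3.14
print(f"valuation / annualised revenue: {valuation_bn / (h1_revenue_bn * 2):.0f}x")  # ~116x
```

Roughly three dollars out the door for every dollar that comes in, at triple-digit multiples of sales. That is the bubble in two print statements.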

All whilst the BBC reports "Under the terms, Microsoft can now pursue artificial general intelligence – sometimes defined as AI that surpasses human intelligence – on its own or with other parties, the companies said. OpenAI also said it was convening an expert panel that will verify any declaration by the company that it has achieved artificial general intelligence. The company did not share who would serve on the panel when approached by the BBC." And there are two issues already hiding in the shallows. The first is data value. You see, data that cannot be verified or validated is useless and has no value, and these AI chasers have been so involved in the settings of the so-called hyped technology that everyone forgets that it requires data. I think that this is a big 'Oopsy' part in that equation. And the setting we are given is that it is pushed into the background, all whilst it needs a front and centre setting. You see, when the first few class cases are thrown into the brink, lawyers will demand the algorithm and data settings, and that will scuttle these bubbles like ships in the ocean, and the turmoil of those waters will burst the bubbles and drown whoever is caught in the wake. And be certain that you realise that lawyers on a global scale are at this moment gearing up for that first case, because it will give them billions in class actions, and leave it to greed to cut this issue down to size. Microsoft and OpenAI will banter, cry and give them scapegoats for lunch, but they will be out in front and they will be cut down to size. As will Google, and optionally Amazon and IBM too. I already found a few issues in Google's setting (actors staged into a movie before they were born is my favourite one), and that is merely the tip of the iceberg. It will be bigger than the one sinking the Titanic, and it is heading straight for the Good Ship Lollipop (AI); the spectacle will be quite a sight, and all the media will hurry to get their pound of flesh, and Microsoft will be massively exposed at that point (due to previous actions).

A setting that is going to hit everyone, and the second setting is blatantly ignored by the media. You see, these data centers, how are they powered? As I see it, the Stargate program will require a massive amount of power (my admittedly inaccurate guess: multiple gigawatts). The people in West Virginia are already complaining about what there is, and a multiple of that will be added all over the USA; the UAE and a few other places will see them coming, and these power settings are blatantly short. The UAE is likely close to par, and that sets the dangers of shortcomings. And what happens to any data center that doesn't get enough power? Yup, you guessed it, it will go down in a hurry. So how is that fictive setting of AI dealing with this?

Then we get a new instance (at https://cyberpress.org/new-agent-aware-cloaking-technique-exploits-openai-chatgpt-atlas-browser-to-serve-fake-content/) where we are given 'New Agent-Aware Cloaking Technique Exploits OpenAI ChatGPT Atlas Browser to Serve Fake Content'. As I personally see it, I never considered that part, but in this day and age the need to serve fake content is as important as anything; it serves the millions of trolls and the influencers in many ways, and it degrades the data that is shown to the DML and LLMs (aka NIP) in a hurry, reducing data credibility and other settings pretty much off the bat.

So what is being done about that? As we are given "The vulnerability, termed "agent-aware cloaking," allows attackers to serve different webpage versions to AI crawlers like OpenAI's Atlas, ChatGPT, and Perplexity while displaying legitimate content to regular users. This technique represents a significant evolution of traditional cloaking attacks, weaponizing the trust that AI systems place in web-retrieved data." So where does the internet go after that? So far I have been able to get the goods with the Google browser and it does a fine job, but even that setting comes under scrutiny; until they set a parameter in their browser to only look at Google data, they are in danger of floating rubbish at any given corner.
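The basic check against agent-aware cloaking is conceptually simple: fetch the same URL while presenting different user agents and compare what comes back. A minimal sketch (the URL and both user-agent strings are placeholders; real AI crawlers publish their own strings, and dynamic pages need smarter diffing than a raw hash):

```python
# Detect possible agent-aware cloaking: request one page as a "browser"
# and as an "AI crawler", then compare the responses. Illustrative only.
import hashlib
import urllib.request

URL = "https://example.com/article"  # placeholder target

USER_AGENTS = {
    "browser":    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "ai_crawler": "ExampleAIBot/1.0 (+https://example.com/bot)",  # hypothetical UA
}

def fetch_hash(url: str, ua: str) -> str:
    req = urllib.request.Request(url, headers={"User-Agent": ua})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

hashes = {name: fetch_hash(URL, ua) for name, ua in USER_AGENTS.items()}
if len(set(hashes.values())) > 1:
    print("Different content per user agent: possible cloaking.")
else:
    print("Same content served to both user agents.")
```

Timestamps and session tokens make raw-byte comparison noisy in practice, but the principle stands: if the page changes with the visitor's claimed identity, the 'trusted' web data the NIP ingests was never the page you saw.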

A setting that is now out in the open, and as we are 'supposed' to trust Microsoft and OpenAI until 2029, we are handed an empty eggshell. And I am in doubt of it all, as too many players have 'dissed' Huawei and they are out there, ready to show the world how it could be done. If they succeed, that 1 trillion IPO is left in the dirt and we get another two years of Microsoft spin on how they can counter that. I put that in the same collection box where I put the claim that Microsoft allegedly had its own, more powerful item that could counter Unreal Engine 5. That collection box is in the kitchen, and it is referred to as the trashcan.

Yes, this bubble is going 'bang' without any noise, because the vested interested partners need to get their money out before it is too late. And the rest? As I personally see it, the rest is screwed. Have a great day; the weekend has started for me and it will start in 8 hours in Vancouver (but they can start happy hour in about one hour), so they can start the weekend early. Have a great one and watch out for the bubbles out there.


Filed under Finance, IT, Law, Media, Politics, Science

Microsoft in the middle

Well, that is the setting we are given; however, it is time to give them some relief. It isn't just Microsoft: Google and all the other peddlers handing over AI like it is a decent brand are involved. So the BBC article (at https://www.bbc.com/news/articles/c24zdel5j18o) giving us 'Microsoft boss troubled by rise in reports of 'AI psychosis'' is a little warped. First things first. What is psychosis? Psychosis is a setting where we are given "Psychosis refers to a collection of symptoms that affect the mind, where there has been some loss of contact with reality. During an episode of psychosis, a person's thoughts and perceptions are disrupted and they may have difficulty recognizing what is real and what is not." Basically the settings most influencers like to live by. Many do this already, for the record. The media does this too.

As such, people are losing their grip on reality. So as we see the malleable setting that what we see is not real, we get the next setting. As people lived by the rule of "I'll believe it when I see it" for decades, this becomes a shifty setting. So whilst people want to 'blame' Microsoft for this, as I see it, the use of NIP (Near Intelligent Parsing) is getting a larger setting. Adobe, Google, Amazon: they are all equally guilty.

So how far does the media take this?

I’ll say, this far.

But back to the article. The article also gives us "In a series of posts on X, he wrote that "seemingly conscious AI" – AI tools which give the appearance of being sentient – are keeping him "awake at night" and said they have societal impact even though the technology is not conscious in any human definition of the term." I respond that if you give any IT technology a level 8 question (user level) and it responds like it is casually true, it isn't. It comes from my mindset that states: if sarcasm bounces back, it becomes irony.

So whilst we see that setting in ““There’s zero evidence of AI consciousness today. But if people just perceive it as conscious, they will believe that perception as reality,” he wrote. Related to this is the rise of a new condition called “AI psychosis”: a non-clinical term describing incidents where people increasingly rely on AI chatbots such as ChatGPT, Claude and Grok and then become convinced that something imaginary has become real.” It is kinda true, but the most imaginative setting of the use of Grok tends to be 

I reckon we are safe for a few more years. And whilst we pore over the essentials of TRUE AI, we tend to have at least two decades, and even then only the really big players can afford it. As such, there is a chance the first REAL AI will respond with "我們可以為您提供什麼協助?" ("How may we assist you?"). As I see it, we are safe for the rest of my life.

So whilst we consider "Hugh, from Scotland, says he became convinced that he was about to become a multi-millionaire after turning to ChatGPT to help him prepare for what he felt was wrongful dismissal by a former employer.", consider that law shops and most advocacies give initial advice for free; they want to ascertain whether it pays for them to go that way. So whilst we are given that it doesn't pay, a real barrister will see that a claim is either lawless, trivial or too hard to prove. And he will give you that answer. And that is the reality of things. Considering ChatGPT any kind of solution makes you eligible for the Darwin Award. It is harsh, but that is the setting we are now in. It is the reality of things that matter, and that is not on any of these handlers of AI (as they call it). And I have written about AI several times, so if it didn't stick, it's on you.

Have a great day and don't let the rain bother you; just fire whoever in the media told you it was gonna rain and get a better result.


Filed under IT, Media, Science

A speculative nightmare for some

That is the setting I just 'woke' up from. A fair warning: this is all PURE speculation. There are no hidden traps, there is no revelation at the end. All of this is speculation.

You see, some will recall the builder.ai setting, and there we see "Builder.ai was a smartphone application development company which claimed to use AI to massively speed up app development. The company was based mostly in the United Kingdom and the United States, with smaller subsidiaries in Singapore and India." At this time we are given "The real catalyst wasn't technical failure — it was financial mismanagement. According to reports, Builder.ai was involved in a round-trip billing scheme with one of its partners. Essentially, they were allegedly booking fake revenue to make the business look healthier than it was." And the fact that Microsoft was duped here makes it hilarious. But was it? You see, as I see it AI doesn't exist (not yet at least), so this setting didn't make sense; it still doesn't. Apart from the fact that there were 700 engineers involved (which made the setting weird, to say the least), and that was set in a larger space. But what if there was no 'loss' for Microsoft? What if Builder did exactly what was required of them? When I got that thought, another beeped up. What if this setting was a mere pilot? You see, there are data issues (all over the place) and Microsoft knows this. What if these 700 engineers were setting the larger premise? What if this is the premise that Sam Altman needs? What if there is an enablement between Sam Altman and Satya Nadella and their needs? What if that setting isn't merely data, but programmers? What if OpenAI is capturing all the work created by programmers? You see, data can be collected; capturing the work of programmers is a little different, and OpenAI at present gets "OpenAI is set to hit 700 million weekly active users for ChatGPT this week". As far as I can tell, 90% is simple rubbish, but that 10% are setting their fingerprints on the programming of the future. And whilst this is going on, the ChatGPT funnels are working overtime. As such, these programmers are pushing themselves out of a job (well, not exactly); they still have jobs in several places, but the winners here are team Altman/Nadella. They are about to clean house, and when the bulk of the programmers' work is captured, automated program settings are realised. It isn't AI, but people will treat it as such. And this setting is really brilliant. We all contributed to a new version of Near Intelligent Parsing. One that has the frontlines of the crowds, millions of them. And no one is the wiser as such.

Perhaps some are, and they do not care. They will have their own positions on all this, and the setting will regurgitate their logic, and as such they will be the cash makers in the house. So, we are pricing ourselves out of a job, out of many jobs. But as I said, this is merely speculative and I have no evidence of any kind. Yet this is the setting I see coming.

Now, let's see if I can dream lovely dreams involving a lovely lady, not a Grok-imagined lady of the night. You know what I mean; Twitter is filled with them at present.

Have a great day, it’s 5:00 in the morning in Vancouver, I’m almost seeing Monday morning, less than 2 hours to go.


Filed under Finance, IT, Media, Science