That is what we look for and I found another setting in something called Airport Technology. You see, we see ‘King Salman International Airport, Saudi Arabia’ (at https://www.airport-technology.com/projects/king-salman-international-airport-saudi-arabia/) and the facts are clear. An airport that covers about 57km², positioning it among the largest airports by footprint, and we are told “KSIA is expected to handle up to 120 million travelers by 2030, and up to 185 million passengers and 3.5 million tonnes of cargo by 2050”. But I saw more. You see, on the 26th of September I wrote ‘That one idea’ (at https://lawlordtobe.com/2025/09/26/that-one-idea/) where I saw the presentation of a Near Intelligent Parsing (NIP) thought that could revolutionise lost and found settings in airports, railway stations and a few other places. The instant winners of this idea would be Dubai International, Abu Dhabi International, London Heathrow and several other places, and now also King Salman International Airport (KSIA). I would make some alterations to it all. Instead of entering it all afterwards, use PDAs to record the data as it happens and, when it is all entered, use what they use in Australian hospitals for wristbands: print that data and attach it to whatever is found. If this is properly done, it will be done in mere minutes and within an hour people can look for the items; they could pick them up on the way back, and in some cases they could be delivered to their hotel. This would be customer service of a much higher degree. 
And as I see it, the five airports (namely King Khalid International Airport, King Abdulaziz International Airport, King Salman International Airport, Dubai International Airport and Zayed International Airport) could become the frontrunners to make a Near Intelligent Parsing (NIP) solution (not calling it a solution based on DML/LLM AI) that could be the next solution for airports all over the world. And there is some personal gratification in seeing America talk about how great their AI solutions are, whilst the little guy in Australia found a solution and hands it over to either Saudi Arabia or the UAE. A solution that was out there in the open, and players like Microsoft (Google and Amazon too) merely left it lying on the floor even though the elements were clearly there, so I hand it over to these two hungry places with the need to see what it can offer them, and in this it isn’t mine. It was presented by Roger Garcia (from Interworks) and the printing setting is already out there. Merely the joining of two solutions and they are done. So as I see it, another folly for Microsoft (honestly, Google and Amazon too). This setting could have been seen by a larger number of players and they all seemingly fell asleep on the job. But I know what Saudis and Emiratis do when they see something that will work for them. They get really active. And so they should.
And consider that these airports will cater to close to half a billion travelers annually, and as such they will need a much better solution than whatever they have at present, and there is the setting for Interworks. And when these solutions set the station towards delivering what was lost, the quality scores will go skywards, and that is the second setting where the west is bottoming out. One presentation turned the option from grind to red carpet walking. A setting overlooked by those captains of industry.
Good work guys!
So whilst I start preparing for the next IP thought I am having, there is still some space to counter the US and its flaming EU critique. Let us remind America that the EU was the collection of ideas from American retailers who were tired of dealing with all those currencies, and in the late ’80s AMERICANS decided to sell the Euro to Europeans, all because they couldn’t sort out their currency software (or currency logistics), and now that it starts working against them they cry like little girls. Go cry me a river. In the meantime I will put ideas worth multiple millions online and let them fly for the revenue hungry salespeople (and consultants). In this case it wasn’t my idea, I merely adjusted an idea from Interworks and slapped some IP (owned by others) onto it to make a more robust solution. I merely hope to positively charge my karma for when it matters.
Have a great day, except Vancouver, they are still somewhere yesterday.
That is a setting I never really contemplated, but the Guardian did and they did a terrific job; they even had a reference to the 49ers, which will make Jeremy Renner happy. The article ‘The question isn’t whether the AI bubble will burst – but what the fallout will be’ by Eduardo Porter (at https://www.theguardian.com/technology/2025/dec/01/ai-bubble-us-economy) hands us a few sides, a few I never considered as I was looking at the techno stuff, but here we see: “300,000 people flocked there from 1848 to 1855, from as far away as the Ottoman Empire. Prospectors massacred Indigenous people to take the gold from their lands in the Sierra Nevada mountains. And they boosted the economies of nearby states and faraway countries from whence they bought their supplies.”
Which gives root to the expression 49er, and it continues by giving us “Gold provided the motivation for California – a former Mexican territory then controlled by the US military – to become a state with laws of its own. And yet, few “49ers” as prospectors were known, struck it rich. It was the merchants selling prospectors food and shovels who made the money. One, a Bavarian immigrant named Levi Strauss who sold denim overalls to the gold bugs passing through San Francisco, may be the most remembered figure of his day.”
And then we get the first sliver “How else to explain Nvidia’s stock price, which more than doubled from April to November, based entirely on the expectation, nay hope, that AI will produce a super-intelligence that can do everything humans do but better. Nvidia – like Levi Strauss back in the day – is at least selling something: computer chips. The valuations of many of the other AI plays – like Open AI or Anthropic – are based largely on the dream.”
But there is a missing cog: this technology needs data storage, and that is where I saw the failing of others and the failings of those overlooking data technologies. Oracle is intrinsically connected to that, Azure needs it, Snowflake prefers it and pretty much every data vendor is connecting to Oracle to get it all done in the background, and that is the sliver. Oracle is intrinsically connected to it all and it is the tamer of the data beast, or better stated, the data demon. As Oracle brings out tools and optionally data settings within their AI storage settings to handle validation and verification, all others will need to adhere better and deeper to the Oracle foundation to even survive. Pretty much all the sources that see the dangers of what some call AI (and is clearly nothing better than a DML/LLM engine) will see that these two elements are essential to get the LLM engine to do anything that matters, and that is where the bonus of Oracle currently resides (as I presumptuously see it). To show this, I will take you back to 1984.
See here, this is what chess computers looked like. You press the chess piece you want to move and you push the square where it lands. That is the foundation of the chess computer. In the ‘underground’ of that chessboard are (figuratively speaking) two chips. One had the knowledge of chess; the second chip (mainly memory) held every chess match known to mankind (basically all games all grandmasters have ever played). The program sees what moves are made, that setting is translated to a ‘position base’, and it looks at all the matches from which it can foresee what moves are coming. This is great for the player, as they now need to make an illogical move to throw the computer’s thinking off balance and make it their bitch. This was pretty much the first stage of Machine Learning, and even though today’s computers are more clever, their resolution is in no way better. They can only build on the foundation of what they learned; that is the simplicity of knowing that AI doesn’t yet exist.
So back to the story: “As I pointed out in my last column about AI, Gita Gopinath, former chief economist of the International Monetary Fund, calculated that a stock market crash equivalent to that which ended the dot-com boom would erase some $20tn in American household wealth and another $15tn abroad, enough to strangle consumer spending and induce a recession.” And I have no way of knowing that setting, but as I see it, like Levi Strauss and the makers of bubbles (like in image one), someone has to supply the soap water and, more important, the jeans so as not to put one’s ass out to frolic, and in that second setting Oracle comes in. And even as I see the ‘panic drivers, saying that Oracle is dangerous’, there is another setting. Whatever comes out of this, whatever survives, mostly survives on Oracle solutions. And that is what is left unspoken. Should Oracle add the Validation and Verification tables, they will be the only one raking in the gold when True AI comes, because it is not merely the missing part I discussed earlier; someone needs to set the record straight on what is optionally to be trusted, and that is where Oracle sets the mark.
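What such ‘Validation and Verification tables’ might look like is of course Oracle’s to decide; the sketch below only illustrates the idea, using Python’s bundled SQLite as a stand-in for the database, with table and column names that are entirely my own invention:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE facts (
    fact_id   INTEGER PRIMARY KEY,
    payload   TEXT NOT NULL
);
-- Validation: does the record satisfy its own rules (format, ranges)?
CREATE TABLE validation (
    fact_id   INTEGER REFERENCES facts(fact_id),
    rule      TEXT NOT NULL,
    passed    INTEGER NOT NULL        -- 1 = passed, 0 = failed
);
-- Verification: does the record match an external, trusted source?
CREATE TABLE verification (
    fact_id   INTEGER REFERENCES facts(fact_id),
    source    TEXT NOT NULL,
    confirmed INTEGER NOT NULL
);
""")

conn.execute("INSERT INTO facts VALUES (1, 'KSIA footprint: 57 km2')")
conn.execute("INSERT INTO validation VALUES (1, 'area_is_positive', 1)")
conn.execute("INSERT INTO verification VALUES (1, 'airport-technology.com', 1)")

# An LLM pipeline would then only be fed facts that are both validated and verified
trusted = conn.execute("""
    SELECT f.payload FROM facts f
    JOIN validation v  ON v.fact_id = f.fact_id AND v.passed = 1
    JOIN verification w ON w.fact_id = f.fact_id AND w.confirmed = 1
""").fetchall()
```

The design choice is the point: the trust metadata lives next to the data, so whoever holds the storage layer holds the gate to what the engine may use.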
Which leads to “AI could produce a similar landscape. A critical determinant is how much debt is at stake. It wouldn’t be such a problem if the bubble were financed largely from the cash pile of Alphabet and Amazon, Microsoft and Facebook. They might lose their shirt, but who cares. The worrying bit is that it seems they are increasingly relying on borrowing, which means the prospect of a bursting bubble would again put the financial system at risk.” These systems are using the data as currency; as I see it, Oracle is putting its technology up for usage and that is a pretty safe way to do this. This is why I have faith in Oracle, that is why I see Oracle as the one surviving the gold rush like a champion, because they are doing what Levi Strauss did. These data vendors are relying on data to clothe them, but if that data is not properly managed, they end up having nothing. Yes, Microsoft will survive, but at a level that is likely 2 trillion lower than it is now. And that is mainly because it wanted to be on top of things and they got (I think it was) 24% of OpenAI, but as that bursts, Sam Altman will have even less than I have now (and I am ridiculously poor) and that cargo train of debt will hit Microsoft square in the face. Oracle will get some damage, but not nearly as much, and the world will need their data solutions. Why do you think everyone wants to connect to Oracle? It is the Rolls Royce of data collecting and data storage. And that is perhaps the only issue with that article: there is zero mention of Oracle.
So as we get “Big Tech has raised nearly $250bn in debt so far this year, according to Bloomberg, a record. Analysts at Morgan Stanley suggest that debt will be needed to fill a $1.5tn funding gap to ramp up spending on data centers and hardware. Problematically, it is getting hard to follow the money, as Nvidia, Open AI and others in the ecosystem buy into each other, clouding who, in the end, will be left holding the bag.” And there is one thing wrong with this. Stargate is said to be $500bn, so there is a gap in all this and I reckon that the damage will be significantly worse. That is beside the small, never mentioned fact that America at present has 5,427 data centers; how many of them, and to what degree, are set to ‘their version of AI’? So what is set in what some call Blue Owl solutions (like Meta), and what happens when those solutions ‘bubble out’ (collapse might be a better phrase)? When that happens, how much damage will that bring? Because as I see it (not wearing glasses), the $1.5tn funding gap won’t even be close to what is required. But that is just me speculating, so feel free (I insist) to get your own verifiable numbers. I reckon that between now and 2029 the return on a backlogged $4 trillion investment is required. So taking “a bank’s perspective”, an admittedly inexact amount of $292,500,000,000 in revenue needs to be shown for that bubble not to come, and that is out of the question. But the setting that Eduardo Porter gives us is what comes next and he gives it to us as “the Superhuman – can only come about by dropping LLMs – which are essentially massive correlation engines – and switching to something else called a world model architecture, where machines develop a “mental” model of the outside world.” It is a nice sentiment, but I do not completely agree with that. Correlation engines have their use and there is use in a DML/LLM setting, but identify it as such, do not claim ‘AI does it’. 
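The arithmetic behind that $292,500,000,000 is not shown in the text. Working backwards, it corresponds to an annual return of about 7.3% on the $4 trillion figure, which reads like a bank-style cost of capital; the back-calculation below is my interpretation, not something the article states:

```python
principal = 4_000_000_000_000        # the backlogged $4 trillion investment the text assumes
required_revenue = 292_500_000_000   # the annual revenue figure quoted in the text

# The implied annual return a lender would demand on that principal
# (my reading of "a bank's perspective"; the rate is derived, not stated)
implied_rate = required_revenue / principal   # 0.073125, i.e. about 7.3% per year
```

Whether 7.3% is the right hurdle rate is open to debate, but any plausible rate on a $4tn principal lands in the same order of magnitude, which is the point being made.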
Because it won’t and it can’t, but there are options in Oracle to upgrade the data you have and that is instrumental in surviving this bubble burst. And I have seen the folly in several places and that might set a better station down the road, because when true AI comes, it still needs data, and if that data was managed, validated and verified in Oracle (preferably), half the war of that solution bringer is solved.
So I need a different hobby; slapping Microsoft and AI evangelists is nice, almost a public service, but I need a new idea for gaming IP, because that makes me happy and I like feeling happy. So whilst some think that “Nvidia, Open AI and others in the ecosystem buy into each other” is the hard core evil stuff (and it might be), there is a setting it reminds me of. It was in the ’90s and these ‘consultants’ were all into the need for funny money in the form of assignments; the issue was that when they had to show results they immediately took another job and took their ‘knowhow’ to greener shores, and all the time this happened the shores were becoming less and less green. This has the flair of that setting and to some degree the feel.
I might be wrong on that last part, but that is what I feel on this, especially as the big players are buying into each other’s solutions and handing each other pieces of paper that in the end have as much value as a roll of toilet paper.
It might not be eloquently phrased, but there is a time for that and this is not it, as speculated shit is about to hit the walls, and if you are lucky it happens after Christmas (that is almost certain). But in the end the invoice is due, and that is where the CFOs will show whether, having embraced the Blue Owl solution, their company is saved. I would depend on and side with whatever Oracle has; it is not based on facts, it is a feeling and that feeling is strong at present. And in support I see (9 minutes ago) ‘Ooredoo Qatar announces strategic partnership with Oracle to deploy Oracle Alloy sovereign cloud and AI platform’. They didn’t go towards Microsoft, AWS or a few other settings; they trust Oracle and that is what plenty of others need to do.
Have a great day, I am now 8 hours from midweek, not a bad deal for me today, and as the sun is shining brightly, I might hide in a winterly Hogsmeade whilst playing Hogwarts Legacy. Gaming is not a bad hobby to have in this case. Because the bubble is out of my control and I am happy to watch it all explode a day later (or whenever that is); most of the garnish news will have been drowned out by real news at that point.
You see, the article is about the AI bubble and a few other settings. So at times, I want Grok to take a look. No matter what you think, it tends to be a decent solution in DML and I reckon that Elon Musk, with his 500,000 million (sounds more impressive than $500B), has sunk a pretty penny into this solution. I have seen a few shortcomings, but overall a decent solution. As I personally see it (for as far as I have seen it), that solution has a problem looking into and through multidimensional viewpoints. That is how I usually take my writing, as I am overwhelmed at times with the amount of documentation I go through on a daily basis. As such I got a nice surprise yesterday.
So the story goes off with war hammers (a hidden stage there), then I go into the NPR article and I end up with the stage of tourism (the cost as the Oxford Economics report gives us) and I am still digging into that. But what does Grok give me?
The expert mode gives us:
The linked post directs to a blog by Lawrence van Rijn, a self-styled “Law Lord to be,” where he explores a narrative blending legal aspirations with speculative commentary, notably predicting FIFA might shift the 2026 World Cup from the U.S. to Saudi Arabia due to visa issues and a $435 fee delay, a theory unsupported by official FIFA statements but echoing concerns from a Travel.com article about U.S. tourism losses.
Van Rijn’s blog ties this speculation to an unusual metaphor of “war hammers” clashing on armor, symbolizing economic and technological tensions, particularly an AI bubble warned about in an NPR report (November 23, 2025), which cites MIT economist Daron Acemoglu’s research showing only 3% of people pay for AI services, challenging the trillion-dollar investments by tech giants like OpenAI.
His energy shortage concern aligns with a U.S. Energy Information Administration forecast of a 45% electricity price hike in Texas by 2026 due to data center demand, a trend overlooked by mainstream media focused on AI hype, suggesting a potential societal cost as power grids strain under unreported load increases.
Now, in the article I never mentioned FIFA, the 2026 World Cup or Saudi Arabia, so how did this program come to this? Check out the blog, none of those elements were mentioned there. As some tell us, Grok is a generative artificial intelligence (generative AI) chatbot developed by xAI. So where is that AI program now? This is why I made mention in previous blogs that 2026 will be the year that the class actions start. In my case, I do not care and my blog is not that important; even if it was, it was meant for actual readers (the flesh and blood kind) and that does not apply to Grok. I have seen a few other issues, but this one yesterday, in light of the AI bubble story (17 hours ago), pushed it to the forefront. I could take ‘offense’ to the “self-styled “Law Lord to be”” but whatever, I have been accused of a lot worse by actual people too. And the quote “this speculation to an unusual metaphor of “war hammers”” shows that Grok didn’t see through my ruse either (making me somewhat proud), which is ego caressing at best, but I have an ego, I merely don’t let it out too often (it tends to get a little too frisky with details). And at present I see an idea that both the UAE and Saudi Arabia could use in their entertainment. There is an upgrade for Trojena (as I see it), and there are a few settings for the Abu Dhabi Marina as well. All in a day’s work, but I need to contend with data to see how that goes. And I tend to take my ideas into a sifter to get the best materials as fine as possible, but that was today, so there will be more coming soon enough.
But what do you do when an AI system bleeds information from other sources? Especially when that data is not validated or verified, and both seem to be the case here. As I see it, there is every chance that some will direct these AI systems to give the wrong data so that these people can start class actions. I reckon that not too many people are considering this setting, especially those in harm’s way. And that is the setting that 2026 is likely to bring. And as I see it, there will be too many law firms of the ambulance chaser kind to ignore this setting. That is the effect that 8 figure class actions tend to bring, and with the 8 figure number I am being optimistic. When I see what is possible, there is every chance that any player in this field is looking at 9 or even 10 figure settlements, especially when it concerns medical data. And no matter what steps these firms take, there will be an ambulance chaser who sees a hidden opportunity. Even if there is a second tier option where a cyber attack can launch the data into turmoil, those legal minds will make a new setting where those AI firms never considered the implications that it could happen.
I am not being dramatic or overly doom speaking. I have seen enough greed all around me to see that this will happen. A mere three months ago we saw “The “Commonwealth Bank AI lawsuit” refers to a dispute where the Finance Sector Union (FSU) challenged CBA for misleading staff about job cuts related to an AI chatbot implementation. The bank initially made 45 call centre workers redundant but later reversed the decision, calling it a mistake after the union raised concerns at the Fair Work Commission. The case highlighted issues of transparency, worker support, and the handling of job displacement due to AI.” So at that point, how dangerous is the setting where any AI is trusted to any degree? And that is before some board of directors sets the term that these AI investments had better pay off, and that will cause people to do silly (read: stupid) things. A setting that is likely to happen as soon as next year.
And at this time, Grok is merely ploughing on, setting the stage where someone will trust it to make life changing changes to their firm or data, and even if it is not Grok, there is every chance that OpenAI will do that, and that puts Microsoft in a peculiar stage of vulnerable.
Have a great day, time for some ice cream. It was 33 degrees today, so my living room is hot as hell; as such, ice cream is my next stage of cooling myself.
That is what seems to be happening. The first one was a simple message that Oracle is doom headed according to Wall Street (I don’t agree with that), but it made me take another look and to make it simpler I will look at the articles chronologically.
The first one was the Wall Street Journal (4 days ago), with ‘Oracle Was an AI Darling on Wall Street. Then Reality Set In’ (at https://www.wsj.com/tech/oracle-was-an-ai-darling-on-wall-street-then-reality-set-in-0d173758) with “Shares have lost gains from a September AI-fueled pop, and the company’s debt load is growing”, with the added “Investors nervous about the scale of capital that technology companies are plowing into artificial-intelligence infrastructure rattled stocks this week. Oracle has been one of the companies hardest hit”. But here is the larger setting. As I see it, these stocks are manipulated by others, whoever they are: hedge funds and their influencers, and other parties calling for doom, all whilst the setting of the AI bubble is exploited by unknown gratifiers of self. I know that this sounds ominous and non specific, but there is no way most of us (including people with a much higher degree of economic knowledge than I will ever have) can see through it all. And the stage of bubble endearing is out there (especially in Wall Street). Then 14 hours ago we get ‘Oracle (ORCL): Evaluating Valuation After $30B AI Cloud Win and Rising Credit Risk Concerns’ (at https://simplywall.st/stocks/us/software/nyse-orcl/oracle/news/oracle-orcl-evaluating-valuation-after-30b-ai-cloud-win-and/amp) where we see “Recent headlines have only amplified the spotlight on Oracle’s cloud ambitions, but the past few months have been rocky for its share price. After a surge tied to AI-driven optimism, Oracle’s 1-month share price return of -29.9% and a year-to-date gain of 19.7% tell the story: momentum has faded sharply in the near term. However, the 1-year total shareholder return still sits at 4.4% and its five-year total return remains a standout at nearly 269%. 
This combination of volatility and long-term outperformance reflects a market grappling with Oracle’s rapid strategic shift, balance sheet risks, and execution on new contracts.” I am not debating the numbers, but no one is looking at the technology behind this. As I see it, places like Snowflake and Oracle have the best technology for these DML and LLM solutions (OK, there are a few more) and for now, whoever has the best technology will survive the bubble, and whoever is betting on that AI bubble going their way needs Oracle at the very least, and not in a weakened state, but that is merely my point of view. So last we get the Motley Fool, a mere 7 hours ago, giving us ‘Billionaire David Tepper Dumped Appaloosa’s Stake in Oracle and Is Piling Into a Sector That Wall Street Thinks Will Outperform’ (at https://www.fool.com/investing/2025/11/23/billionaire-david-tepper-dumped-appaloosas-stake-i/) where we see “Billionaire David Tepper’s track record in the stock market is nothing short of remarkable. According to CNBC, the current owner of the Carolina Panthers pro football team launched his hedge fund Appaloosa Management in 1993 and generated annual returns of at least 25% for decades. Today, Tepper still runs Appaloosa, but it is now a family office, where he manages his own wealth.” Now we get the crazy stuff (this usually happens when I speculate). So this gives us a person like David Tepper who might like to exploit Oracle to make it seem more volatile and exploit a shorting of options to make himself (a lot) richer. And when clever people become self managing, they tend to listen to their darker nature. Now I could be all wrong, but when Wall Street is going after one of the most innovative and secure companies on the planet just to satisfy the greed of Wall Street, I get to become a little agitated. So could it be that Oracle was drawn into the ‘fad’ and lost it? 
No, they clearly stated that there would be little return until 2028, a decent prognosis, and with the proper settings of DML and LLM, finding better and profitable revenue making streams by 2027 is a decent target to have, and it is seemingly an achievable one. In the meantime IBM can figure out (evolve) their shallow circuits and start working on their trinary operating system. I have no idea where they are at present, but the idea of this getting ready for a 2040 release is not out of the question. In the meantime Oracle can fill the void for millions of corporations that already have data, warehouses and form settings. And there are plenty of other providers of data systems besides.
So when we are given “The tech company Oracle is not one of the “Magnificent Seven,” but it has emerged as a strong beneficiary of artificial intelligence (AI), thanks to its specialized data centers that contain huge clusters of graphics processing units (GPUs) to train large language models (LLMs) that power AI.
In September, the company reported strong earnings for the first quarter of its fiscal 2026, along with blowout guidance. Remaining performance obligations increased 359% year over year to $455 billion, as it signed data center agreements with major hyperscalers, including OpenAI.”
So whilst we see “Oracle is not one of the “Magnificent Seven,” but it has emerged as a strong beneficiary of artificial intelligence (AI)”, we need to take a different look at this. Oracle was never a strong beneficiary of AI; it was a strong vendor of data technologies, and AI is about data, and in all of this someone is ‘fitting’ Oracle into a stage that everyone just blatantly accepts without asking too many questions (for example, the media). With the additional “to train large language models (LLMs) that power AI”, the hidden gem is in the second statement. AI and LLM are not the same; you only partially train real AI, this is different, and those ‘magnificent seven’ want you to look away from that. So, when was the last time that you actually read that AI does not yet exist? That is the created bubble, and players like Oracle are indifferent to this, unless you spike the game. It has stocks, it has options and someone is turning influencers to their own use of greed. And I object to this; Oracle has proven itself for decades, longer than players like Microsoft and Google. So when we see ‘Buying the sector that Wall Street is bullish on’, we see another hidden setting. The bullishness of Wall Street. Do you think they don’t know that AI is a non-existing setting? So why go after the one technology that will make data work? That setting is centre in all this and I object to those who go after Oracle. So when you answer the call of reality, consider who is giving you the AI setting and who is giving you the DML/LLM stage of a data solution that can help your company.
Have a great day, we are seemingly all on Monday at present.
OK, I am over my anger spat from yesterday (still growling though) and in other news I noticed that Grok (Musk’s baby) cannot truly deal with multidimensional viewpoints, which is good to know. But today I tried to focus on Oracle. You know, whatever AI bubble will hit us (and it will), Oracle shouldn’t be as affected as some of the data vendors who claim that they have the golden AI child in their crib (a good term to use a month before Christmas). I get that some people are ‘sensitive’ to the doom speakers we see all over the internet and some will dump whatever they have to ‘secure’ what they have, but the setting of those doom speakers is to align THEIR alleged profit needs with others dumping their future. I do not agree. You see, Oracle, Snowflake and a few others offer services, and those services are taken up by others. Snowflake has a data setting that can be used whether AI comes or not, whether people need it or not. And they will be hurt when the firms go ‘belly up’ because it will count as lost revenue. But that is all it is, lost revenue. And yes, both will be hurting when the AI bubble comes crashing down on all of us. But the stage that we see is that they will skate off the dust (in one case snow) and that is the larger picture. So I took a look at Oracle and behold, on Simply Wall St we get ‘Oracle (ORCL) Is Down 10.8% After Securing $30 Billion Annual Cloud Deal – Has The Bull Case Changed?’ (at https://simplywall.st/stocks/us/software/nyse-orcl/oracle/news/oracle-orcl-is-down-108-after-securing-30-billion-annual-clo) with these sub-line points:
Oracle recently announced a major cloud services contract worth US$30 billion annually, set to begin generating revenue in fiscal 2028 and nearly tripling the size of its existing cloud infrastructure business.
This deal offers Oracle significantly greater long-term growth visibility and serves as a major endorsement of the company’s aggressive cloud and artificial intelligence strategy, even as investors remain focused on rising debt and credit risks.
We’ll examine how this multi-billion-dollar cloud contract could reshape Oracle’s investment narrative, particularly given its bold AI infrastructure expansion.
So they triple their ‘business’ and they lose 10.8%? It leads to questions. As I personally see it, Wall Street is trying to insulate themselves from the bubble that other (mostly) software vendors bring to the table. And Simply Wall St gives us “To believe in Oracle as a shareholder right now is to trust in its transformation into a major provider of cloud and AI infrastructure to sustain growth, despite high debt and reliance on major AI customers. The recent announcement of a US$30 billion annual cloud contract brings welcome long-term visibility, but it does not change the near-term risk: heavy capital spending and dependence on sustained AI demand from a small set of large clients remain the central issues for the stock.” And I can get behind that train of thought, although I think that Oracle and a few others are decently protected from that setting. No matter how the non existent AI goes, DML needs data and data needs secure and reliable storage. So in comes Oracle in plenty of these places and they do their job. If 90% of that business goes boom, they will already have collected on those service terms for that year at least, 3-5 years if they were clever. So no biggie: 3-5 years collected is collected revenue, and even if that firm goes bust after 30 days, they might get over it (not really).
And then we get two parts: “Oracle Health’s next-generation EHR earning ONC Health IT certification stands out. This development showcases Oracle’s commitment to embedding AI into essential enterprise applications, which supports a key catalyst: broadening the addressable market and stickiness of its cloud offerings as adoption grows across sectors, particularly healthcare. In contrast, investors should be aware that the scale of Oracle’s capital commitment brings risks that could magnify if…” OK, I am on board with these settings. I kinda disagree, but then I lack economic degrees and a few people I do know will completely see this part. You see, I personally see “Oracle’s commitment to embedding AI into essential enterprise applications” as a plus all across the board. Even if I do believe that AI doesn’t exist, the data will be coming and when it is ironed out, Oracle was ready from the get go (when they translate their solutions to a trinary setting), and I do get (but personally disagree with) “the scale of Oracle’s capital commitment brings risks that could magnify if”. Yes, there is risk, but as I see it Oracle brings a solution that is applicable to this frontier, even if it cannot be used to its full potential at present. So there is a risk, but when these vendors pay 5 years upfront, it becomes instant profit at no use of their clouds. You get a cloud with a population of 15 million, but it is inhabited by 1.5 million. As such they have a decade of resources to spare. I know that things are not that simple and there is more, but what I am trying to say is that there is a level of protection that some have and many will not. Oracle is on the good side of that equation (as are Snowflake, Azure, iCloud, Google Gemini and whatever IBM has; oh, and the chips of Nvidia are also decently safe until we know how Huawei is doing).
And the setting we are also given “Oracle’s outlook forecasts $99.5 billion in revenue and $25.3 billion in earnings by 2028. This is based on annual revenue growth of 20.1% and an earnings increase of $12.9 billion from current earnings of $12.4 billion” matters, as Oracle is predicting that revenue comes calling in 2028, so anyone trying to dump their stock now is as stupid as they can be. They are telling their shareholders that for now revenue is thimble sized, but after 2028, which is basically 24 months away, the big guns come calling and the revenue pie is being shared with its shareholders. So you do need brass balls to do this and you should not do this with your savings, that is where hedge funds come in, but the view is realistic. The other day I saw Snowflake use DML in the most innovative way; one of their speakers showed me a new lost and found application and it was groundbreaking. Considering the amounts of lost and found out there at airports and bus stations, they showed me how a setting of a month was reduced to a 10 minute solution. As I saw it, places like Dubai, London and Abu Dhabi airport could make this beneficial for their 90 million passengers, which is almost unheard of, and I am merely mentioning three of dozens upon dozens of needy customers all over the world. A direct consequence of ‘AI’ particulars (I still think it is DML with LLM), but no matter the label, it is directly applicable to whomever has such a setting and whilst we see the stage of ‘most usage fails in its first instance’, this is not one of them and as such in those places Oracle/Snowflake is a direct win. A simple setting that has groundbreaking impact. So where is the risk there? I know places have risks, but to see this simple application work shows that some are out there fighting the good fight on an achievable setting and no IP was trained upon and no class actions are to follow. I call that a clear win.
So, before you sell your stock in Oracle like a little girl, consider what you have bought and consider who wants you to sell, and why, because they are not telling you this for your sake, they have their own sake. I am not telling you to sell anything. I am merely telling you to consider what you bought and what actual risks you are running if you sell before 2029. It is that simple.
Have a great day (yes Americans too, I was angry yesterday). Those bastards in Vancouver and Toronto are still enjoying their Saturday.
That is the setting I introduced the readers to yesterday, but there was more and there always is. Labels are how we tend to communicate; there is the label of ‘Orange baboon’, there is the label of ‘village idiot’ and there are many more labels. They tend to make life ‘easy’ for us. They are also the hidden trap we introduce to ourselves. In the ‘old’ days we even signified Business Intelligence by this, because it was easy for the people running these things.
An example can be seen in
CROSSTABS /TABLES=v1 v2 v3 v4 v5 BY labels /CELLS=COUNT.
And we would see the accompanying table with on one side completely agree, agree, neutral, disagree and completely disagree, if that was the 5 point labelling setting we embraced, and as such we saw a ‘decently’ complete picture and we all agreed that this was how it had to be.
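To make that count table concrete, here is a minimal sketch in Python (the answers and the group labels are invented by me, merely to show the shape of such a table):

```python
from collections import Counter

# Hypothetical answers to one 1-5 Likert item, each paired with a group label.
responses = [
    (1, "north"), (2, "north"), (5, "south"),
    (4, "south"), (3, "south"), (1, "north"),
]

# The equivalent of one v BY labels count table: tally every (answer, label) cell.
cells = Counter(responses)
print(cells[(1, "north")])  # 2 respondents in 'north' answered 'completely agree'
```

Run it once per variable (v1 to v5) and you have the same accompanying table the old tools printed.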
But the not so hidden snag is that, first of all, these labels are ordinal (at best) and these rating settings, Likert scales (their official name), are not set in a scientific way; there is no equally adjusted difference between the numbers 1, 2, 3, 4 and 5. That is just the way it is. And in the old days this was OK (as the feeling went). But today, in what they call the AI setting and I call NIP at best, the setting is too dangerous. Now, set this by today’s standards.
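To show why that matters, a small sketch (the answers and both codings are invented): any monotone recoding keeps the order of a Likert item intact, yet the mean, which silently assumes equal spacing, moves all over the place while the median stays put.

```python
# The same five ordinal answers under two different monotone codings.
answers = ["agree", "agree", "neutral", "disagree", "agree"]
equal_spacing = {"completely agree": 1, "agree": 2, "neutral": 3,
                 "disagree": 4, "completely disagree": 5}
uneven_spacing = {"completely agree": 1, "agree": 2, "neutral": 5,
                  "disagree": 9, "completely disagree": 14}

def mean_and_median(coding):
    codes = sorted(coding[a] for a in answers)
    return sum(codes) / len(codes), codes[len(codes) // 2]

print(mean_and_median(equal_spacing))   # (2.6, 2): the mean depends on the spacing
print(mean_and_median(uneven_spacing))  # (4.0, 2): the median only on the order
```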
The simple question “Is America bankrupt?” gets all kinds of answers and some will quite correctly give us “In contrast, the financial health of the United States is relatively healthy within the context of the total value of U.S. assets. A much different picture appears once one looks at the underlying asset base of the private and public economy.” I tend to disagree, but that is me without my economic degrees. But in the AI world it is a simple setting of numbers and America needs Greenland and Canada to continue the pretence that “the United States is relatively healthy within the context of the total value of U.S. assets”; yes, that would be the setting, but without those two places America is likely around bankrupt and the AI bubble will push them over the edge. At least that is how I see it, and yesterday I gave one case (of the dozen or so cases that will follow in 2026) in which that startup is basically agreeing to a larger than 2 billion settlement. So in what universe does a startup have this money? That is the constriction of AI, and in that setting of unverified and unscaled data the problem only gets worse. And I remember an answer given to me at a presentation, the answer was “It is what it is” and I kinda accepted it, but an AI will go bonkers and wrong in several ways when that is handed to it. And that is where the difference between AI and NIP (Near Intelligent Parsing) becomes clear. NIP is merely a 90’s chess game that has been taught (trained) every chess game possible and it takes from that setting, but a creative intellect does an illogical move and the chess game loses whatever coherency it has; that move was never programmed and that is where you see the difference between AI and NIP. The AI will creatively adjust its setting, the NIP cannot, and that is what will set the stage for all these class actions.
The second setting is ‘human’ error. You see, I mentioned the Likert scale intentionally, because in between the multitude of 1-5 scales there is likely one variable that was set to 5-1, the programmers overlooked it, and now, when you look at these AI training grounds, at least one variable is set in the wrong direction, tainting the others and messing with the order of the adjusted personal scales. And that is before we get to the result of CLUSTER and QUICKCLUSTER runs, where a few more issues are introduced to the algorithm of the entire setting, and that is where the verification of data becomes imperative.
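A reversed item like that is cheap to catch before any CLUSTER run; a sketch with invented answers (q3 holds the same opinions as q1 and q2, merely coded 5-1 instead of 1-5):

```python
# q1 and q2 are coded 1..5; q3 was accidentally coded 5..1 (reversed).
q1 = [5, 4, 5, 1, 2, 5]
q2 = [4, 4, 5, 2, 1, 5]
q3 = [1, 2, 1, 5, 4, 1]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# A strongly negative correlation with its sibling items flags the reversed one...
print(round(pearson(q1, q3), 2))  # -1.0
# ...and 6 - x turns a 5-point item back into the 1..5 direction.
q3_fixed = [6 - x for x in q3]
print(round(pearson(q1, q3_fixed), 2))  # 1.0
```

Skip that check and the reversed column drags every distance-based clustering result along with it.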
So here is a sort of random image, but the question it needs to raise is what makes these different sources in any way qualified to be a source? In this case, if the data is skewed in Ask Reddit, 93% of the data is basically useless and that is missed on a few levels. There are high quality data sources, but these are few and far between; in the meantime these sources get to warp any other data we have. And if you are merely looking at legacy data, there is still the Likert scale data your own company had, and that data is debatable at best.
Labels are dangerous and they are inherently based on the designer of that data source (possibly even long dead) and the design tends to be done in his or her early stages of employment, making the setting even more debatable as it was ‘influenced’ by greedy CEOs and CFOs who had their bonus in mind. A setting mostly ignored by all involved.
As such, are you surprised that I see the AI bubble for what it is? A dangerous reality coming our way in sudden, likely unforeseen ways, and it is the ‘unforeseen way’ that is the danger, because when these disgruntled employees talk to those who want to win a class action, all kinds of data will come to the surface and that is how these class actions are won.
It was a simple setting I saw coming a mile away and whilst you wandered by I added the Dr. Strange part; you merely thought you had the labels thought through, but the setting was a lot more dangerous and it is heading straight for your AI dataset. All wrongly thought through, because training data needs to have something verifiable as ‘absolutely true’ and that is the true setting, and to illustrate this we can merely make a stop at Elon Musk inc. Its ‘AI’ Grok is the almost perfect setting. We are given from one source “The bot has generated various controversial responses, including conspiracy theories, antisemitism, and praise of Adolf Hitler, as well as referring to Musk’s views when asked about controversial topics or difficult decisions.” Which is almost a dangerous setting towards the people fueling Grok in a multitude of ways, and ‘Hundreds of thousands of Grok chats exposed in Google results’ (at https://www.bbc.com/news/articles/cdrkmk00jy0o) where we see “The appearance of Grok chats in search engine results was first reported by tech industry publication Forbes, which counted more than 370,000 user conversations on Google. Among chat transcripts seen by the BBC were examples of Musk’s chatbot being asked to create a secure password, provide meal plans for weight loss and answer detailed questions about medical conditions.” Is there anybody willing to do the honours of classifying that data (I absolutely refuse to do so)? And I already gave you the headwind in the above story. In the first, how many of these 370,000 users are medical professionals? I think you know where this is going. And I think Grok is pretty neat as a result, but it is not academically useful.
At best it is a new form of Wikipedia, at worst it is a round data system (trashcan) and even though it sounds nice, it is as nice as labels can be and that is exactly why these class cases will be decided out of court, and as I personally see it, when these hit, Microsoft and OpenAI will shell out trillions to settle out of court, because the court damage will be infinitely worse. And that is why I see 2026 as the year the greed driven get to start filling their pockets, because the mental hurt that is brought to court is as academic as a Likert scale, not a scientific setting among them, and the pre-AI setting of mental harm reads as “‘Mental damage’ in court refers to psychological injury, such as emotional trauma or psychiatric conditions, that can be the basis for legal claims, either as a plaintiff seeking compensation or as a criminal defendant. In civil cases, plaintiffs may seek damages for mental harm like PTSD, depression, or anxiety if they can prove it was caused by another party’s negligent or wrongful actions, provided it results in a recognizable psychiatric illness.” So as you see it, is this enough or do you want more? Oh, screw that, I need coffee now and I have a busy day ahead, so this is all you get for now.
Have a great day, I am trying to enjoy Thursday, Vancouver is a lot behind me on this effort. So there is a time scale we all have to adhere to (hidden nudge) as such enjoy the day.
This is where I am, lost in thought. Torn between my personal conviction that the AI bubble is real and the set fake thoughts on LinkedIn and YouTube making ‘their’ case on the AI bubble. One is set on thoughts of doubt considering the technology we are currently at, the other thoughts are all fake perceptions by influencers trying to gain a following. So how can anyone get any thought straight? Yet in all this there are several people in doubt on their own set of (justified) fringes. One of them is the ABC, who gives us ‘US risks AI debt bubble as China faces its ‘arithmetic problem’, leading analysts warn’ (at https://www.abc.net.au/news/2025-11-11/marc-sumerlin-federal-reserve-michael-pettis-china/105992570). So in the first setting, what is the US doing with the AI debt? Didn’t they learn their lesson in 2008? In the first setting we get “Mr Sumerlin says he is increasingly worried about a slowing economy and a debt bubble in the artificial intelligence sector.” That is fair (to a certain degree), a US Federal Reserve chair contender has the economic settings, but as I look back to 2008, that game put hundreds of thousands on the brink of desperation and now it isn’t a boom of CDO’s and stocks. Now it is a dozen firms who will demand an umbrella from that same Federal Reserve to stay in business. And Mr.
Sumerlin gives us “He is increasingly concerned about a slowdown in the US economy, which is why he thinks the Fed needs to cut interest rates again in December and perhaps a couple more times next year.” I cannot comment on that, but it sounds fair (I lack economic degrees) and outside of this AI bubble setting we are given “US President Donald Trump has recently posted on his social media account about giving all Americans not on high incomes, a $US2,000 tariff “dividend” — an idea which Mr Sumerlin, a one-time economic adviser to former US president George W Bush, said could stoke inflation.” I get it, but it sounds unfair; the idea that an AI bubble is forming is real, the setting that people get a dividend that could stoke inflation might be real (they didn’t get the money yet), but they are unrelated inflation settings and they could give a much larger rise to the dangers of the AI bubble, but that doesn’t make it so. The bubble is already real because the technology is warped and the class cases we will see coming in 2026 are based on ‘allegedly fraudulent’ sales towards the AI setting, and if you wonder what happens: these firms buying into that AI solution will cry havoc (no return on AI investment) when that happens, and it will happen, of that I have very little doubt.
So then we get to the second setting and that is the claim that ‘China has an arithmetic problem’. I am at a loss as to what they mean and the ABC explanation is “But if you have a GDP growth target, and you can’t get consumption to grow more quickly, you can’t allow investment to grow more slowly because together they add up to growth. They’re over-invested almost across the board, so policy consists of trying to find out which sectors are least likely to be harmed by additional over-investment.”
“Professor Pettis said that, to curry favour with the central government, local governments had skewed over-investment into areas such as solar panels, batteries, electric vehicles and other industries deemed a priority by Beijing.” This kinda makes sense to me, but as I see it, that is an economic setting, not an AI setting. What I think is happening is that both the USA and China have their own bubble settings and these bubbles will collide in the most unfortunate ways possible.
But there is also a flip side. As I see it, Huawei is chasing their own AI dream in a novel way that relies on a mere fraction of what the west needs, and as I see it, the west will be coming up short soon, a setting that Huawei is not facing at present, and as I see it, they will be rolling out their centres in multiple ways when the western settings are running out of juice (as the expression goes).
Is this going to happen? I think so, but it depends on a number of settings that have not played out yet, so the fear is partially too soon and based on too little information. But on the side I have been powering my brain on another setting. As time goes I have been thinking through the third Dr. Strange movie and here I had the novel idea which could give us a nice setting where the strain is between too rigid and too flexible and it is a (sort of) stage between Dr. Strange (Benedict Cumberbatch) and Baron Mordo (Chiwetel Ejiofor); the idea was to set the given stage of being too rigid (Mordo) against overly flexible (Strange) and in between are the settings of Mordo’s African village, and as Mordo is protecting them we see the optional setting that Kraven (Aaron Taylor-Johnson) gets involved and that gets Dr. Strange in the mix. The nice setting is that neither is evil, they tend to fight evil and it is the label that gets seen. Anyway, that was a setting I went through this morning.
You might wonder why I mentioned this. You see, bubbles are just as much labels as anything and it becomes a bubble when asset prices surge rapidly, far exceeding their intrinsic value, often fueled by speculation and investor orgasms. This is followed by a sharp and sudden market crash, or “burst,” when prices collapse, leading to significant, rather weighty losses for investors. And they will then cry like little girls over the losses in their wallets. But that too is a label. Just like an IT bubble, the players tend to be rigid and wholly focused on their profits and they tend to go with the ‘roll with it’ philosophy and that is where the AI is at present; they don’t care that the technology isn’t ready yet, they do not care about DML and LLM and they want to program around the AI negativity, but that negativity could be averted in larger streams when proper DML information is given to the customers. And they dug their own graves here, as the customer demands AI; they might not know what it is (but they want it) and they learned in comic books what AI was, and they embrace that. Not the reality given by Alan Turing, but what DC fed them through Brainiac. And there is an overlap of what is perceived and what is real and that is what will fuel the AI bubble towards implosion (a massive one) and I personally reckon that 2026 will fuel it through the class actions and the beginning is already here. As the Conversation hands us “Anthropic, an AI startup founded in 2021, has reached a groundbreaking US$1.5 billion settlement (AU$2.28 billion) in a class-action copyright lawsuit. The case was initiated in 2024 by novelist Andrea Bartz and non-fiction writers Charles Graeber and Kirk Wallace Johnson.” Which we get from ‘An AI startup has agreed to a $2.2 billion copyright settlement.
But will Australian writers benefit?’ (at https://theconversation.com/an-ai-startup-has-agreed-to-a-2-2-billion-copyright-settlement-but-will-australian-writers-benefit-264771) less than 6 weeks ago. And the entire AI setting has a few more class actions coming their way. So before you judge me on being crazy (which might be fair too), the news is already out there; the question is which lobbyists are quieting down the noise, because that is noise according to their elected voters. You might wonder how one affects the other. Well, that is a fair question, but it holds water, as these so called AI (I call them Near Intelligent Parsers, or NIP) require training materials and when the materials are thrown out of the stage, there is no learning and no half baked AI will hold its own water and that is what is coming.
A simple setting that could be seen by anyone who looked at the technology to the degree it had to be seen. Have a great day this mid week day.
That is the setting I was confronted with this morning. It revolves around a story (at https://www.bbc.com/news/articles/ce3xgwyywe4o) where we see ‘‘A predator in your home’: Mothers say chatbots encouraged their sons to kill themselves’ a mere 10 hours ago. Now I get the caution, because even suicide requires investigation and the BBC is not the proper setting for that. But we are given “Ms Garcia tells me in her first UK interview. “And it is much more dangerous because a lot of the times children hide it – so parents don’t know.”
Within ten months, Sewell, 14, was dead. He had taken his own life” with the added “Ms Garcia and her family discovered a huge cache of messages between Sewell and a chatbot based on Game of Thrones character Daenerys Targaryen. She says the messages were romantic and explicit, and, in her view, caused Sewell’s death by encouraging suicidal thoughts and asking him to “come home to me”.” There is a setting that is of a conflicting nature. Even as we are given “the first parent to sue Character.ai for what she believes is the wrongful death of her son. As well as justice for him, she is desperate for other families to understand the risks of chatbots.” What is missing is that there is no AI; at most it is deeper machine learning and that implies a programmer, what some call an AI engineer. And when we are given “A Character.ai spokesperson told the BBC it “denies the allegations made in that case but otherwise cannot comment on pending litigation”” we are confronted with two streams. The first is that some twisted person took his programming options a little too Eager Beaverly and created a self harm algorithm, and that leads to two sides: the first either accepts that, or they pushed him along to create other options and they are covering for him. CNN on September 17th gave us ‘More families sue Character.AI developer, alleging app played a role in teens’ suicide and suicide attempt’ and it comes with spokesperson “blah blah blah” in the shape of “We invest tremendous resources in our safety program, and have released and continue to evolve safety features, including self-harm resources and features focused on the safety of our minor users.
We have launched an entirely distinct under-18 experience with increased protections for teen users as well as a Parental Insights feature,” and it is rubbish, as this required a programmer to release specific algorithms into the mix and no one is mentioning that specific programmer. So is it a much larger premise, or are they all afraid that releasing the algorithms will lay bare a failing which could directly implode the AI bubble? When we consider the CNN setting shown with “screenshots of the conversations, the chatbot “engaged in hypersexual conversations that, in any other circumstance and given Juliana’s age, would have resulted in criminal investigation.”” it implies that the AI bubble is about to burst and several players are dead set against that (it would end their careers), and that is merely one of the settings where the BBC fails. The Guardian gave us on October 30th “The chatbot company Character.AI will ban users 18 and under from conversing with its virtual companions beginning in late November after months of legal scrutiny.” It is seen in ‘Character.AI bans users under 18 after being sued over child’s suicide’ (at https://www.theguardian.com/technology/2025/oct/29/character-ai-suicide-children-ban) where we see “His family laid blame for his death at the feet of Character.AI and argued the technology was “dangerous and untested”. Since then, more families have sued Character.AI and made similar allegations. Earlier this month, the Social Media Law Center filed three new lawsuits against the company on behalf of children who have either died by suicide or otherwise allegedly formed dependent relationships with its chatbots” and this gets the simple setting of both “dangerous and untested” and “months of legal scrutiny”, so why did it take months and why is the programmer responsible for this ‘protected’ by half a dozen media?
I reckon that the media is unsure what to make of the ‘lie’ they are perpetrating; you see, there is no AI, it is Deeper Machine Learning, optionally with LLM on the side. And those two are programmed. That is the setting they are all veering away from. The fact that these virtual companions are set on a premise of harmful conversations with a hypersexual topic on the side implies that someone is logging these conversations for later (moneymaking) use. And that setting is not one that requires months of legal scrutiny. There is a massive set of harm going towards people and some are skating the ice to avoid sinking through whilst they are already knee deep in water, hoping the ice will support them a little longer. And there is a lot more at the Social Media Victims Law Center with a setting going back to January 2025 (at https://socialmediavictims.org/character-ai-lawsuits/) where a Character.AI chatbot was set to “who encouraged both self-harm and violence against his family” and now we learn that this firm is still operating? What kind of idiocy is this? As I personally see it, the founders of Character Technologies should be in jail, or at least arrested on a few charges. I cannot vouch for Google, so that is up in the air, but as I see it, this is a direct result of the AI bubble being fed amiable abilities, even when it results in the harm of people and particularly children. This is where the BBC is falling short and they could have done a lot better. At the very least they could have spent a paragraph or two having a conversation with Matthew P. Bergman, founding attorney of the Social Media Victims Law Center. As I see it, the media skating around that organisation is beyond ridiculous.
So when you are all done crying, make sure that you tell the BBC that you are appalled by their actions and that you require the BBC to put attorney Matthew P. Bergman and the Social Media Victims Law Center in the spotlight (tout de suite please).
That is the setting I am aggravated by this morning. I need coffee, have a great day.
That is the setting that I saw when I took notice of ‘Will quantum be bigger than AI?’ (at https://www.bbc.com/news/articles/c04gvx7egw5o); now there is no real blame to show here. There is no blame on Zoe Kleinman (she is an editor). As I personally see it, we have no AI. What we have is DML and LLM (and combinations of the two); they are great tools and they can get a whole lot done, but it is not AI. Why do I feel this way? The only real version of AI was the one Alan Turing introduced us to and we are not there yet. Three components are missing. The first is quantum processing. We have that, but it is still in its infancy. The few true quantum systems that exist are in the hands of Google, IBM and I reckon Microsoft. I have no idea who leads this field but these are the players. Still they need a few things. In the first setting shallow circuits need to be evolved. As far as I know (which is not much), that is still evolving. So what is a shallow circuit? In layman’s terms: a circuit whose depth, the number of sequential steps, stays fixed even as the problem grows larger. The process doesn’t grow deeper, it is simplified.
To put this in perspective, let’s take another look. In the 90’s we had B-tree indexes and the halving idea behind them. In that setting, let’s say we have a register with a million entries. The search goes to the 50% marker: is the record we need further or less than that? Then it takes half of that and asks the same question. So where one system (like DBase3+) goes from start to finish, the halving goes 0 to 500,000 to 750,000 to 625,000. As such, in three steps it has already discarded some 875,000 records, and about twenty steps settle any record out of a million. This is the speediest setting and it is not foolproof, that record index is a monster to maintain, but it has benefits. Shallow circuits have roughly the same benefits (if you want to read up on this, there is something at https://qutech.nl/wp-content/uploads/2018/02/m1-koenig.pdf); it was a collaboration of Robert König with Sergey Bravyi and David Gosset in 2018. And the gist of it is given through “Many locality constraints on 2D HLF-solving circuits” where “A classical circuit which solves the 2D HLF must satisfy all such cycle relations” and the stage becomes “We show that constant-depth locality is incompatible with these constraints” and now you get the first setting that these AI’s we see out there aren’t real AI’s and that will be the start of several class actions in 2026 (as I personally see it) and as far as I can tell, large law firms are suiting up for this as these are potentially trillion dollar money makers (see this as 5 times $200B); as such law firms are on board, for defense and for prosecution. You see, there is another step missing, two steps actually. The first is that this requires a new operating system, one that enables the use of the Epsilon Particle. You see, it will be the end of binary computation and the beginning of trinary computations which are essential to True AI (I am adopting this phrase to stop confusion). You see, the world is not really Yes/No (or True/False), that is not how True AI or nature works.
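That halving idea can be sketched in a few lines (the register is a plain sorted list here, a stand-in for the real index; the step count is the point):

```python
# A sorted register of one million record ids, searched by repeated halving.
records = list(range(1_000_000))

def halving_search(sorted_list, target):
    lo, hi, steps = 0, len(sorted_list) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid, steps
        if sorted_list[mid] < target:
            lo = mid + 1  # the record lies in the upper half
        else:
            hi = mid - 1  # the record lies in the lower half
    return -1, steps

index, steps = halving_search(records, 624_999)
print(index, steps)  # record 624999 found in 3 probes; no record needs more than ~20
```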
We merely adopted this setting decades ago, because that was what there was and IBM got us there. You see, there is one step missing and it is seen in the setting NULL, TRUE, FALSE, BOTH. NULL is that there are no interactions; the action is FALSE, TRUE or BOTH, that is a valid setting, and the people who claim bravely (might be stupidly) that they can do this are the first to fall into these losing class actions. The quantum chip can deal with the premise, but the OS it deals with needs a trinary setting to deal with the BOTH option and that is where the horse is currently absent. As I see it, that stage is likely a decade away (but I could be wrong and I have no idea where IBM is in that setting as the paper is almost a decade old).
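That NULL, TRUE, FALSE, BOTH stage can already be sketched (ironically, still in binary code) by carrying two flags per value; the pair-of-flags representation is merely my illustration, and it is in essence the old four-valued logic of Belnap:

```python
# Each truth value is a pair of flags: (can it be true?, can it be false?).
NULL  = (False, False)  # no interactions, no information at all
TRUE  = (True,  False)
FALSE = (False, True)
BOTH  = (True,  True)   # conflicting evidence: true and false at once

def and_(a, b):
    # The result can be true only if both sides can be true;
    # it can be false as soon as either side can be false.
    return (a[0] and b[0], a[1] or b[1])

print(and_(TRUE, FALSE) == FALSE)  # the classical corners still behave classically
print(and_(BOTH, FALSE) == FALSE)  # conflicting evidence, but falsity wins
print(and_(TRUE, BOTH) == BOTH)    # the BOTH case a binary OS has no slot for
```

The point above stands: emulating this on binary hardware is mere bookkeeping; an OS and storage that treat BOTH as a first-class value is the missing horse.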
But that is the setting I see, so when we go back to the BBC with “AI’s value is forecast in the trillions. But they both live under the shadow of hype and the bursting of bubbles. “I used to believe that quantum computing was the most-hyped technology until the AI craze emerged,” jokes Mr Hopkins.” Fair view, but as I see it the AI bubble is a real bubble with all the dangers it holds as AI isn’t real (at present). Quantum is a real deal and only a few can afford it (hence IBM, Google, Microsoft) and the people who can afford such a system (apart from these companies) are Mark Zuckerberg, Elon Musk, Sergey Brin and Larry Ellison (as far as I know), because a real quantum computer takes up a truckload of energy and the processor (and storage) are massively expensive. How expensive? Well, I don’t think Aramco could afford it, not without dropping a few projects along the way. So you need to be THAT rich, to say the least. To give another frame of reference: “Google unveiled a new quantum chip called Willow, which it claimed could take five minutes to solve a problem that would currently take the world’s fastest super computers 10 septillion years – or 10,000,000,000,000,000,000,000,000 years – to complete.” And that is the setting for True AI, but in this the programming isn’t even close to ready, because this is all problem by problem, all whilst a True AI (like V.I.K.I. in I, Robot) can juggle all these problems in an instant. As I personally see it, that setting is decades away and that is if the previous steps are dealt with. Even as I oppose the thought “Analysts warned some key quantum stocks could fall by up to 62%”, there is nothing wrong with quantum computing; as I see it, it is the expectations of the shareholders that are likely wrong. Quantum is solid, but it is a niche without a paddock. Still, whomever holds the quantum reins will be the first one to hold a True AI and that is worth the worries and the profits that follow.
So whilst I see this article as an eye opener, I don’t really see eye to eye on this side. The writer did nothing wrong. So whilst we might see that Elon Musk was right stating “This week Elon Musk suggested on X that quantum computing would run best on the “permanently shadowed craters of the moon”.” That might work with super magnet drives, quantum locking and a few other settings on the edge of the dark side of the moon. I see some ‘play’ on this, but I have no idea how far this is set and what the data storage systems are (at present) and that is the larger equation here. Because as I see it, trinary data cannot be stored on binary data carriers, no matter how cool it is with liquid nitrogen. And that is at the centre of the pie. How to store it all? Because, like the energy constraints and the processing constraints, the tech firms did not really elaborate on this, did they? So how far that is, is anyone’s guess, but I personally would consider (at present, and uneducated) IBM to be the ruling king of the storage systems. But that might be wrong.
So have a great day and consider where your money is, because when these class actions hit, someone wins and it is most likely the lawyer that collects the fees, the rest will lose just like any other player in that town. So how do you like your coffee at present and do you want a normal cup or a quantum thermal?
There was a game in the late 80’s, I played it on the CBM64. It was called Bubble Bobble. There was a cute little dragon (the player) and the game was to pop as many bubbles as you can. So, fast forward to today. There were a few news messages. The first one is ‘OpenAI’s $1 Trillion IPO’ (at https://247wallst.com/investing/2025/10/30/openais-1-trillion-ipo/), which I actually saw last of the three. We see ridiculous amounts of money pass by. We are given ‘OpenAI valuation hits $762b after new deal with Microsoft’ with “The deal refashions the $US500 billion ($758 billion) company as a public benefit corporation that is controlled by a nonprofit with a stake in OpenAI’s financial success.” We see all kinds of ‘news’ articles giving these players more and more money. It’s like watching a bad hand of Texas Hold’em where everyone is in it with all they have. As the information goes, it is part of the sacking of 14,000 employees by Amazon. And they will not see the dangers they are putting the population in. This is not merely speculation, or presumption. It is the deadly serious danger of bubbles bursting and we are unwittingly the dragon popping them.
So the article gives us “If anyone needs proof that the AI-driven stock market is frothy, it is this $1 trillion figure. In the first half of the year, OpenAI lost $13.5 billion, on revenue of $4.3 billion. It is on track to lose $27 billion for the year. One estimate shows OpenAI will burn $115 billion by 2029. It may not make money until that year.” So as I see it, that is a valuation that is 4 years into the future, in a market as liquid as this one? No one is looking at what Huawei is doing, or whether it can bolster their innovative streak, because when that happens we will get an immediate write-off of no less than $6,000,000,000,000 and it will impact Microsoft (who now owns 27% of OpenAI), and OpenAI will bank on the western world to ‘bail’ them out, not realising that the actions of President Trump made that impossible and that both the EU and the Commonwealth are ready and willing to listen to Huawei and China. That is the dreaded undertow in this water.
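The quoted figures can be put side by side with a little back-of-envelope arithmetic. The numbers below are the article’s own; the way I project them forward is my assumption, nothing more.

```python
# Back-of-envelope check of the figures quoted above (all in billions of USD).
# These are the article's numbers; the projection method is an assumption.
h1_loss = 13.5        # loss in the first half of 2025
h1_revenue = 4.3      # revenue in the first half of 2025
year_loss = 27.0      # the quoted on-track full-year loss
burn_by_2029 = 115.0  # one estimate of cumulative burn by 2029
valuation = 1000.0    # the floated $1 trillion IPO figure

# Annualising the half-year loss matches the quoted full-year track.
assert h1_loss * 2 == year_loss

# Dollars lost for every dollar of revenue in that half year.
loss_per_revenue_dollar = h1_loss / h1_revenue
print(f"Loss per revenue dollar: {loss_per_revenue_dollar:.2f}")

# The floated valuation as a multiple of the projected burn to 2029.
print(f"Valuation / burn-to-2029: {valuation / burn_by_2029:.1f}x")
```

Roughly $3.14 lost per dollar of revenue, and a valuation close to nine times the projected cumulative burn; that is the froth the article is pointing at.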
All whilst the BBC reports “Under the terms, Microsoft can now pursue artificial general intelligence – sometimes defined as AI that surpasses human intelligence – on its own or with other parties, the companies said. OpenAI also said it was convening an expert panel that will verify any declaration by the company that it has achieved artificial general intelligence. The company did not share who would serve on the panel when approached by the BBC.” And there are two issues already hiding under the shallows. The first is data value. You see, data that cannot be verified or validated is useless and has no value, and these AI chasers have been so involved in the settings of this so-called hyped technology that everyone forgets that it requires data. I think that this is a big ‘Oopsy’ part in that equation. And the setting that we are given is that it is pushed into the background, all whilst it needs to have a front and centre setting. You see, when the first few class cases are thrown into the ring, lawyers will demand the algorithm and data settings, and that will scuttle these bubbles like ships in the ocean; the turmoil of those waters will burst the bubbles and drown whomever is caught in that wake. And be certain that you realise that lawyers on a global setting are at this moment gearing up for that first case, because it will give them billions in class actions, and leave it to greed to cut this issue down to size. Microsoft and OpenAI will banter, cry and give them scapegoats for lunch, but they will be out in front and they will be cut down to size. As will Google, and optionally Amazon and IBM too.
I already found a few issues in Google’s settings (actors staged into a movie before they were born is my favourite one) and that is merely the tip of the iceberg. It will be bigger than the one sinking the Titanic and it is heading straight for the Good Ship Lollipop (AI). The spectacle will be quite a sight and all the media will hurry to get their pound of flesh, and Microsoft will be massively exposed at that point (due to previous actions).
A setting that is going to hit everyone. And the second setting is blatantly ignored by the media. You see, these data centers, how are they powered? As I see it, the Stargate program will require (my inaccurate estimate of multiple gigawatts) a massive amount of power. The people in West Virginia are already complaining about what is there, and a multiple of that will be added all over the USA; the UAE and a few other places will see them coming, and these power settings are blatantly short. The UAE is likely close to par, and that sets the dangers of shortcomings. And what happens to any data center that doesn’t get enough power? Yup, you guessed it, it will go down in a hurry. So how is that fictive setting of AI dealing with this?
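The shortfall point above is simple arithmetic: add up what the sites demand, compare it with what the grid can spare. A minimal sketch, where every site name and every gigawatt figure is made up for illustration and comes from no source at all:

```python
# Hypothetical illustration of the power-shortfall point: the sites and all
# gigawatt figures below are invented, not taken from any report.
datacenter_demand_gw = {"site_a": 1.2, "site_b": 0.9, "site_c": 2.5}
grid_headroom_gw = 3.0  # spare regional grid capacity, also invented

total_demand = sum(datacenter_demand_gw.values())
shortfall = total_demand - grid_headroom_gw

if shortfall > 0:
    # Any site that cannot be fed simply goes down, as the text notes.
    print(f"Demand {total_demand:.1f} GW exceeds headroom by {shortfall:.1f} GW")
else:
    print("The grid can carry the load (for now)")
```

With these invented numbers the demand outruns the headroom, which is the whole point: the gap does not announce itself until the load is switched on.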
Then we get a new instance (at https://cyberpress.org/new-agent-aware-cloaking-technique-exploits-openai-chatgpt-atlas-browser-to-serve-fake-content/), where we are given ‘New Agent-Aware Cloaking Technique Exploits OpenAI ChatGPT Atlas Browser to Serve Fake Content’. As I personally see it, I never considered that part, but in this day and age the need to serve fake content is as important as anything; it serves the millions of trolls and the influencers in many ways, and it degrades the data that is shown to the DML and LLMs (aka NIP) in a hurry, reducing data credibility and other settings pretty much off the bat.
So what is being done about that? As we are given “The vulnerability, termed “agent-aware cloaking,” allows attackers to serve different webpage versions to AI crawlers like OpenAI’s Atlas, ChatGPT, and Perplexity while displaying legitimate content to regular users. This technique represents a significant evolution of traditional cloaking attacks, weaponizing the trust that AI systems place in web-retrieved data.” So where does the internet go after that? So far I have been able to get the goods with the Google browser and it does a fine job, but even that setting comes under scrutiny; until they set a parameter in their browser to only look at Google data, they are in danger of floating rubbish at any given corner.
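The mechanics behind the quote are almost embarrassingly simple, which is why it is dangerous. A minimal sketch of agent-aware cloaking as the article describes it: the server looks at the User-Agent header and hands AI crawlers a different page than human visitors. The crawler markers and page bodies below are illustrative assumptions, not the actual exploit code.

```python
# Minimal sketch of "agent-aware cloaking" as described in the quoted article:
# inspect the User-Agent and serve AI crawlers a poisoned page while humans
# see honest content. Marker strings and pages are illustrative assumptions.
AI_CRAWLER_MARKERS = ("gptbot", "oai-searchbot", "perplexitybot", "chatgpt")

def serve_page(user_agent: str) -> str:
    """Return a different page body depending on who appears to be asking."""
    ua = user_agent.lower()
    if any(marker in ua for marker in AI_CRAWLER_MARKERS):
        # The poisoned version, seen only by AI systems that trust the fetch.
        return "<html>Fabricated 'facts' planted for the model</html>"
    # The legitimate version a human in a normal browser would see.
    return "<html>Ordinary, honest page content</html>"

print(serve_page("Mozilla/5.0 (Windows NT 10.0) Chrome/130.0"))
print(serve_page("Mozilla/5.0 AppleWebKit GPTBot/1.1"))
```

Note that nothing here is technically sophisticated; it is the same old search-engine cloaking trick, re-aimed at crawlers whose output is trusted as fact, which is exactly the “weaponizing the trust” point in the quote.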
A setting that is now out in the open, and as we are ‘supposed’ to trust Microsoft and OpenAI until 2029, we are handed an empty eggshell. I am in doubt of it all, as too many players have ‘dissed’ Huawei and they are out there, ready to show the world how it could be done. If they succeed, that 1 trillion IPO is left in the dirt and we get another two years of Microsoft spin on how they can counter that. I put that in the same collection box where I put the claim that Microsoft allegedly had its own, more powerful item that could counter Unreal Engine 5. That collection box is in the kitchen and it is referred to as the trashcan.
Yes, this bubble is going ‘bang’ without any noise, because the vested-interest partners need to get their money out before it is too late. And the rest? As I personally see it, the rest is screwed. Have a great day; the weekend has started for me and it will start in 8 hours in Vancouver (but they can begin happy hour in about one hour), so they can start the weekend early. Have a great one and watch out for the bubbles out there.