That is what we look for, and I found another setting in something called Airport Technology. You see, we see ‘King Salman International Airport, Saudi Arabia’ (at https://www.airport-technology.com/projects/king-salman-international-airport-saudi-arabia/) and the facts are clear. An airport that covers about 57km², positioning it among the largest airports by footprint, and we are told “KSIA is expected to handle up to 120 million travelers by 2030, and up to 185 million passengers and 3.5 million tonnes of cargo by 2050”. But I saw more. You see, on the 26th of September I wrote ‘That one idea’ (at https://lawlordtobe.com/2025/09/26/that-one-idea/) where I saw the presentation of a Near Intelligent Parsing (NIP) thought that could revolutionise lost and found settings in airports, on railway stations and a few other places. The instant winners of this idea would be Dubai International, Abu Dhabi International, London Heathrow and several other places, and now also King Salman International Airport (KSIA). I would make some alterations to it all. Instead of entering it all afterwards, use PDAs to record the data as it happens, and when it is all entered, use what they use in Australian hospitals for wristbands: print that data and attach it to whatever is found. If this is properly done, it will be done in mere minutes and within an hour people can look for the items; they could pick them up on the way back, and in some cases they could be delivered to their hotel. This would be customer service of a much higher degree.
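The record-and-label flow described above can be sketched in a few lines. This is a minimal illustration only, with hypothetical names and fields; the real Interworks presentation and wristband-printing systems are not described here, so everything below is an assumption of how such a handheld entry step could look.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from uuid import uuid4

@dataclass
class FoundItem:
    # Hypothetical record a staff member would enter on a handheld device
    description: str
    location: str  # e.g. "Gate B12", "Terminal 3 food court"
    item_id: str = field(default_factory=lambda: uuid4().hex[:8])
    found_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat(timespec="seconds")
    )

def label_text(item: FoundItem) -> str:
    """Plain-text payload for a wristband-style label printer (assumed format)."""
    return f"{item.item_id} | {item.found_at} | {item.location} | {item.description}"

# Usage: record an item as it is found and produce the label in one step
item = FoundItem(description="Black leather wallet", location="Gate B12")
print(label_text(item))
```

The point of the sketch is the timing: the record exists the moment the item is logged, and the printed label carries the same data, so the lookup side can be live within minutes rather than after a batch entry session.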
And as I see it, the five airports (namely King Khalid International Airport, King Abdulaziz International Airport, King Salman International Airport, Dubai International Airport and Zayed International Airport) could become the frontrunners in making a Near Intelligent Parsing (NIP) solution (not calling it a solution based on DML/LLM AI) that could be the next solution for airports all over the world, and there is some personal gratification in seeing America talk about how great their AI solutions are whilst the little guy in Australia found a solution and hands it over to either Saudi Arabia or the UAE. A solution that was out there in the open, and players like Microsoft (Google and Amazon too) merely left it lying on the floor even though the elements were clearly there, so I hand it over to these two hungry places with the need to see what it can offer them, and in this it isn’t mine. It was presented by Roger Garcia (from Interworks) and the printing setting is already out there. Merely the joining of two solutions and they are done. So as I see it, another folly for Microsoft (honestly, Google and Amazon too). This setting could have been seen by a larger number of players and they all seemingly fell asleep on the job. But I know what Saudis and Emiratis do when they see something that will work for them. They get really active. And so they should.
And consider that these airports will cater to close to half a billion travelers annually, and as such they will need a much better solution than whatever they have at present, and there is the setting for Interworks. And when these solutions set the station towards delivering what was lost, the quality scores will go skywards, and that is the second setting where the west is bottoming out. One presentation set the option from grind to red carpet walking. A setting overlooked by those captains of industry.
Good work guys!
So whilst I start preparing for the next IP thought I am having, there is still some space to counter the US and its flaming EU critique. Let us remind America that the EU was the collection of ideas from American retail, who were tired of dealing with all those currencies, and in the late ’80s AMERICANS decided to sell the Euro to Europeans, all because they couldn’t sort out their currency software (or currency logistics), and now that it starts working against them they cry like little girls. Go cry me a river. In the meantime I will put ideas worth multiple millions online and let them fly for the revenue hungry salespeople (and consultants). In this case it wasn’t my idea, I merely adjusted an idea from Interworks and slapped some IP (owned by others) onto it to make a more robust solution. I merely hope to positively charge my karma for when it matters.
Have a great day, except Vancouver, they are still somewhere yesterday.
That is what hit me when I saw ‘How I Learned to Stop Worrying and Love the Bubble’ (source: Bloomberg), which comes from Dr Strangelove, where we get “How I Learned to Stop Worrying and Love the Bomb”. It started a larger set of thoughts.
I didn’t use that article as Bloomberg uses a paywall. And it starts with yesterday’s article in FXLeaders (at https://www.fxleaders.com/news/2025/12/07/oracles-ai-bubble-bursts-peak-glory-at-345-now-a-217-hangover/) where we see ‘Oracle’s AI Bubble Bursts: Peak Glory at $345, Now a $217 Hangover’ and we are given “ORCL ended the week at $217.58, up 1.52 percent, but it still had a 37 percent hangover from its 52-week high of $345.72. This is a microcosm of growing concerns about debt loads, AI infrastructure spending, and whether the “infinite demand” narrative for AI compute can withstand real-world economics.” As well as “Oracle’s recent decline in stock value reflects broader market concerns regarding the high valuations of AI-related companies, as its forward price-to-earnings (P/E) ratio exceeds 33. The company projects revenues of $166 billion from cloud infrastructure and $20 billion. Investors adopted a “sell the news” mentality, raising questions about the sustainability of these forecasts. Oracle’s fundamentals remain solid. The company experienced 52% growth in cloud infrastructure and has $455 billion in remaining performance obligations (RPO), largely due to its partnership with OpenAI. Currently, the stock is trading at 13.9 times projected earnings for the end of this decade, leading some investors to view the decline as a potential buying opportunity.”
As I see it Oracle passed their burst bubble setting. And whilst we see ups and downs, I would unreservedly trust the Oracle stock to be a beacon of steadiness. It might not be sexy, but it is a trustworthy sign for those who need a decent return on investment.
Or as Peter Sellers would say: “As long as the roots are not severed, all is well. And all will be well in the garden. Yes! There will be growth in the spring!” (source: Being There). It was a better time, and weirdly enough the age of Peter Sellers applies to the days that 2025 brings. And from that setting we get to MyNews (at https://sc.mp/ihj4g) where we see ‘Why 2026 will be the year AI hype collides with reality’, an opinion piece that gives me “The reckoning ahead for the AI bubble promises to reprice expectations, force economic trade-offs and call out circular deals”, but the stronger setting is given with “Speculative assumptions guiding trillions of US dollars in AI investments are colliding with real-world obstacles. Escalating costs, stratospheric stock valuations, tenuous collaborations and energy bottlenecks are compounding the inevitable challenges when new technologies struggle for profitability. Many are worried the bubble may be bursting. Morgan Stanley projects that the cumulative amount spent worldwide on data centers could exceed US$3 trillion by year-end 2028. China’s AI investment could hit 700 billion yuan (US$99 billion) this year, 48 per cent more than last year, according to Bank of America, with the government supplying US$56 billion.” There is a setting for both ‘AI investments are colliding with real-world obstacles’ and ‘worldwide on data centers could exceed US$3 trillion by year-end 2028’, yet I have the weird feeling that it will not get that far. This entire setting will implode before the end of 2027: investors will stop feeling lovingly towards the boom that is not coming and will start feeling pressured, the terms required will grow erratic under the need for greed, and that setting comes along well before the end of 2027 is reached.
Then we get to AOL, which gives us ‘Goldman Sachs issues a warning to AI stock investors’ (at https://www.aol.com/finance/goldman-sachs-issues-warning-ai-103249744.html), where we are given ““Our discussions with investors and recent equity performance reveal limited appetite for companies with potential AI-enabled revenues as investors grapple with whether AI is a threat or opportunity for many companies. While we expect the AI trade will eventually transition to Phase 3, investors will likely require evidence of a tangible impact on near-term earnings to embrace these stocks. Unlike Phase 2, there will likely be winners and losers within Phase 3,” Goldman Sachs US equity strategist Ryan Hammond wrote in a new note on Friday. Hammond thinks AI investment as a percentage of capital expenditures could be nearing a climax. In turn, that sets the stage for overly upbeat AI investors to be let down if earnings don’t come in strongly in future quarters.” As I see it, when we are given these settings everyone seems to get concerned, so then we get in addition “Salesforce (CRM) and Figma (FIG) got drilled on Thursday after their earnings reports didn’t wow. It’s clear that the hype on their earnings calls wasn’t enough to paper over soft areas of the earnings reports. Growing concern on the Street centers around the pace of AI demand by corporations, given what looks to be a slowing US economy.” As I stated before, the need for greed overwhelmed everything.
When the setting of NIP (Near Intelligent Parsing) is not clearly laid out and it is caught in the waves of boards of directors and investors believing that they have the AI solution everyone is looking for, you get a larger setting. Consider that, and consider what happens when OpenAI “fails to wow” the investors, or there is even a delay, and it all comes to a large shutdown. And that is even before we see 9 News giving us “A Sydney data centre that will host ChatGPT is being hailed as a win for Australia, but an expert warns the country lacks the energy supply needed to power it reliably”. I stated a few months ago that there would be an energy problem on numerous levels, and now we are seeing that whilst we are dealing with the fallout of other settings. And less than an hour ago Deutsche Welle gave us ‘Google raises AI stakes as OpenAI struggles to stay on top’ with “Given those strengths, Adrian Cox sees “a very high probability” Google will have the leading model at least into next year — not OpenAI. OpenAI’s priority, he says, is identifying a business model capable of funding a user base that could soon approach a billion people per week.” This is not about OpenAI, I did that already; the larger frame is set in the perception of whatever the bubble is, and I believe there are two factors that the media doesn’t want to include, or is avoiding. First there are the doomsayers trying to burst confidence early in favour of short gains, and then there are people trying to short whatever they can so that they can get another jolt of profit, and they are all out trying to get social media on their side.
So if this is the prologue of what is about to unfold we are in for a jolly good time, and as I see it, there is a chance that Christmas for some will be a disaster.
I wanted to include more of Peter Sellers, like The Party or the Pink Panther, but I am running out of juice. But there was one more thing and I got it from the Independent about an hour ago. It states ‘OpenAI rushes out new AI model in ‘code red’ response to fears about Google’ (at https://ca.news.yahoo.com/openai-rushes-ai-model-code-105822611.html), and that was the snippet I was hoping for. With “The ChatGPT creator will unveil GPT-5.2 this week, The Verge reported, after OpenAI CEO Sam Altman declared a “code red” situation following the launch of Google Gemini 3 last month. Google’s latest AI model surpassed ChatGPT in several benchmark tests, including abstract and visual reasoning, as well as advanced knowledge across scientific disciplines.” But that comes in a setting. You see, I stated in ‘TBD CEO OpenAI’ two days ago (at https://lawlordtobe.com/2025/12/06/tbd-ceo-openai/) “in a software release any of a hundred things can go wrong and they all need to go right at present.” And when things are rushed out, things will go wrong. But there is a snag: for this to happen The Independent article has to be correct, and as they are the only one giving us this, there is no real verification available. But when you are in a stage when bubbles go boom (or plop), all the available facts become important. And I massively wish that a Peter Sellers setting would help me out. And perhaps in view of this, his classic phrase “It’s no matter. When you’ve seen one Stradivarius, you’ve seen them all.” Especially when looking at NIP software. But that is also the snag. I have seen excellent applications and I have seen lesser ones. I reckon that it amounts to who plays the violin: if it is a creative person, that person will find new life in whatever they apply NIP to; if it is a salesperson, it will be about maximizing greed, and that setting tends to have limitations on several degrees.
In addition we are given “The new model was originally scheduled to launch in late December, but will now be released as early as 9 December.” I understand the pressures that come with this, but they had better understand that early launches bring dangers, and investors don’t really like to be spooked (nor do they like dangers). What we see is open to interpretation, and it is a valid thought that my views are also open to interpretation.
So in this I leave you all with a presenting view, not unlike Peter Sellers would say: “To see me as a person on screen would be one of the dullest experiences you could ever wish to experience”, and as I have never been in a movie (at least I don’t remember being in one), you are spared that dull experience. So have a great day and don’t forget to love the bubble (if you haven’t invested your wealth there).
That is the thought I had yesterday, 5 hours after I wrote my piece. I still saw the news appear all over the media, some of it getting a ridiculous amount of attention, so I decided to take another look at some of this. First there was Business Insider (at https://www.businessinsider.com/openai-code-red-chatgpt-advertising-google-search-gemini-2025-12) giving us ‘OpenAI’s Code Red: Protect the loop, delay the loot’ where we see “Focus on improving ChatGPT, and pause lower-priority initiatives. The most striking pause is advertising. Why delay such a lucrative opportunity at a moment when OpenAI’s finances face intense scrutiny? Because in tech, nothing matters more than users.” This was followed by “Every query and click fed a feedback loop: user behavior informed ranking systems, which improved results, which attracted more users. Over time, that loop became an impenetrable moat. Competing with it has proven nearly impossible.
ChatGPT occupies a similar position for AI assistants. Nearly a billion people now interact with it weekly, giving OpenAI an unmatched new window into human intent, curiosity, and decision-making. Each prompt and reply can be fed back into model training, evaluations, and reinforcement learning to strengthen what is arguably the world’s most powerful AI feedback loop.” All this makes sense, it comes with the nearly mandatory “Google’s Gemini 3 rollout has lured new users. If ChatGPT’s quality slips or feels cluttered, defecting to Google becomes easier. Introducing ads now risks exactly that. Even mildly irritated users could view ads as one annoyance too many.” Whilst in the background we are ‘sensitive’ to “OpenAI has already committed to spending hundreds of billions of dollars on infrastructure to serve ChatGPT at a global scale. At some point, those bills will force the company to monetize more aggressively.
If OpenAI manages to build even half of Google’s Search ads business in an AI-native form, it could generate roughly $50 billion in annual profit. That’s one way to fund its colossal ambitions.” This gives OpenAI a two-sided blade in the back. It was a good ploy, but that ploy is deemed to be counterproductive and I get that; however, dropping the ads might sting with the investors, as it was the dimes that they were seeing coming their way, and ChatGPT needs to make a smooth entry all the way to the next update, which will be near impossible in several ways. Google has the inside track now, and whilst there are a few settings that are ‘malleable’ for the users, the smooth look is essential for ChatGPT to continue. And that is before others start looking at the low quality data it verifies against. Google has, as I see it, exactly the same problem, but as I see it, ChatGPT gets it now in advance.
Newcomer (at https://www.newcomer.co/p/openais-code-red-shows-the-power) gives us “In truth, as Newcomer’s Tom Dotan wrote back in April, Google, with all of its formidable assets, was never very far behind. Nor is it currently very far ahead. Anthropic too has always been essentially neck-and-neck with OpenAI on the core technology. The capabilities of the big foundation models, and even some lighter ones like DeepSeek, are broadly similar. Marc Benioff, himself a skilled practitioner in the arts of attention, even claimed this week that the big models will be interchangeable commodities, like disk drives. Yet the perception of who’s on top matters quite a lot at a moment when consumers, enterprise technology buyers, and investors are all deciding where to place some highly consequential long-term bets. That brings us back to Altman’s “Code Red.”” That is a truth in itself. But then the next part: “while the alarm came in a company-wide memo that wasn’t officially announced publicly, we can stipulate that the “leak” of the memo, if not necessarily orchestrated, was almost certainly part of the plan. A media maestro like Altman surely knew that a memo going out to thousands of employees with charged language like “Code Red” was all but guaranteed to make its way to the press. Publicizing a panicked internal reaction to a competitor’s new product might seem like a counter-intuitive way to maintain your reputation as the industry leader.” As I see it, someone in Microsoft marketing earned his dollars that day, but this is a personal feeling; I have no data to back it up. It is now up to Sam Altman to deliver his ‘new’ version in the coming week, and it had better be a great new release, or as I see it, there will be heads rolling all over the floor, and Sam Altman knows that the pressure is up.
I don’t think he is scared, as some media say, but he is definitely worried, because this setting will set the record of $13 billion straight, into or away from Microsoft, and Sam Altman knows this. As such he is probably a little worried, and in a software release any of a hundred things can go wrong and they all need to go right at present.
Then we get “Altman and OpenAI are so good at making news that it’s sometimes hard to tell what’s real.” So, isn’t that the setting all the time? I have always seen Sam Altman as a bad second-hand car salesman. That is my take, but I have had a healthy disgust for salespeople for over 30 years. I am a service person: technical support, customer support. That was always my field. I am not against sales, merely against cleaning up their messes. At times this comes with the territory, shit happens, but those salespeople overselling something just so that they can fill their pipeline and make their numbers are not acceptable to me. To illustrate this, a little setting (devoid of names and brands): “A salesperson came to me with what he needed. We could not do that and I told him, so off he goes calling every technical support person on the planet until he found one that agreed with him, and then he sold the solution to the customer and hung that person’s name on it. I had to clean up the mess and set up a credit invoice, but I went through the whole 9 yards, stretching it out over 30 days, ensuring that he kept his commission.” That is the type I am disgusted with, because the brand as a whole suffers, all for the need of greed. It is short sighted thinking. It goes nowhere, but his monthly revenue was guaranteed. And I feel that Sam Altman is not completely like that, but it is the ‘offset’ of salespeople that I carry within me. For me, protecting the product and the customer are first and foremost on my mind.
Then we get Futurism (at https://futurism.com/artificial-intelligence/openai-is-suddenly-in-major-trouble) where we see ‘OpenAI Is Suddenly in Major Trouble’. OK, is this true? We are given “The financial stakes are almost comical in their magnitude: The company is lighting billions of dollars on fire, with no end in sight; it’s committed to spending well over $1 trillion over the next several years while simultaneously losing a staggering sum each quarter. And revenues are lagging far behind, with the vast majority of ChatGPT users balking at the idea of paying for a subscription.” I don’t agree with this setting. You either pay, or you see advertisements; that is the setting. There are no free rides, and the sooner you realise this, the easier this gets. Then we are given “Meanwhile, Google has made major strides, quickly catching up with OpenAI’s claimed 800 million or so weekly active ChatGPT users as of September. Worse yet, Google is far better positioned to turn generative AI into a viable business — all while minting a comfortable $30 billion in profit each quarter, as the Washington Post points out.” I agree with the setting the Washington Post sets out, and Google does have an advantage, but that still relies on Sam Altman not getting his new version seen as stellar in the coming week. He still has a much larger issue, but that is for later. All this comes at the price of being in the frontrunner team. Easy does it, there is no other way, and the stakes are set rather high. So then we are given “In a Thursday note, Deutsche Bank analyst Jim Reid estimated staggering losses for OpenAI amounting to $140 billion between 2024 and 2029.” This is probably true, but where are the numbers? $140 billion over 5 years is one thing, but what revenue is set against it?
Because if this is still set against a revenue number that OpenAI keeps making, they are doing decently sweet. The numbers were never in debate, the return on investment was, and these stakes are high, there is no debating that; these numbers are either given or they are not.
Then we are given something that makes sense: ““OpenAI may continue to attract significant funding and could ultimately develop products that generate substantial profits and revolutionize the world,” he wrote, as quoted by WaPo. “But at present, no start-up in history has operated with expected losses on anything approaching this scale.” “We are firmly in uncharted territory,” Reid added.” I agree, in several ways, but the revenue is not given, and as such the real deal is absent. Consider YouTube: did anyone see the upside of a $1.65 billion acquisition 20 years ago? It now generates $36.1 billion in annual revenue (2024). Microsoft and OpenAI are banking on that same setting, and Microsoft needs it to get a quality replacement for Clippy; they are banking on ChatGPT. This will only happen if they win over Google, and I have my doubts there. There is no real evidence because the new version isn’t ready yet, but it needs only one hitch to make it all burn down, and Altman knows this. The numbers, or better, the statistics are not on his side. And as I haven’t seen a decent software price fight for a while, I am keeping my thumbs up for Altman (I am however a through and through Google guy). This is a fight worth watching, and I am wondering how it might evolve over the next week.
The stakes are high, the challenge is high, let’s see if Sam Altman rises to the occasion. It’s almost Sunday for me, so have a great day you all. I reckon that Ryan Reynolds is about 6 hours from breakfast in Vancouver now.
There is a setting we at times ignore. When so-called ‘important’ people hide behind movie settings, like Sam Altman does when he calls for ‘Code Red’ (at https://www.theguardian.com/technology/2025/dec/02/sam-altman-issues-code-red-at-openai-as-chatgpt-contends-with-rivals), I tend to get frisky and a little stir crazy, but in the Guardian we are given “According to a report by tech news site the Information, the chief executive of the San Francisco-based startup told staff in an internal memo: “We are at a critical time for ChatGPT.”
OpenAI has been rattled by the success of Google’s latest AI model, Gemini 3, and is devoting more internal resources to improving ChatGPT. Last month, Altman told employees that the launch of Gemini 3, which has outperformed rivals on various benchmarks, could create “temporary economic headwinds” for the company. He added: “I expect the vibes out there to be rough for a bit.”” So after all the presentations and the posturing by OpenAI’s CEO Sam Altman, we are now confronted with the fact that the CEO of Google, Sundar Pichai, smirking and devouring a Beef Vindaloo with naan bread, casually passed Sam Altman by and overtook his setting of ChatGPT with Gemini 3.
We are given “Marc Benioff, the chief executive of the $220bn (£166bn) software group Salesforce, wrote last month that he had switched allegiance to Gemini 3 and was “not going back” after trying Google’s latest AI release. “I’ve used ChatGPT every day for 3 years. Just spent 2 hours on Gemini 3. I’m not going back. The leap is insane – reasoning, speed, images, video … everything is sharper and faster. It feels like the world just changed, again,” he wrote on X.” And if a BI guy like Marc Benioff makes that jump, a lot of others will too, and that is what is truly frightening to Microsoft, which owns a little below 30% of all this. It is nice to have a DML solution that has a population of zero; OK, not zero, but ridiculously small, because as ever (and not surprising) Google is showing its brilliance and overtook the wannabe.
So whilst Sam Altman decided that he was the next Elon Musk, we see (at https://gizmodo.com/sam-altman-wants-his-own-rocket-company-2000695680) that ‘Sam Altman Wants His Own Rocket Company’, and we see here “Altman was reportedly considering investing billions into Stoke Space, a Seattle-based startup that’s developing a reusable rocket, to gain a controlling stake in the company, according to The Wall Street Journal. The talks between Altman and Stoke took place over the summer and picked up in the fall. Although no deal has been made yet, Altman intended on either buying or partnering with a rocket company so that he would be able to deploy AI data centers to space.” So whilst Sammy the Oldman, sorry, Sam Altman, was turning his focus towards space, Sundar Pichai surpassed him in the DML field because Sundar, beside his need for Beef Vindaloo, was seemingly focussed on the data matters of Google, allegedly not with his head in space.
And now we see (at https://futurism.com/artificial-intelligence/sam-altman-code-red) that ‘Sam Altman Is Suddenly Terrified’, where we are given “The all-out brawl that followed in the subsequent years, with AI companies trying to outdo each other with their own offerings as investors threw tens of billions of dollars at the tech, has shifted the dynamics considerably.
And now, the tables have officially turned: OpenAI CEO Sam Altman has declared his own “code red” in a memo to employees this week, as the Wall Street Journal reports, urging staffers to improve the quality of the company’s blockbuster chatbot, even at the cost of delaying other projects.” So as I see it, Sam Altman was ready to be the next rockstar of Microsoft, surpassing all others, but Google (say Sundar Pichai) had been sitting on a throne for the better part of two decades. They had conceded the console war (their Google Stadia) to Amazon with the Amazon Luna, and that might have been a sore loss. So when another ‘upstart’ comes with a great idea, Google responds, and Gemini was the result, or that is at least how I see it. And by the time version three was ready, Gemini was back in the lead, or so they say.
So now Sam Altman is in a bind. He needs to evolve ChatGPT, and that might be what some call a pickle, so whilst Sam Altman was looking at the sky, Google took the time to overtake him with Gemini 3. And now the storm has reached the shores of the financial industry. Now Microsoft is in a pickle, because its original investment in OpenAI marked the start of a partnership between the cloud computing firm and the AI research company that has since grown to more than US$13bn in total commitments. Microsoft and OpenAI are bound to ChatGPT to the nihilistic setting of these firms losing $13 billion in value, so when that happens, what more will unfold? I am not stating that this will burst the AI bubble, but as I see it Sam Altman will see his halo decrease to look a lot like a zero, and Microsoft sees its tally of failures increase to two: first builder.ai, and now we see that Microsoft is surpassed again by Google, which is not a great surprise to me.
And as Futurism gives us “Google, though, has a major financial advantage by already being profitable. It can afford to spend aggressively on data centers, at least for the time being. That’s besides Google Search having been the de facto search engine on the internet for decades, giving it access to a vast number of existing users who could be swayed by its AI offerings.
Altman claimed in the memo that the company has an ace up its sleeve in the form of an even more powerful reasoning model that’s set to be released as early as next week, according to the WSJ, likely a direct response to Google’s Gemini 3.” So is this a simple setting of a little time gap, or is OpenAI now in more trouble than anyone thinks it is? I actually do not know, but there is a setting that I personally like. I was always Google minded. I was struck in my soul when they dropped the Google Stadia, as I had a plan to give it 50,000,000 subscriptions in stage one and really add to that beyond, knocking Microsoft off its illusionary perch. But alas, it was not to be, and Amazon had the inside track from that point onwards. And I personally feel that the stage of “to be released as early as next week” is likely a want-to-be-real presentation. Sam Altman is trying to get any moment he can get and that is fine, but as I see it, it might be timing, and people like Sam Altman will try any way to keep their cushy setting. I am not judging, but is the stage where Gemini 3 is surpassed likely? I doubt it, using the words from Marc Benioff stating “not going back”, and that is a powerful setting, one that creeps fear into the hearts of Sam Altman and Satya Nadella, as I personally see it.
Have a great day, my weekend has begun and Vancouver will join us in 15 hours.
It is a specific sound, nothing compares to it, and it isn’t entirely fictional. Some might remember the Walter Hill movie Streets of Fire (1984) where two men slug it out with hammers, but that is not it. When a warhammer slams into metal armour, the armour becomes a drum and that sound is heard all over the battlefield (the wearer of that armour hears a lot more than that sound), but it is distinct, and I reckon that some of those hammer wielders would have created some kind of crescendo on those knights. So that was ‘ringing’ in my ears when NPR gave us ‘Here’s why concerns about an AI bubble are bigger than ever’ a few days ago (at https://www.npr.org/2025/11/23/nx-s1-5615410/ai-bubble-nvidia-openai-revenue-bust-data-centers) and what do you know. They made the same mistake, but we’ll get to that.
The article reads quite nicely and Bobby Allyn did a good job (beside the one miss), but let’s get to the starting blocks. It starts with “A frothy time for Huang, to be sure, which makes it all the more understandable why his first statement to investors on a recent earnings call was an attempt to deflate bubble fears. “There’s been a lot of talk about an AI bubble,” he told shareholders. “From our vantage point, we see something very different.”” So then we get three different names all giving ‘their’ point of view with ““The idea that we’re going to have a demand problem five years from now, to me, seems quite absurd,” said prominent Silicon Valley investor Ben Horowitz, adding: “if you look at demand and supply and what’s going on and multiples against growth, it doesn’t look like a bubble at all to me.” Appearing on CNBC, JPMorgan Chase executive Mary Callahan Erdoes said calling the amount of money rushing into AI right now a bubble is “a crazy concept,” declaring that “we are on the precipice of a major, major revolution in a way that companies operate.” Yet a look under the hood of what’s really going on right now in the AI industry is enough to deliver serious doubt, said Paul Kedrosky, a venture capitalist who is now a research fellow at MIT’s Institute for the Digital Economy.” All three names give a nice ‘presentation’ to appease the rumblings within an investor setting. Ben Horowitz, Mary Callahan Erdoes and Paul Kedrosky are seemingly mind-set on raking in whatever they can, and then the fourth shines a light on this (not in the way he intended); we see “Take OpenAI, the ChatGPT maker that set off the AI race in late 2022. Its CEO Sam Altman has said the company is making $20 billion in revenue a year, and it plans to spend $1.4 trillion on data centers over the next eight years. That growth, of course, would rely on ever-ballooning sales from more and more people and businesses purchasing its AI services.” Did you see the setting?
He is making $20 billion and investing $1.4 trillion; that represents a much larger slice, and the $20 billion is likely to grow (perhaps even to $100 billion a year). And now the sides of hammers are slamming into armour. Even then it would take 14 years to break even, and does anyone have any idea how long 14 years is? I reckon that $1.4 trillion (at 4.5%) implies an interest bill of $63,000,000,000 a year. That is almost a full year of revenue, and that is the hopeful glare if he is making $100 billion a year. So what gives, because at some point investors will decide that the formula is off. There is no tax deductibility here; that is money that is due, the banks will get their dividend, and whoever thinks that all this goes at zero percent is ludicrously asleep. And that is before the missing element comes out.
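That arithmetic can be checked in a few lines. The $1.4 trillion spend and $20 billion revenue come from the article; the 4.5% cost of capital and the hopeful $100 billion a year are assumptions made in this post. A back-of-the-envelope sketch, not a financial model:

```python
# Back-of-the-envelope check on the OpenAI figures discussed above.
# $1.4tn spend and $20bn revenue are from the article; the 4.5% rate
# and the $100bn optimistic revenue are this post's own assumptions.
spend = 1.4e12           # planned data-center spend, USD
rate = 0.045             # assumed annual cost of capital
revenue_now = 20e9       # reported annual revenue
revenue_hoped = 100e9    # optimistic future annual revenue

interest = spend * rate
years_to_recoup = spend / revenue_hoped   # ignores costs and interest

print(f"Annual interest alone: ${interest / 1e9:.0f}bn")
print(f"Years to recoup at $100bn/yr: {years_to_recoup:.0f}")
```

Even in this friendliest version, the interest bill alone is roughly three years of today’s revenue.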
So then in comes Daron Acemoglu with “A growing body of research indicates most firms are not seeing chatbots affect their bottom lines, and just 3% of people pay for AI, according to one analysis. “These models are being hyped up, and we’re investing more than we should,” said Daron Acemoglu, an economist at MIT, who was awarded the 2024 Nobel Memorial Prize in Economic Sciences.” He comes at this from another angle and tells us that we are investing more than we should. All these firms see the pot at the end of the rainbow, but there is the hidden snag: we learned early in life that the rainbow is the result of sunlight on rainwater, it always curves to be ‘just’ beyond the horizon, it never hits the ground and there will be no pot of gold at the end of it according to Lucky the Leprechaun (I have his fax number). That was not the side I am aiming for, but it shows the idiocy we see at present. They are all investing too much into something that does not yet exist, but that is beside the point. There are massive options for DML and LLM solutions, but do you think that this is worth trillions? It follows when we get to “Nonetheless, Amazon, Google, Meta and Microsoft are set to collectively sink around $400 billion on AI this year, mostly for funding data centers. Some of the companies are set to devote about 50% of their current cash flow to data center construction.
Or to put it another way: every iPhone user on earth would have to pay more than $250 to pay for that amount of spending. “That’s not going to happen,” Kedrosky said.” This comes from Paul Kedrosky, a venture capitalist who is now a research fellow at MIT’s Institute for the Digital Economy, and he is right. But that too is not the angle I am going for. There are two voices, both within their field of vision, something they know, and they are seeing the edges of what cannot be contained; one even got a Nobel Memorial Prize for his efforts (a past accomplishment). And I reckon all these howling bitches want their government to ‘save’ them when the bough breaks on these waves. So Andy Jassy, Sundar Pichai, Mark Zuckerberg and Satya Nadella (Amazon, Google, Meta and Microsoft) will expect the tax system to bail them out and there is no real danger to them; they might get fired but they’ll survive this. Andy Jassy is as far as I know the poorest of the lot and he has 500 million, so he will survive in whatever place he has. But that is the danger. The investors and the taxpayers (you and me) get to suffer from this greed-filled frenzy.
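Kedrosky’s iPhone comparison is easy to reproduce. The $400 billion is from the article; the roughly 1.5 billion active iPhones is my own assumed round figure, not something NPR states:

```python
# Sketch of the per-iPhone-user comparison quoted above. The $400bn is
# from the article; the 1.5bn active iPhones is an assumed round figure.
capex = 400e9            # combined big-tech AI spend this year, USD
iphone_users = 1.5e9     # assumed active iPhones worldwide

per_user = capex / iphone_users
print(f"${per_user:.0f} per iPhone user")
```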
But then we get “Analyst Gil Luria of the D.A. Davidson investment firm, who has been tracking Big Tech’s data center boom, said some of the financial maneuvers Silicon Valley is making are structured to keep the appearance of debt off of balance sheets, using what’s known as “special purpose vehicles.””, as well as “The tech firm makes an investment in the data center, outside investors put up most of the cash, then the special purpose vehicle borrows money to buy the chips that are inside the data centers. The tech company gets the benefit of the increased computing capacity but it doesn’t weigh down the company’s balance sheet with debt.” And here we get another failure. It is the failure of the current administration, which does not adapt the tax laws to these off-balance-sheet constructions, and the taxpayer is the larger stakeholder in this. We get an example in the article stating “Blue Owl Capital and Meta for a data center in Louisiana”; this is only part of the equation. You see, they are ‘spreading the love’ around because that is the ‘safe’ setting and they know what comes next. You see, the Verge gave us ‘Nvidia says some AI GPUs are ‘sold out,’ grows data center business by $10B in just three months’ (at https://www.theverge.com/tech/824111/nvidia-q3-2026-earnings-data-center-revenue) and that is the first part of the equation. What do you think will power all this? That is the angle I am holding onto. All these data centers will need energy and they will take it away from people like you and me. And only 4 hours ago we saw ‘Nvidia plays down Google chip threat concerns’ and it is all about the AI race, which is as I said non-existent; but the energy required to field these hundreds of thousands of GPUs is, and no one is making a table of what is required to fuel these data centers because it is not on ‘their plate’. Yet the need for energy becomes real, and really soon too.
We do not have the surplus to take care of this, and then places like Texas give us “Electricity demand is also going up, with much of it concentrated in Texas due to “data centers and cryptocurrency mining facilities,”” with the added “Driving the rise in wholesale prices next year is primarily a projected 45% increase at the Electric Reliability Council of Texas-North pricing hub. “Natural gas prices tend to be the biggest determinant of power prices,” the EIA said. “But in 2026, the increase in power prices in ERCOT tends to reflect large hourly spikes in the summer months due to high demand combined with relatively low supply in this region.”” Now this is not true for the whole world, but we see here a “projected 45% increase” and that is for 2026. So where are these data centers, what are their energy surpluses and what is to come? No one is looking at that. When any data centre is hit with a brownout (a partial and temporary drop in voltage in an electrical power supply), it shuts down; steady energy is paramount for all its GPUs and there had better not be any issue with supply. I saw this a year ago, so why isn’t the media looking into this? I saw one article where that question was raised but never answered, and the media just shoved it aside; as I see it, it should be at the forefront of any media setting. It will happen and the people will suffer, but as I see it (and mentioned before), the media is whoring for digital dollars and they need their advertisement money from these 4 places and a few more, all ready for advertisement attention, and the media plays ball because they want their digital dollars (as I personally see it).
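To give the energy worry some scale, here is a rough sketch of what a single large GPU data centre draws. Every number is an assumption for illustration (~700 W per H100-class GPU, a facility overhead factor of 1.3), not a measurement of any named facility:

```python
# Rough power draw of a large GPU data centre. All figures are
# illustrative assumptions, not measurements of any real site.
gpus = 200_000           # "hundreds of thousands of GPUs"
watts_per_gpu = 700      # assumed draw of an H100-class accelerator
pue = 1.3                # assumed power usage effectiveness (overhead)

it_load_mw = gpus * watts_per_gpu / 1e6   # IT load in megawatts
facility_mw = it_load_mw * pue            # including cooling/overhead
print(f"IT load: {it_load_mw:.0f} MW, facility draw: {facility_mw:.0f} MW")
```

Something in the region of 180 MW of continuous draw is on the order of a sizeable city’s households, which is exactly the competition for supply described above.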
So whilst the NPR article is quite nice, the one element missing is what makes this bubble rear its ugly head, because too many want their coins for their effort and it is what is required. But what does the audience require? And the audience is you and me, dear reader. I have pinned a lot of my reasoning on energy falling short, but there is only so much I can do and it is going to be 32 degrees (Celsius) today, so what happens when the energy slows down for 5.56 million people in Sydney? Because the data centers will make a first demand on their energy providers, or they will slap a lawsuit worth billions on that energy provider. And we the people (wherever we are) are facing what comes next: keeping data centers cool and powered whilst we the people boil in our own homes. That is the future I am predicting and people think I am wrong, but did they make the calculation of what these data centers require? Are they seeing the energy shortfalls that are impeding these data centers? And the energy providers will take the money and the contracts, because who says no to that money? That is exactly what we are facing in the short run. And the investors? Well, I don’t really care about them; they invested, and if you aren’t willing to lose it all with a mere card to help you through (card below), you aren’t a real investor, you are merely playing it safe and in that world there are no bubbles.
Remind me, how did that end in 2008? The speculated cost was set at $16 trillion in U.S. household wealth, and this bubble is significantly larger than the 2008 one, and this time they are going all in with money most of them do not have. So that is what is coming and my fears do not matter, but the setting that NPR gives us all with ‘Here’s why concerns about an AI bubble are bigger than ever’ matters and that is what I see coming.
So have a great day and never trust one source, always verify what you read through other sources. That part was shown to be true when we all saw (from various sources) that “The United States is on track to lose $12.5 billion in international travel spending this year” whilst my calculations made it between 80 and 130 billion, and some laughed at my predictions a few months earlier and I get that. I would laugh too when those ‘economists’ state one amount and I come with a number over 700% larger. I get that, but now (apparently) there is an Oxford Economics report that gives us “Damning report says U.S. tourism faces $64 billion blow as Trump administration’s trade wars drive away foreign visitors and cut spending”, so I have that to chase down now, but it shows that my numbers were mostly spot on, at least a lot better than whatever those economists are giving you. So never trust merely one source, even if they believe themselves to be on the right track. But that is enough about that; consider why some bubble settings are underexposed, and when you see that NPR gave you three additional angles and missed mine (likely not intentional), consider what those investment firms are overlooking (likely intentional), because the setting that they are willing to lose 100% is ludicrous, they have settings for that, and as the government bailed them out the last time, they think it will save them this time too.
Have a great day today, I need an ice cream at 4:30 in the morning. I still have some, so yay me.
That is what seems to be happening. The first one was a simple message that Oracle is headed for doom according to Wall Street (I don’t agree with that), but it made me take another look, and to make it simpler I will look at the articles chronologically.
The first one was the Wall Street Journal (4 days ago), with ‘Oracle Was an AI Darling on Wall Street. Then Reality Set In’ (at https://www.wsj.com/tech/oracle-was-an-ai-darling-on-wall-street-then-reality-set-in-0d173758) with “Shares have lost gains from a September AI-fueled pop, and the company’s debt load is growing” with the added “Investors nervous about the scale of capital that technology companies are plowing into artificial-intelligence infrastructure rattled stocks this week. Oracle has been one of the companies hardest hit” but here is the larger setting. As I see it, these stocks are manipulated by others, whoever they are: hedge funds, their influencers and other parties calling for doom, all whilst the AI bubble is exploited by unknown parties gratifying themselves. I know that this sounds ominous and non-specific, but there is no way most of us (including people with a much higher degree of economic knowledge than I will ever have) can see who is pulling which string. And the stage of bubble endearment is out there (especially on Wall Street). Then 14 hours ago we get ‘Oracle (ORCL): Evaluating Valuation After $30B AI Cloud Win and Rising Credit Risk Concerns’ (at https://simplywall.st/stocks/us/software/nyse-orcl/oracle/news/oracle-orcl-evaluating-valuation-after-30b-ai-cloud-win-and/amp) where we see “Recent headlines have only amplified the spotlight on Oracle’s cloud ambitions, but the past few months have been rocky for its share price. After a surge tied to AI-driven optimism, Oracle’s 1-month share price return of -29.9% and a year-to-date gain of 19.7% tell the story: momentum has faded sharply in the near term. However, the 1-year total shareholder return still sits at 4.4% and its five-year total return remains a standout at nearly 269%.
This combination of volatility and long-term outperformance reflects a market grappling with Oracle’s rapid strategic shift, balance sheet risks, and execution on new contracts.” I am not debating the numbers, but no one is looking at the technology behind this. As I see it, places like Snowflake and Oracle have the best technology for these DML and LLM solutions (OK, there are a few more) and for now, whoever has the best technology will survive the bubble; whoever is betting on that AI bubble going their way needs Oracle at the very least, and not in a weakened state, but that is merely my point of view. So last we get the Motley Fool a mere 7 hours ago giving us ‘Billionaire David Tepper Dumped Appaloosa’s Stake in Oracle and Is Piling Into a Sector That Wall Street Thinks Will Outperform’ (at https://www.fool.com/investing/2025/11/23/billionaire-david-tepper-dumped-appaloosas-stake-i/) where we see “Billionaire David Tepper’s track record in the stock market is nothing short of remarkable. According to CNBC, the current owner of the Carolina Panthers pro football team launched his hedge fund Appaloosa Management in 1993 and generated annual returns of at least 25% for decades. Today, Tepper still runs Appaloosa, but it is now a family office, where he manages his own wealth.” Now we get the crazy stuff (this usually happens when I speculate). This gives us a person like David Tepper who might like to exploit Oracle to make it seem more volatile and exploit a shorting of options to make himself (a lot) richer. And when clever people become self-managing, they tend to listen to their darker nature. Now I could be all wrong, but when Wall Street is going after one of the most innovative and secure companies on the planet just to satisfy the greed of Wall Street, I get a little agitated. So could it be that Oracle was drawn into the ‘fad’ and lost it?
No, they clearly stated that there would be little return until 2028, a decent prognosis, and with the proper settings of DML and LLM, finding better and profitable revenue-making streams by 2027 is a decent target to have and seemingly an achievable one. In the meantime IBM can figure out (evolve) their shallow circuits and start working on their trinary operating system. I have no idea where they are at present, but the idea of this getting ready for a 2040 release is not out of the question. In the meantime Oracle can fill the void for millions of corporations that already have data, warehouses and form settings. And there are plenty of other providers of data systems.
So when we are given “The tech company Oracle is not one of the “Magnificent Seven,” but it has emerged as a strong beneficiary of artificial intelligence (AI), thanks to its specialized data centers that contain huge clusters of graphics processing units (GPUs) to train large language models (LLMs) that power AI.
In September, the company reported strong earnings for the first quarter of its fiscal 2026, along with blowout guidance. Remaining performance obligations increased 359% year over year to $455 billion, as it signed data center agreements with major hyperscalers, including OpenAI.”
So whilst we see “Oracle is not one of the “Magnificent Seven,” but it has emerged as a strong beneficiary of artificial intelligence (AI)” we need to take a different look at this. Oracle was never a strong beneficiary of AI; it was a strong vendor of data technologies, and AI is about data. In all of this, someone is ‘fitting’ Oracle into a stage that everyone just blatantly accepts without asking too many questions (the media, for example). With the additional “to train large language models (LLMs) that power AI”, the hidden gem is in the second statement. AI and LLM are not the same. You only partially train real AI; this is different, and those ‘magnificent seven’ want you to look away from that. So, when was the last time that you actually read that AI does not yet exist? That is the created bubble, and players like Oracle are indifferent to this, unless you spike the game. It has stocks, it has options, and someone is turning influencers to their own use of greed. And I object to this; Oracle has proven itself for decades, longer than players like Microsoft and Google. So when we see ‘Buying the sector that Wall Street is bullish on’ we see another hidden setting: the bullishness of Wall Street. Do you think they don’t know that AI is a non-existing setting? So why go after the one technology that will make data work? That setting is centre in all this and I object to those who go after Oracle. So when you answer the call of reality, consider who is giving you the AI setting and who is giving you the DML/LLM stage of a data solution that can help your company.
Have a great day we are seemingly all on Monday at present.
OK, I am over my angry spat from yesterday (still growling though) and in other news I noticed that Grok (Musk’s baby) cannot truly deal with multidimensional viewpoints, which is good to know. But today I tried to focus on Oracle. You know, whatever AI bubble hits us (and it will) Oracle shouldn’t be as affected as some of the data vendors who claim that they have the golden AI child in their crib (a good term to use a month before Christmas). I get that some people are ‘sensitive’ to the doom speakers we see all over the internet and some will dump whatever they have to ‘secure’ what they have, but the aim of those doom speakers is to align THEIR alleged profit needs with others dumping their future. I do not agree. You see, Oracle, Snowflake and a few others offer services and they are captured by others. Snowflake has a data setting that can be used whether AI comes or not, whether people need it or not. And they will be hurt when firms go ‘belly up’ because it will count as lost revenue. But that is all it is: lost revenue. And yes, both will be hurting when the AI bubble comes crashing down on all of us. But the stage that we see is that they will skate off the dust (in one case snow) and that is the larger picture. So I took a look at Oracle and behold, on Simply Wall Street we get ‘Oracle (ORCL) Is Down 10.8% After Securing $30 Billion Annual Cloud Deal – Has The Bull Case Changed?’ (at https://simplywall.st/stocks/us/software/nyse-orcl/oracle/news/oracle-orcl-is-down-108-after-securing-30-billion-annual-clo) with these bullet points:
Oracle recently announced a major cloud services contract worth US$30 billion annually, set to begin generating revenue in fiscal 2028 and nearly tripling the size of its existing cloud infrastructure business.
This deal offers Oracle significantly greater long-term growth visibility and serves as a major endorsement of the company’s aggressive cloud and artificial intelligence strategy, even as investors remain focused on rising debt and credit risks.
We’ll examine how this multi-billion-dollar cloud contract could reshape Oracle’s investment narrative, particularly given its bold AI infrastructure expansion.
So they triple their ‘business’ and they lose 10.8%? It leads to questions. As I personally see it, Wall Street is trying to insulate itself from the bubble that other (mostly) software vendors bring to the table. And Simply Wall Street gives us “To believe in Oracle as a shareholder right now is to trust in its transformation into a major provider of cloud and AI infrastructure to sustain growth, despite high debt and reliance on major AI customers. The recent announcement of a US$30 billion annual cloud contract brings welcome long-term visibility, but it does not change the near-term risk: heavy capital spending and dependence on sustained AI demand from a small set of large clients remain the central issues for the stock.” And I can get behind that train of thought, although I think that Oracle and a few others are decently protected from that setting. No matter how the non-existent AI goes, DML needs data and data needs secure and reliable storage. So in comes Oracle in plenty of these places and they do their job. If 90% of that business goes boom, they will already have collected on these service terms for that year at least, 3-5 years if they were clever. So no biggy: 3-5 years collected is collected revenue, even if that firm goes bust after 30 days; they might get over it (not really).
And then we get two parts: “Oracle Health’s next-generation EHR earning ONC Health IT certification stands out. This development showcases Oracle’s commitment to embedding AI into essential enterprise applications, which supports a key catalyst: broadening the addressable market and stickiness of its cloud offerings as adoption grows across sectors, particularly healthcare. In contrast, investors should be aware that the scale of Oracle’s capital commitment brings risks that could magnify if…” OK, I am on board with these settings. I kinda disagree, but then I lack economic degrees and a few people I do know will completely see this part. You see, I personally see “Oracle’s commitment to embedding AI into essential enterprise applications” as a plus all across the board. Even if I do believe that AI doesn’t exist, the data will be coming and when it is ironed out, Oracle was ready from the get-go (when they translate their solutions to a trinary setting). And I do get (but personally disagree with) “the scale of Oracle’s capital commitment brings risks that could magnify if”. Yes, there is risk, but as I see it Oracle brings a solution that is applicable to this frontier, even if it cannot be used to its full potential at present. So there is a risk, but when these vendors pay 5 years upfront, it becomes instant profit at no use of their clouds. You get a cloud with a capacity for 15 million, but it is inhabited by 1.5 million. As such they have a decade of resources to spare. I know that things are not that simple and there is more, but what I am trying to say is that there is a level of protection that some have and many will not. Oracle is on the good side of that equation (as is Snowflake, Azure, iCloud, Google Gemini and whatever IBM has; oh, and the chips of Nvidia are also decently safe until we know how Huawei is doing).
And the setting we are also given, “Oracle’s outlook forecasts $99.5 billion in revenue and $25.3 billion in earnings by 2028. This is based on annual revenue growth of 20.1% and an earnings increase of $12.9 billion from current earnings of $12.4 billion”, matters, as Oracle is predicting that revenue comes calling in 2028, so anyone trying to dump their stock now is as stupid as they can be. They are telling their shareholders that for now revenue is thimble-sized, but after 2028, which is basically 24 months away, the big guns come calling and the revenue pie is shared with its shareholders. So you do need brass balls to do this and you should not do this with your savings, that is where hedge funds come in, but the view is realistic. The other day I saw Snowflake use DML in the most innovative way; one of their speakers showed me a new lost and found application and it was groundbreaking. Considering the amount of lost and found out there at airports and bus stations, they showed me how a process of a month was reduced to a 10-minute solution. As I saw it, places like Dubai, London and Abu Dhabi airport making this beneficial for their 90 million passengers is almost unheard of, and I am merely mentioning three of dozens upon dozens of needy customers all over the world. A direct consequence of ‘AI’ particulars (I still think it is DML with LLM), but no matter the label, it is directly applicable to whoever has such a setting, and whilst we see the stage of ‘most usage fails in its first instance’, this is not one of them; in those places Oracle/Snowflake is a direct win. A simple setting that has groundbreaking impact. So where is the risk there? I know places have risks, but to see this simple application work shows that some are out there fighting the good fight on an achievable setting, and no IP was trained upon and no class actions are to follow. I call that a clear win.
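The quoted forecast is at least internally consistent, which is easy to verify: growing at 20.1% a year from Oracle’s fiscal-2025 revenue (roughly $57.4 billion, my figure, not part of the quote) lands almost exactly on the $99.5 billion for 2028:

```python
# Sanity-check the quoted outlook: $99.5bn revenue by 2028 at 20.1%
# annual growth. The ~$57.4bn fiscal-2025 base is this post's own
# assumption, not stated in the quoted text.
base = 57.4e9
growth = 0.201
years = 3

projected = base * (1 + growth) ** years
print(f"Projected 2028 revenue: ${projected / 1e9:.1f}bn")
```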
So, before you sell your stock in Oracle like a little girl, consider what you have bought and consider who wants you to sell, and why, because they are not telling you this for your sake, they are telling you for their own sake. I am not telling you to sell anything. I am merely telling you to consider what you bought and what actual risks you are running if you sell before 2029. It is that simple.
Have a great day (yes Americans too, I was angry yesterday), These bastards in Vancouver and Toronto are still enjoying their Saturday.
Like Poodles, I seem to have misplaced my marbles. AKA I lost them completely. Now only 9 hours ago I shouted that I am sick of the AI bubble, but a few minutes ago I got called back into that fray. You see, I was woken up by an image.
This is the image and it gives us ‘Oracle’s $300bn OpenAI deal is now valued at minus $74bn’; there is no way this is happening. You see, I have clearly stated that the bubble is coming. But in this, Oracle has a set state of technologies it is contributing. As such, where is the bubble blowing up in the face of OpenAI and Microsoft? In this, the Financial Times (at https://www.ft.com/content/064bbca0-1cb2-45ab-85f4-25fdfc318d89) is giving us ‘Oracle is already underwater on its ‘astonishing’ $300bn OpenAI deal’. So where is the damage to the other two? We are given “OK, yes, it’s a gross simplification to just look at market cap. But equivalents to Oracle shares are little changed over the same period (Nasdaq Composite, Microsoft, Dow Jones US Software Index), so the $60bn loss figure is not entirely wrong. Oracle’s “astonishing quarter” really has cost it nearly as much as one General Motors, or two Kraft Heinz. Investor unease stems from Big Red betting a debt-financed data farm on OpenAI, as MainFT reported last week. We’ve nothing much to add to that report other than the below charts showing how much Oracle has, in effect, become OpenAI’s US public market proxy:” There might be some loss on Oracle (if that happens) and later on we were given (after a stack of graphics, see the story for that) “But Oracle is not the only laggard. Broadcom and Amazon are both down following OpenAI deal news, while Nvidia’s barely changed since its investment agreement in September. Without a share price lift, what’s the point? A combined trillion dollars of AI capex might look like commitment, but investment fashions are fickle.” And in this, I still have doubts on the reporting side of things. From my own feelings (not hard numbers), Oracle and Amazon are the best players to survive this as their technology is solid. When AI does come, they are likely the only two to set it right, and the entire article goes out of its way to avoid Microsoft’s exposure.
But in all this Microsoft has made significant investments in OpenAI and has rights to OpenAI’s Intellectual Property (IP). This comes down to Microsoft holding a stake in OpenAI’s for-profit arm, OpenAI Group PBC, valued at approximately $135 billion, which represents about 27% of the company. So how is Microsoft not mentioned?
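The stake arithmetic above is worth making explicit: if about 27% of OpenAI’s for-profit arm is worth about $135 billion, the implied whole-company valuation follows directly, and any write-down of OpenAI flows straight through to that stake:

```python
# Implied valuation behind Microsoft's reported stake, using the
# figures quoted in this post ($135bn for roughly 27%).
stake_value = 135e9
stake_fraction = 0.27

implied_valuation = stake_value / stake_fraction
print(f"Implied OpenAI valuation: ${implied_valuation / 1e9:.0f}bn")
```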
As such, how come Oracle is underwater? Is it testing scuba gear? And if the article is indeed true, what is the value of OpenAI now? Because that will also drown the 27% of it (holding the name Microsoft), and that image is missing from that equation. If this is the bubble bursting, which might be true (a year before I predicted it), then it stands to reason that this is also impacting Amazon, Google, IBM, Microsoft and OpenAI. As such this article seems a little far-fetched, a little immature and largely premature by not naming all the players in this game. I personally thought that Oracle would be one of the winners in all of this, or better stated, the smallest loser in this multi-trillion bubble.
So what gives? And in this I might be incorrect and largely missing the point, but a write-off to the amount of nearly half a trillion dollars has more underwriters, and mentioning merely Oracle is a little far-fetched, no matter how fashionable they all seem to be. For that matter, as Microsoft has been ‘advocating’ their Copilot program, how deep are they in? Because the Oracle write-off will be squarely in the face of that Nadella dude. As he seemingly already missed the builder.ai setting, this might be the one ending his career, and whoever comes next might want to commit suicide instead of accepting whatever promotion is coming their way (I know it is a dark setting), but the image is a little disconcerting at present. And the images that the Financial Times gives us, like the hyperscaler capex, show Microsoft to be 3 times deeper in the water than Oracle is, so why aren’t they mentioned in the text? And in those same images Amazon is in way over its head, and that is merely the beginning of a bubble going sideways on everyone. As such, is this a storm in a teacup? If that is so, why is Oracle underwater? And there is ample reason to see me as a non-economist; I never was one nor wanted to be one. But what the media gives us raises questions. And I agree, Oracle has a long way to break even, but if they do not, neither do Amazon, Microsoft and OpenAI, and that part is seemingly missing too. If anything, Larry Ellison could pay the shortcomings with his petty cash (he allegedly has $250,000 million), that is his own dime, and the others won’t even come near that amount.
So whilst we wait for someone to make sense of this all, we need to walk carefully and not panic, because these settings tend to be the stage where the panicky sell what they can for dimes on the dollar, and that is not how I want to see players like Microsoft jump that shark. This is not any kind of anti-Microsoft deal; it is them calling the others not innovative whilst there isn’t an innovative bone in that cadaver. So whilst we want to call the cards, the only thing I do is call the cards of the Financial Times and likewise reporting media, calling out the missing settings of loss towards Microsoft and OpenAI. It is the best I can do; I know an economics major who could easily do that, but he is busy running Canada at the moment.
Have a great day and I apologize for causing an optional panic, which was not my intention.
That is the setting, and I introduced the readers to this setting yesterday, but there was more; there always is. Labels are how we tend to communicate: there is the label of ‘Orange baboon’, there is the label of ‘village idiot’ and there are many more labels. They tend to make life ‘easy’ for us. They are also the hidden trap we introduce to ourselves. In the ‘old’ days we even ran Business Intelligence on this basis, because it was easy for the people running these things.
An example can be seen in
TABLES / v1 v2 v3 v4 v5 BY (LABELS) / count.
And we would see the accompanying table with, on one side, completely agree, agree, neutral, disagree and completely disagree, if that was the 5-point labelling setting we embraced, and as such we saw a ‘decently’ complete picture and we all agreed that this was how it had to be.
But the not-so-hidden snag is that these labels are ordinal (at best) and the setting of Likert scales (their official name) is not set in a scientific way; there is no equally adjusted difference between the numbers 1, 2, 3, 4 and 5. That is just the way it is. And in the old days this was OK (as the feeling went). But today, in what they call the AI setting and I call NIP at best, that setting is too dangerous. Now, set this by today’s standards.
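The TABLES-style count above takes only a few lines to reproduce; the point is that the output looks complete while the 1-5 codes remain ordinal labels, not measurements. The responses here are invented for illustration:

```python
from collections import Counter

# Invented responses on a 5-point Likert scale. The numeric codes are
# ordinal labels only: nothing guarantees the "distance" between 1 and
# 2 equals the distance between 4 and 5.
labels = {1: "completely agree", 2: "agree", 3: "neutral",
          4: "disagree", 5: "completely disagree"}
responses = [1, 2, 2, 3, 5, 4, 2, 1, 3, 3, 4, 5, 2, 1]

counts = Counter(responses)
for code in sorted(labels):
    print(f"{labels[code]:<20} {counts.get(code, 0)}")
```

The table prints neatly either way; nothing in it warns you that averaging or training on these codes treats them as equal-interval numbers.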
The simple question “Is America bankrupt?” gets all kinds of answers and some will quite correctly give us “In contrast, the financial health of the United States is relatively healthy within the context of the total value of U.S. assets. A much different picture appears once one looks at the underlying asset base of the private and public economy.” I tend to disagree, but that is me without my economics degrees. But in the AI world it is a simple setting of numbers and America needs Greenland and Canada to maintain the claim that “the United States is relatively healthy within the context of the total value of U.S. assets”; without those two places America is likely close to bankrupt and the AI bubble will push them over the edge. At least that is how I see it, and yesterday I gave one case (of the dozen or so cases that will follow in 2026) in which a startup is basically agreeing to a larger than 2 billion settlement. So in what universe does a startup have that money? That is the constriction of AI, and in that setting of unverified and unscaled data the situation gets worse. I remember an answer given to me at a presentation, the answer was “It is what it is” and I kinda accepted it, but an AI will go bonkers and wrong in several ways when that is handed to it. And that is where the difference between AI and NIP (Near Intelligent Parsing) becomes clear. NIP is merely a 90’s chess game that has been taught (trained) every chess game possible and it takes from that setting, but a creative intellect makes an illogical move and the chess game loses whatever coherency it has; that move was never programmed and that is where you see the difference between AI and NIP. A true AI will creatively adjust its setting, the NIP cannot, and that is what will set the stage for all these class actions.
The second setting is ‘human’ error. You see, I placed the Likert scale intentionally, because in between the multitude of 1-5 scales there is likely one variable that was coded 5-1 and the programmers overlooked it, and now when you look at these AI training grounds at least one variable is set in the wrong direction, tainting the others and messing with the order of the adjusted personal scales. And that is before we get to the CLUSTER and QUICKCLUSTER results, where a few more issues are introduced to the algorithm of the entire setting, and that is where the verification of data becomes imperative, and at present it is largely absent.
So here is a sort of random image, but the question it needs to raise is: what makes these different sources in any way qualified to be a source? In this case, if the data in Ask Reddit is skewed, 93% of the data is basically useless and that is missed on a few levels. There are high-quality data sources, but they are few and far between; in the meantime these sources get to warp any other data we have. And if you are merely looking at legacy data, there is still the Likert-scale data your own company had, and that data is debatable at best.
Labels are dangerous and they are inherently based on the designer of that data source (possibly even long dead) and the design tends to be done in his or her early stages of employment, making the setting even more debatable as it was ‘influenced’ by greedy CEOs and CFOs who had their bonus in mind. A setting mostly ignored by all involved.
As such, are you surprised that I see the AI bubble for what it is? A dangerous reality coming our way in sudden, likely unforeseen ways, and it is the ‘unforeseen way’ that is the danger, because when these disgruntled employees talk to those who want to win a class action, all kinds of data will come to the surface and that is how these class actions are won.
It was a simple setting I saw coming a mile away and whilst you wandered by I added the Dr. Strange part; you merely thought you had the labels thought through, but the setting was a lot more dangerous and it is heading straight for your AI dataset. All wrongly thought through, because training data needs to have something verifiable as ‘absolutely true’, and to illustrate this we can merely make a stop at Elon Musk inc. Its ‘AI’ Grok is the almost perfect setting. We are given from one source “The bot has generated various controversial responses, including conspiracy theories, antisemitism, and praise of Adolf Hitler, as well as referring to Musk’s views when asked about controversial topics or difficult decisions.” Which is almost a dangerous setting towards people fueling Grok in a multitude of ways, and ‘Hundreds of thousands of Grok chats exposed in Google results’ (at https://www.bbc.com/news/articles/cdrkmk00jy0o) where we see “The appearance of Grok chats in search engine results was first reported by tech industry publication Forbes, which counted more than 370,000 user conversations on Google. Among chat transcripts seen by the BBC were examples of Musk’s chatbot being asked to create a secure password, provide meal plans for weight loss and answer detailed questions about medical conditions.” Is there anybody willing to do the honors of classifying that data (I absolutely refuse to do so)? I already gave you the headwind in the above story. First of all, how many of these 370,000 users are medical professionals? I think you know where this is going. And I think Grok is pretty neat as a result, but it is not academically useful.
At best it is a new form of Wikipedia, at worst it is a round filing system (trashcan) and even though it sounds nice, it is as nice as labels can be and that is exactly why these class cases will be decided out of court. As I personally see it, when these hit, Microsoft and OpenAI will shell out trillions to settle out of court, because the court damage would be infinitely worse. And that is why I see 2026 as the year the greed driven get to start filling their pockets, because the mental hurt that is brought to court is as academic as a Likert scale, not a scientific setting among them, and the pre-AI setting of mental harm reads as ““Mental damage” in court refers to psychological injury, such as emotional trauma or psychiatric conditions, that can be the basis for legal claims, either as a plaintiff seeking compensation or as a criminal defendant. In civil cases, plaintiffs may seek damages for mental harm like PTSD, depression, or anxiety if they can prove it was caused by another party’s negligent or wrongful actions, provided it results in a recognizable psychiatric illness.” So as you see it, is this enough or do you want more? Oh, screw that, I need coffee now and I have a busy day ahead, so this is all you get for now.
Have a great day, I am trying to enjoy Thursday, Vancouver is a lot behind me on this effort. So there is a time scale we all have to adhere to (hidden nudge) as such enjoy the day.
That is the setting I saw when I took notice of ‘Will quantum be bigger than AI?’ (at https://www.bbc.com/news/articles/c04gvx7egw5o), and there is no real blame to show here. There is no blame on Zoe Kleinman (she is an editor). As I personally see it, we have no AI. What we have is DML and LLM (and combinations of the two); they are great tools and they can get a whole lot done, but it is not AI. Why do I feel this way? The only real version of AI was the one Alan Turing introduced us to and we are not there yet. Three components are missing. The first is Quantum Processing. We have that, but it is still in its infancy. The few true Quantum systems there are are in the hands of Google, IBM and I reckon Microsoft. I have no idea who leads this field but these are the players. Still they need a few things. In the first setting, Shallow Circuits need to evolve. As far as I know (which is not much) they are still evolving. So what is a shallow circuit? Well, a circuit has a number of steps (its depth) to complete a process. Normally, the larger the process, the larger the number of steps. A shallow circuit keeps that depth from growing. To put it in layman’s terms: the process doesn’t grow, it is simplified.
To put this in perspective, let’s take another look. In the 90’s we had B+ trees. In that setting, let’s say we have a register with a million entries. A B-tree lookup goes to the 50% marker: is the record we need above or below that point? Then it takes half of that and does the same query. So where one system (like dBase III+) goes from start to finish, the B-tree lookup goes 0, to 500,000, to 750,000, to 625,000, each probe halving what is left, so a million entries resolve in roughly 20 probes instead of reading up to a million records. This is the speediest setting and it is not foolproof, that record setting is a monster to maintain, but it had benefits. Shallow Circuits have roughly the same benefits (if you want to read up on this, there is something at https://qutech.nl/wp-content/uploads/2018/02/m1-koenig.pdf); it was a collaboration of Robert König with Sergey Bravyi and David Gosset in 2018. And the gist of it is given through “Many locality constraints on 2D HLF-solving circuits” where “A classical circuit which solves the 2D HLF must satisfy all such cycle relations” and the stage becomes “We show that constant-depth locality is incompatible with these constraints” and now you get the first setting that these AI’s we see out there aren’t real AI’s, and that will be the start of several class actions in 2026 (as I personally see it). As far as I can tell, large law firms are suiting up for this as these are potentially trillion dollar money makers (see this as 5 times $200B), as such law firms are on board, for defense and for prosecution. You see, there is another step missing, two steps actually. The first is that this requires a new operating system, one that enables the use of the Epsilon Particle. You see, it will be the end of binary computation and the beginning of trinary computation, which is essential to True AI (I am adopting this phrase to stop confusion). You see, the world is not really Yes/No (or True/False), that is not how True AI or nature works.
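The halving idea described above can be sketched in a few lines of Python: a plain binary search over sorted keys, which is a simplification of what a B+ tree does on disk. The target 624,999 happens to sit on a probe point, so it is found in three probes (0 → 500,000 → 750,000 → 625,000 in the narrative above); the worst case is about log2(1,000,000), roughly 20 probes:

```python
def binary_search(sorted_keys, target):
    """Halve the remaining range on every probe; return (index, probe count)."""
    lo, hi, probes = 0, len(sorted_keys) - 1, 0
    while lo <= hi:
        probes += 1
        mid = (lo + hi) // 2
        if sorted_keys[mid] == target:
            return mid, probes
        if sorted_keys[mid] < target:
            lo = mid + 1        # record is above the marker
        else:
            hi = mid - 1        # record is below the marker
    return -1, probes           # not present

keys = list(range(1_000_000))   # a million sorted record keys
index, probes = binary_search(keys, 624_999)
print(index, probes)            # this key is found in 3 probes; worst case ~20
```

A sequential scan (the dBase III+ style start-to-finish read) would have touched 625,000 records to reach the same key.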
We merely adopted this setting decades ago, because that was what there was and IBM got us there. You see, there is one step missing and it is seen in the setting NULL, TRUE, FALSE, BOTH. NULL means there is no interaction; the action is FALSE, TRUE or BOTH. That is a valid setting, and the people who claim bravely (might be stupidly) that they can do this are the first to fall into these losing class actions. The quantum chip can deal with the premise, but the OS it deals with needs a trinary setting to deal with the BOTH option and that is where the horse is currently absent. As I see it, that stage is likely a decade away (but I could be wrong and I have no idea where IBM is in that setting, as the paper is almost a decade old).
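To show what that NULL/TRUE/FALSE/BOTH setting looks like on paper, here is a purely illustrative Python sketch. The names and the combination rule are my own assumptions, not any real quantum OS; the point is only that a four-valued logic does not fit in a single binary bit:

```python
from enum import Enum

class Q(Enum):
    """Four-valued toy logic: NULL (no interaction), FALSE, TRUE, BOTH."""
    NULL = "null"
    FALSE = "false"
    TRUE = "true"
    BOTH = "both"

def q_and(a: Q, b: Q) -> Q:
    """One plausible 'and': NULL dominates, then FALSE wins, BOTH stays uncertain."""
    if Q.NULL in (a, b):
        return Q.NULL
    if Q.FALSE in (a, b):
        return Q.FALSE
    if Q.BOTH in (a, b):
        return Q.BOTH
    return Q.TRUE

print(q_and(Q.TRUE, Q.BOTH).name)   # BOTH: a binary AND has no slot for this
```

Four states need two bits at minimum, which is the storage point raised further down: a binary carrier has no single cell that holds BOTH.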
But that is the setting I see, so when we go back to the BBC with “AI’s value is forecast in the trillions. But they both live under the shadow of hype and the bursting of bubbles. “I used to believe that quantum computing was the most-hyped technology until the AI craze emerged,” jokes Mr Hopkins.” Fair view, but as I see it the AI bubble is a real bubble with all the dangers it holds as AI isn’t real (at present). Quantum is a real deal and only a few can afford it (hence IBM, Google, Microsoft) and the people who can afford such a system (apart from these companies) are Mark Zuckerberg, Elon Musk, Sergey Brin and Larry Ellison (as far as I know), because a real quantum computer takes up a truckload of energy and the processor (and storage) are massively expensive. How expensive? Well, I don’t think Aramco could afford it, not without dropping a few projects along the way. So you need to be THAT rich, to say the least. To give another frame of reference: “Google unveiled a new quantum chip called Willow, which it claimed could take five minutes to solve a problem that would currently take the world’s fastest super computers 10 septillion years – or 10,000,000,000,000,000,000,000,000 years – to complete.” And that is the setting for True AI, but in this the programming isn’t even close to ready, because this is all problem by problem, all whilst a True AI (like V.I.K.I. in I, Robot) can juggle all these problems in an instant. As I personally see it, that setting is decades away and that is if the previous steps are dealt with. Even so, I oppose the thought “Analysts warned some key quantum stocks could fall by up to 62%” as there is nothing wrong with Quantum computing; as I see it, it is the expectations of the shareholders that are likely wrong. Quantum is solid, but it is a niche without a paddock. Still, whoever holds the Quantum reins will be the first one to hold a true AI and that is worth the worries and the profits that follow.
So whilst I see this article as an eye opener, I don’t really see eye to eye on every side. The writer did nothing wrong. So whilst we might see that Elon Musk was right stating “This week Elon Musk suggested on X that quantum computing would run best on the “permanently shadowed craters of the moon”.” That might work with super magnet drives, quantum locking and a few other settings on the edge of the dark side of the moon. I see some ‘play’ on this, but I have no idea how far this is set and what the data storage systems are (at present) and that is the larger equation here. Because as I see it, trinary data cannot be stored on binary data carriers, no matter how cool it is with liquid nitrogen. And that is at the centre of the pie: how to store it all. Because like the energy constraints and the processing constraints, the tech firms did not really elaborate on this, did they? So how far that is, is anyone’s guess, but I personally would consider (at present, and uneducated) IBM to be the ruling king of the storage systems. But that might be wrong.
So have a great day and consider where your money is, because when these class actions hit, someone wins and it is most likely the lawyer that collects the fees, the rest will lose just like any other player in that town. So how do you like your coffee at present and do you want a normal cup or a quantum thermal?