That is what we look for and I found another setting in something called Airport Technology. You see, we see ‘King Salman International Airport, Saudi Arabia’ (at https://www.airport-technology.com/projects/king-salman-international-airport-saudi-arabia/) and the facts are clear. An airport that covers about 57km², positioning it among the largest airports by footprint, and we are told that “KSIA is expected to handle up to 120 million travelers by 2030, and up to 185 million passengers and 3.5 million tonnes of cargo by 2050”. But I saw more. You see, on the 26th of September I wrote ‘That one idea’ (at https://lawlordtobe.com/2025/09/26/that-one-idea/) where I saw the presentation of a Near Intelligent Parsing (NIP) thought that could revolutionise lost and found settings in airports, in railway stations and a few other places. The instant winners of this idea would be Dubai International, Abu Dhabi International, London Heathrow and several other places, and now also King Salman International Airport (KSIA). I would make some alterations to it all. Instead of entering it all, use PDAs to record the data as it happens and when it is all entered use what they use in Australian hospitals for wristbands, print that data and attach it to whatever is found. If this is properly done, it will be done in mere minutes and within an hour people can look for the items, they could pick them up on the way back, in some cases they could be delivered to their hotel. This would be customer service of a much higher degree. And as I see it, the five airports (namely King Khalid International Airport, King Abdulaziz International Airport, King Salman International Airport, Dubai International Airport and Zayed International Airport) could become the frontrunners to make a Near Intelligent Parsing (NIP) solution (not to be called a solution based on DML/LLM ‘AI’) that could be the next solution for airports all over the world, and there is some personal gratification in seeing America talk about how great their AI solutions are, whilst the little guy in Australia found a solution and hands it over to either Saudi Arabia or the UAE. A solution that was out there in the open and players like Microsoft (Google and Amazon too) merely left it lying on the floor, even though the elements were clearly there, so I hand it over to these two hungry places with the need to see what it can offer them, and in this it isn’t mine. It was presented by Roger Garcia (from Interworks) and the printing setting is already out there. Merely the joining of two solutions and they are done. So as I see it, another folly for Microsoft (honestly, Google and Amazon too). This setting could have been seen by a larger number of players and they all seemingly fell asleep on the job. But I know what Saudis and Emiratis do when they see something that will work for them. They get really active. And so they should.
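To make that intake flow a little more tangible, here is a minimal sketch of how I picture the PDA-to-label step. The names (register, print_label, search) are mine and purely hypothetical; this is not anything Interworks or the airports have published, merely the shape of the idea: record the item on the spot, print a wristband-style label, and have it searchable within the hour.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List
import uuid

@dataclass
class FoundItem:
    item_id: str
    description: str
    location: str          # gate, lounge, carousel, etc.
    found_at: datetime
    label_printed: bool = False

class LostAndFoundRegister:
    """Hypothetical PDA-fed register: record on the spot, label it, search within the hour."""
    def __init__(self):
        self._items: List[FoundItem] = []

    def register(self, description: str, location: str) -> FoundItem:
        item = FoundItem(
            item_id=uuid.uuid4().hex[:8],   # short code to print on the label
            description=description,
            location=location,
            found_at=datetime.now(),
        )
        self._items.append(item)
        return item

    def print_label(self, item: FoundItem) -> str:
        # Stand-in for the wristband-style printer: just returns the label payload.
        item.label_printed = True
        return f"{item.item_id} | {item.description} | {item.location} | {item.found_at:%H:%M}"

    def search(self, keyword: str) -> List[FoundItem]:
        kw = keyword.lower()
        return [i for i in self._items if kw in i.description.lower()]

if __name__ == "__main__":
    register = LostAndFoundRegister()
    item = register.register("black Samsonite cabin bag", "Gate B12")
    print(register.print_label(item))
    print([i.item_id for i in register.search("samsonite")])
```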
And consider that these airports will cater to close to half a billion travelers annually, and as such they will need a much better solution than whatever they have at present, and there is the setting for Interworks. And when these solutions set the station towards delivering what was lost, the quality scores will go skywards and that is the second setting where the west is bottoming out. One presentation took the option from grind to red carpet walking. A setting overlooked by those captains of industry.
Good work guys!
So whilst I start preparing for the next IP thought I am having, there is still some space to counter the US and its flaming EU critique. Let us remind America that the EU was the collection of ideas from American retailers who were tired of dealing with all those currencies, and in the late 80’s AMERICANS decided to sell the Euro to Europeans, all because they couldn’t sort out their currency software (or currency logistics) and now that it starts working against them they cry like little girls. Go cry me a river. In the meantime I will put ideas worth multiple millions online and let it fly for the revenue hungry salespeople (and consultants). In this case it wasn’t my idea, I merely adjusted an idea from Interworks and slapped some IP (owned by others) onto it to make a more robust solution. I merely hope to positively charge my karma for when it matters.
Have a great day, except Vancouver, they are still somewhere yesterday.
That is a setting I never really contemplated, but the Guardian did and they did a terrific job, they even had a reference to the 49’ers, which will make Jeremy Renner happy. The article ‘The question isn’t whether the AI bubble will burst – but what the fallout will be’ by Eduardo Porter (at https://www.theguardian.com/technology/2025/dec/01/ai-bubble-us-economy) hands us a few sides, a few I never considered as I was looking at the techno stuff, but here we see: “300,000 people flocked there from 1848 to 1855, from as far away as the Ottoman Empire. Prospectors massacred Indigenous people to take the gold from their lands in the Sierra Nevada mountains. And they boosted the economies of nearby states and faraway countries from whence they bought their supplies.”
Which gives root to the expression 49’er and it continues giving us “Gold provided the motivation for California – a former Mexican territory then controlled by the US military – to become a state with laws of its own. And yet, few “49ers” as prospectors were known, struck it rich. It was the merchants selling prospectors food and shovels who made the money. One, a Bavarian immigrant named Levi Strauss who sold denim overalls to the gold bugs passing through San Francisco, may be the most remembered figure of his day.”
And then we get the first sliver “How else to explain Nvidia’s stock price, which more than doubled from April to November, based entirely on the expectation, nay hope, that AI will produce a super-intelligence that can do everything humans do but better. Nvidia – like Levi Strauss back in the day – is at least selling something: computer chips. The valuations of many of the other AI plays – like Open AI or Anthropic – are based largely on the dream.”
But there is a missing cog, this technology needs data storage and that is where I saw the failing of others and the failings of those overlooking data technologies. Oracle is intrinsically connected to that, Azure needs it, Snowflake prefers it and pretty much every data vendor is connecting to Oracle to get it all done in the background, and that is the sliver. Oracle is intrinsically connected to it all and it is the tamer of the data beast, or better stated the data demon. As Oracle brings out tools and optionally data settings within their AI storage settings to handle validation and verification, all others will need to adhere better and deeper to the Oracle foundation to even survive. Pretty much all the sources that see the dangers of what some call AI, and what is clearly nothing better than a DML/LLM engine, will see that these two elements are essential to get the LLM engine to do anything that matters and that is where the bonus of Oracle currently resides (as I presumptuously see it). To show this, I will take you back to 1984.
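To be clear, those ‘validation and verification tables’ are my own wish list, not a published Oracle feature, so see the following only as a sketch of the idea in Python, with made-up field names and a made-up source list, showing what I mean by tagging data before it is allowed anywhere near an LLM pipeline.

```python
from datetime import datetime

# Assumption for illustration only: a whitelist of sources we are prepared to trust.
TRUSTED_SOURCES = {"erp_export", "pos_terminal", "sensor_feed"}

def validate_record(record: dict) -> dict:
    """Tag a record with a verification status before it feeds any DML/LLM pipeline."""
    issues = []
    if record.get("source") not in TRUSTED_SOURCES:
        issues.append("untrusted source")
    if record.get("value") is None:
        issues.append("missing value")
    try:
        datetime.fromisoformat(record.get("recorded_at", ""))
    except ValueError:
        issues.append("bad timestamp")
    record["verified"] = not issues
    record["issues"] = issues
    return record

rows = [
    {"source": "erp_export", "value": 42, "recorded_at": "2025-11-24T09:00:00"},
    {"source": "ask_reddit", "value": None, "recorded_at": "sometime"},
]
print([validate_record(r) for r in rows])   # only the first row comes out verified
```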
See here, this is what chess computers looked like. You press the chess piece you want to move and you push the square where it lands. That is the foundation of the chess computer. In the ‘underground’ of that chessboard are (figuratively speaking) two chips. One has the knowledge of chess, the second chip (mainly memory) has every chess match known to mankind (basically all games all grandmasters have ever played), the program sees what moves are made, that setting is translated to a ‘position base’ and it will look at all the matches so it can foresee what moves are coming. This is great for the player, as it now needs to make an illogical move to throw over the thinking of the computer and make it their bitch. This was pretty much the first stage of Machine Learning and even as today’s computers are more clever, their resolution is in no way better. It can only build on the foundation of what it learned, that is the simplicity of knowing that AI doesn’t yet exist.
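As a toy illustration of that ‘position base’ idea (and only that, no real chess program was this crude), consider a lookup that maps every position it has seen to the continuations in its archive. The moment you play an off-book move, the lookup has nothing, which is exactly the gap between parsing the past and actual intelligence.

```python
from collections import defaultdict, Counter

# Toy "position base": a handful of known games reduced to move sequences (made-up data).
KNOWN_GAMES = [
    ["e4", "e5", "Nf3", "Nc6", "Bb5"],
    ["e4", "e5", "Nf3", "Nc6", "Bc4"],
    ["e4", "c5", "Nf3", "d6", "d4"],
]

def build_position_base(games):
    """Map every played prefix to the continuations seen in the archive."""
    base = defaultdict(Counter)
    for game in games:
        for i in range(1, len(game)):
            base[tuple(game[:i])][game[i]] += 1
    return base

def predict_next(base, moves_so_far):
    """Return the most common continuation, or None for an 'illogical' (unseen) line."""
    options = base.get(tuple(moves_so_far))
    return options.most_common(1)[0][0] if options else None

base = build_position_base(KNOWN_GAMES)
print(predict_next(base, ["e4", "e5", "Nf3"]))   # 'Nc6' - seen before, so it 'knows'
print(predict_next(base, ["e4", "a6"]))          # None - off-book, the lookup has nothing
```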
So back to the story “As I pointed out in my last column about AI, Gita Gopinath, former chief economist of the International Monetary Fund, calculated that a stock market crash equivalent to that which ended the dot-com boom would erase some $20tn in American household wealth and another $15tn abroad, enough to strangle consumer spending and induce a recession.” And I have no way of knowing that setting, but as I see it, like Levi Strauss and the makers of bubbles (like in image one), someone has to supply the soap water and more importantly the jeans so one does not put one’s ass out to frolic, and in that second setting Oracle comes in. Even as I see the ‘panic drivers, saying that Oracle is dangerous’, there is another setting. Whatever comes out of this, whatever survives, mostly survives on Oracle solutions. And that is what is left unspoken. Should Oracle add the Validation and Verification tables, they will be the only one raking in the gold when True AI comes, because it is not merely the missing part I discussed earlier, someone needs to set the record straight on what is optionally to be trusted and that is where Oracle sets the mark.
Which leads to “AI could produce a similar landscape. A critical determinant is how much debt is at stake. It wouldn’t be such a problem if the bubble were financed largely from the cash pile of Alphabet and Amazon, Microsoft and Facebook. They might lose their shirt, but who cares. The worrying bit is that it seems they are increasingly relying on borrowing, which means the prospect of a bursting bubble would again put the financial system at risk.” These systems are using the data as currency, as I see it, Oracle is putting its technology up for usage and that is a pretty safe way to do this. This is why I have faith in Oracle, that is why I see Oracle as the one surviving the gold rush like a champion, because they are doing what Levi Strauss did. These data vendors are relying on data to clothe them, but if that data is not properly managed, they end up having nothing. Yes, Microsoft will survive, but at a level that is likely 2 trillion lower than it is now. And that is mainly because it wanted to be on top of things and they got (I think it was) 24% of OpenAI, but as that bursts, Sam Altman will have even less than I have now (and I am ridiculously poor) and that cargo train of debt will hit Microsoft square in the face. Oracle will get some damage, but not nearly as much, and the world will need their data solutions. Why do you think everyone wants to connect to Oracle? It is the Rolls Royce of data collecting and data storage. And that is perhaps the only issue with that article, there is zero mention of Oracle.
So as we get “Big Tech has raised nearly $250bn in debt so far this year, according to Bloomberg, a record. Analysts at Morgan Stanley suggest that debt will be needed to fill a $1.5tn funding gap to ramp up spending on data centers and hardware. Problematically, it is getting hard to follow the money, as Nvidia, Open AI and others in the ecosystem buy into each other, clouding who, in the end, will be left holding the bag.” And there is one thing wrong with this. Stargate is said to be $500bn, so there is a gap in all this and I reckon that the damage will be significantly worse, and that is beside the small unmentioned fact that America at present has 5,427 data centers, how many of them and to what degree are they all set to ‘their version of AI’? So what is set in what some call Blue Owl solutions (like Meta) and what happens when those solutions ‘bubble out’ (collapse might be a better phrase)? When that happens, how much damage will that bring, because as I see it (not wearing glasses) the $1.5tn funding gap won’t even be close to what is required. But that is just me speculating, so feel free (I insist) to get your own verifiable numbers. I reckon that between now and 2029 the return of a backlogged $4 trillion return on investment is required. So taking “a bank’s perspective”, an inaccurate amount of $292,500,000,000 in revenue needs to be shown for that bubble not to come and that is out of the question, but the setting that Eduardo Porter gives us is what comes next and he gives it to us as “the Superhuman – can only come about by dropping LLMs – which are essentially massive correlation engines – and switching to something else called a world model architecture, where machines develop a “mental” model of the outside world.” It is a nice sentiment, but I do not completely agree with that. Correlation engines have their use and there is use in a DML/LLM setting, but identify it as such, do not claim ‘AI does it’. Because it won’t and it can’t, but there are options in Oracle to upgrade the data you have and that is instrumental in surviving this bubble burst. And I have seen the folly in several places and that might set a better station down the road, because when true AI comes, it still needs data and if that data was managed, validated and verified in Oracle (preferably), half the war of that solution bringer is solved.
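For those who want to check the arithmetic behind that number: both the $4 trillion backlog and the rate are my speculation, nothing more, but the back-of-the-envelope below shows how a bank-style annual return of roughly 7.3% on that backlog lands on the $292.5bn figure.

```python
# Rough back-of-the-envelope only: the $4tn backlog and the hurdle rate are speculation, not reported figures.
invested = 4_000_000_000_000          # the backlogged return-on-investment I speculate about
hurdle_rate = 0.073125                # a bank-style annual return; ~7.3% reproduces the figure in the text

required_annual_revenue = invested * hurdle_rate
print(f"required annual revenue: ${required_annual_revenue:,.0f}")   # -> $292,500,000,000
```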
So I need a different hobby, slapping Microsoft and AI evangelists is nice, almost a public service but I need a new idea for gaming IP, because that makes me happy and I like feeling happy. So whilst some think that “Nvidia, Open AI and others in the ecosystem buy into each other” is the hard core evil stuff (and it might be) there is a setting it reminds me of, it was in the 90’s and these ‘consultants’ were all into the need of funny money in the form of assignments, the issue was that when they had to show results they immediately took another job and took their ‘knowhow’ to greener shores and all the time this happened the shores were all becoming less and less green. This has the flair of that setting and to some degree the feel.
I might be wrong on that last part, but that is what I feel on this, especially as the big players are buying into each other’s solutions and handing each other pieces of paper that in the end have as much value as a roll of toilet paper.
It might not be eloquently phrased, but there is a time for that and this is not it, as speculated shit is about to hit the walls and if you are lucky it happens after Christmas (that is almost certain), but in the end the invoice is due and that is where the CFOs will have to show that, as they embraced the Blue Owl solution, their company is saved. I would depend on and side with whatever Oracle has, it is not based on facts, it is a feeling and that feeling is strong at present. And in support I see (9 minutes ago) ‘Ooredoo Qatar announces strategic partnership with Oracle to deploy Oracle Alloy sovereign cloud and AI platform’, they didn’t go towards Microsoft, AWS or a few other settings, they trust Oracle and that is what plenty of others need to do.
Have a great day, I am now 8 hours from midweek, not a bad deal for me today and as the sun is shining brightly, I might hide in a winterly Hogsmeade whilst playing Hogwarts Legacy. Gaming is not a bad hobby to have in this case. Because the bubble is out of my control and I am happy to watch it all explode a day later (or whenever that is), most of the garnish news will have been drowned out by real news at that point.
That is what I saw two days ago when the BBC gave us (at https://www.bbc.com/news/articles/cq8dq47j5y8o) ‘South Africa hits back after Trump says US won’t invite it for G20 next year’. The article gives us the setting “South Africa’s President Cyril Ramaphosa has described as “regrettable” the announcement by US President Donald Trump that South Africa would not be invited to take part in next year’s G20 summit in Florida. In a social media post, Trump said South Africa had refused to hand over the G20 presidency to a US embassy representative at last week’s summit in Johannesburg.” As well as “Ramaphosa said in a statement that the US had been expected to participate in the G20 meetings, “but unfortunately, it elected not to attend the G20 Leaders Summit in Johannesburg out of its own volition”. He however noted that some US businesses and civil society entities were present. He said that since the US delegation was not there, “instruments of the G20 Presidency were duly handed over to a US Embassy official at the Headquarters of South Africa’s Department of International Relations and Cooperation”.” There is, as I personally see it, a second reason. Is the reason perhaps that America is in such a disastrous financial situation that he felt compelled to evade the G20? He can approach the entire setting to the press with ‘Quiet piggy’ settings, but the 15 strongest economies cannot be answered in that same manner. There he has to answer and his department of War and the house of missing coins can’t shield him from that. This year Canada took home the beef, the champagne and the bacon. Next year? That is something he is unwilling to face at present. He needs to be reassured that all the trillions that are changing hands between 7 companies will do him good and at present the setting of Stargate is currently set at an economic windfall of minus 500 billion and that was not what he advertised a year ago and it is merely one of several failures. And at present these 7 big bloated companies are at best bringing in 3% of what is required (an inaccurate presumption), but that setting is what he is looking at and at present there is no upside to the numbers of 2027 and 2028.
The image above was shown on LinkedIn, I never thought of it this way, where we see “The entire U.S. economy right now is seven companies sending one trillion back and forth to each other”. That is how it could be seen (credit of image unknown), but is that GDP revenue? I reckon that some might validly disagree and that is before you consider what OpenAI is costing America and Microsoft (at 3% revenue it isn’t really an asset, is it?)
And beyond that tourism is falling flat, and America is presenting itself as nothing more than a third world country; the United States is likely to rank marginally better than South Africa or Argentina, making it 17th place at best. The GDP setting in December 2024 (which was 29,185 billion dollars) will be seen as a jolly time, by next year America is likely (a clear speculation) to be less than 13,913, making it a little more fortunate than India, which manages this at 5 times the population. Would you gather in that crowd after you proclaimed year after year that America was doing so well? The defense industry is losing revenue, tourism is down massively and that Oxford Economics report states that it is costing America $50 billion, which is 400% worse than the numbers we see thrown in the media. Then jobs are down and as I see it retail is massively down. In addition we see aluminum smelters are down, only 4 in 24 are operating. They cannot deal with the unsustainable operating cost and that list goes on. So what happens when soda cans become an issue? American dream states are set to operate a soda can, opening it and drinking it (in the Miami sun), so I reckon that 2026 will bring its own entertainment to behold and at present I reckon that President Trump is merely showing up to do some photo moments, so who will be ‘advocating’ how well America is doing?
I reckon it sucks to be the man in charge at the Federal Reserve. And only 8 hours ago we were given “Federal Reserve has managed to push up bank reserves for 4 weeks now, but they’re running out of tools in the toolbox and will soon have to resume asset purchases, euphemistically called “QE” for quantitative easing, i.e., money printing:” (source: E.J. Antoni, Ph.D.). So, as we accept that Jerome Powell is (for now) the Chair of the Federal Reserve of the United States, I cannot recall that America has given any voice to the effects (or benefits) of Quantitative Easing. So is it real? What is Jerome Powell up to? It is a fair question as President Trump doesn’t really understand economics, optionally even less than me. As I see it, he filed for bankruptcy 6 times, the last time was due to the 2008 mess, so if people argue 5 times I would accept that. As I see it, he needed to make Jerome Powell his best friend and seek his assistance in avoiding the setting America is facing these days. And my smirking sense of humor (an evil one) is wondering if America can even afford hosting the 2026 G20 summit. As I see it (and I might definitely be wrong), America is using South Africa to get the 2026 setting taken away from them. As I see it, Canada or the EU is a much better place in 2026. There might be a reason to hope for Canada, as he will see it as a reason to make the speculative statement that he is leaving the G20 to his 51st state (making Canadians angry, to say the least).
But as I see it, I actually don’t know. And I reckon that most DML systems cannot either, as this setting has never taken place before, the American economy is in a mess and not a good one.
This is what you call the perfect setting to be hosting the G20 in 2026, apparently in Miami, so order your sodas in advance.
‘Is there more bad news?’ is countered by me with ‘Does there need to be?’ A setting that is voiced by many. As I see it, the gross domestic product (GDP) for the Los Angeles metro area was approximately $1.30 trillion in 2023, now we know that Los Angeles had dreadful fires, but the current situation isn’t helping and what will California report in revenue for 2024 and 2025? We will know some of these numbers in December, giving a lot more visibility to the hardship America is facing and there is no hiding from those numbers (playing them will be worse). America is ceasing to be a great place to be and as I see it, there aren’t too many countries lining up to be their friend at present. Trump squashed that route of healing too.
That is what seems to be happening. The first one was a simple message that Oracle is headed for doom according to Wall Street (I don’t agree with that), but it made me take another look and to make it simpler I will look at the articles chronologically.
The first one was the Wall Street Journal (4 days ago), with ‘Oracle Was an AI Darling on Wall Street. Then Reality Set In’ (at https://www.wsj.com/tech/oracle-was-an-ai-darling-on-wall-street-then-reality-set-in-0d173758) with “Shares have lost gains from a September AI-fueled pop, and the company’s debt load is growing” with the added “Investors nervous about the scale of capital that technology companies are plowing into artificial-intelligence infrastructure rattled stocks this week. Oracle has been one of the companies hardest hit”, but here is the larger setting. As I see it, these stocks are manipulated by others, whomever they are: hedge funds and their influencers and other parties calling for doom, all whilst the setting of the AI bubble is exploited by unknown gratifiers of self. I know that this sounds ominous and non-specific, but there is no way most of us (including people with a much higher degree of economic knowledge than I will ever have) can see through it. And the stage of bubble endearing is out there (especially on Wall Street). Then 14 hours ago we got ‘Oracle (ORCL): Evaluating Valuation After $30B AI Cloud Win and Rising Credit Risk Concerns’ (at https://simplywall.st/stocks/us/software/nyse-orcl/oracle/news/oracle-orcl-evaluating-valuation-after-30b-ai-cloud-win-and/amp) where we see “Recent headlines have only amplified the spotlight on Oracle’s cloud ambitions, but the past few months have been rocky for its share price. After a surge tied to AI-driven optimism, Oracle’s 1-month share price return of -29.9% and a year-to-date gain of 19.7% tell the story: momentum has faded sharply in the near term. However, the 1-year total shareholder return still sits at 4.4% and its five-year total return remains a standout at nearly 269%. This combination of volatility and long-term outperformance reflects a market grappling with Oracle’s rapid strategic shift, balance sheet risks, and execution on new contracts.” I am not debating the numbers, but no one is looking at the technology behind this. As I see it, places like Snowflake and Oracle have the best technology for these DML and LLM solutions (OK, there are a few more) and for now, whomever has the best technology will survive the bubble, and whomever is betting on that AI bubble going their way needs Oracle at the very least and not in a weakened state, but that is merely my point of view. So last we get the Motley Fool a mere 7 hours ago giving us ‘Billionaire David Tepper Dumped Appaloosa’s Stake in Oracle and Is Piling Into a Sector That Wall Street Thinks Will Outperform’ (at https://www.fool.com/investing/2025/11/23/billionaire-david-tepper-dumped-appaloosas-stake-i/) where we see “Billionaire David Tepper’s track record in the stock market is nothing short of remarkable. According to CNBC, the current owner of the Carolina Panthers pro football team launched his hedge fund Appaloosa Management in 1993 and generated annual returns of at least 25% for decades. Today, Tepper still runs Appaloosa, but it is now a family office, where he manages his own wealth.” Now we get the crazy stuff (this usually happens when I speculate). So this gives us a person like David Tepper who might like to exploit Oracle to make it seem more volatile and exploit a shorting of the stock to make himself (a lot) richer. And when clever people become self managing, they tend to listen to their darker nature.
Now I could be all wrong, but when Wall Street is going after one of the most innovative and secure companies on the planet just to satisfy the greed of Wall Street, I get to become a little agitated. So could it all be that Oracle was drawn into the ‘fad’ and lost it? No, they clearly stated that there would be little return until 2028, a decent prognosis, and with the proper settings of DML and LLM, finding better and profitable revenue making streams by 2027 is a decent target to have and it is seemingly an achievable one. In the meantime IBM can figure out (evolve) their shallow circuits and start working on their trinary operating system. I have no idea where they are at present, but the idea of this getting ready for a 2040 release is not out of the question. In the meantime Oracle can fill the void for millions of corporations that already have data, warehouses and form settings. And there are plenty of other providers of data systems.
So when we are given “The tech company Oracle is not one of the “Magnificent Seven,” but it has emerged as a strong beneficiary of artificial intelligence (AI), thanks to its specialized data centers that contain huge clusters of graphics processing units (GPUs) to train large language models (LLMs) that power AI.
In September, the company reported strong earnings for the first quarter of its fiscal 2026, along with blowout guidance. Remaining performance obligations increased 359% year over year to $455 billion, as it signed data center agreements with major hyperscalers, including OpenAI.”
So whilst we see “Oracle is not one of the “Magnificent Seven,” but it has emerged as a strong beneficiary of artificial intelligence (AI)” we need to take a different look at this. Oracle was never a strong beneficiary of AI, it was a strong vendor with data technologies, and AI is about data, and in all of this someone is ‘fitting’ Oracle into a stage that everyone just blatantly accepts without asking too many questions (example: the media). With the additional “to train large language models (LLMs) that power AI”, the hidden gem is in the second statement. AI and LLM are not the same, you only partially train real AI, this is different and those ‘magnificent seven’ want you to look away from that. So, when was the last time that you actually read that AI does not yet exist? That is the created bubble and players like Oracle are indifferent to this, unless you spike the game. It has stocks, it has options and someone is turning influencers to their own greedy use. And I object to this, Oracle has proven itself for decades, longer than players like Microsoft and Google. So when we see ‘Buying the sector that Wall Street is bullish on’ we see another hidden setting. The bullishness of Wall Street. Do you think they don’t know that AI is a non-existing setting? So why go after the one technology that will make data work? That setting is central in all this and I object to those who go after Oracle. So when you answer the call of reality, consider who is giving you the AI setting and who is giving you the DML/LLM stage of a data solution that can help your company.
Have a great day, we are seemingly all on Monday at present.
OK, I am over my anger spat from yesterday (still growling though) and in other news I noticed that Grok (Musk’s baby) cannot truly deal with multidimensional viewpoints, which is good to know. But today I tried to focus on Oracle. You know, whatever AI bubble hits us (and it will), Oracle shouldn’t be as affected as some of the data vendors who claim that they have the golden AI child in their crib (a good term to use a month before Christmas). I get that some people are ‘sensitive’ to the doom speakers we see all over the internet and some will dump whatever they have to ‘secure’ what they have, but the setting of those doom speakers is to align THEIR alleged profit needs with others dumping their future. I do not agree. You see, Oracle, Snowflake and a few others offer services and they are captured by others. Snowflake has a data setting that can be used whether AI comes or not, whether people need it or not. And they will be hurt when the firms go ‘belly up’ because it will count as lost revenue. But that is all it is, lost revenue. And yes, both will be hurting when the AI bubble comes crashing down on all of us. But the stage that we see is that they will skate off the dust (in one case snow) and that is the larger picture. So I took a look at Oracle and behold, on Simply Wall Street we get ‘Oracle (ORCL) Is Down 10.8% After Securing $30 Billion Annual Cloud Deal – Has The Bull Case Changed?’ (at https://simplywall.st/stocks/us/software/nyse-orcl/oracle/news/oracle-orcl-is-down-108-after-securing-30-billion-annual-clo) with these sub-line points:
Oracle recently announced a major cloud services contract worth US$30 billion annually, set to begin generating revenue in fiscal 2028 and nearly tripling the size of its existing cloud infrastructure business.
This deal offers Oracle significantly greater long-term growth visibility and serves as a major endorsement of the company’s aggressive cloud and artificial intelligence strategy, even as investors remain focused on rising debt and credit risks.
We’ll examine how this multi-billion-dollar cloud contract could reshape Oracle’s investment narrative, particularly given its bold AI infrastructure expansion.
So they triple their ‘business’ and they lose 10.8%? It leads to questions. As I personally see it, Wall Street is trying to insulate themselves from the bubble that other (mostly) software vendors bring to the table. And Simply Wall Street gives us “To believe in Oracle as a shareholder right now is to trust in its transformation into a major provider of cloud and AI infrastructure to sustain growth, despite high debt and reliance on major AI customers. The recent announcement of a US$30 billion annual cloud contract brings welcome long-term visibility, but it does not change the near-term risk: heavy capital spending and dependence on sustained AI demand from a small set of large clients remain the central issues for the stock.” And I can get behind that train of thought, although I think that Oracle and a few others are decently protected from that setting. No matter how the non-existent AI goes, DML needs data and data needs secure and reliable storage. So in comes Oracle in plenty of these places and they do their job. If 90% of that business goes boom, they will already have collected on these service terms for that year at least, 3-5 years if they were clever. So no biggy, collecting on 3-5 years is collected revenue, even if that firm goes bust after 30 days, they might get over it (not really).
And then we get two parts “Oracle Health’s next-generation EHR earning ONC Health IT certification stands out. This development showcases Oracle’s commitment to embedding AI into essential enterprise applications, which supports a key catalyst: broadening the addressable market and stickiness of its cloud offerings as adoption grows across sectors, particularly healthcare. In contrast, investors should be aware that the scale of Oracle’s capital commitment brings risks that could magnify if…” OK, I am on board with these settings. I kinda disagree, but then I lack economic degrees and a few people I do know will completely see this part. You see, I personally see “Oracle’s commitment to embedding AI into essential enterprise applications” as a plus all across the board. Even if I do believe that AI doesn’t exist, the data will be coming and when it is ironed out, Oracle was ready from the get go (when they translate their solutions to a trinary setting) and I do get (but personally disagree with) “the scale of Oracle’s capital commitment brings risks that could magnify if”. Yes, there is risk, but as I see it Oracle brings a solution that is applicable to this frontier, even if it cannot be used to its full potential at present. So there is a risk, but when these vendors pay 5 years upfront, it becomes instant profit at no use of their clouds. You get a cloud with a population of 15 million, but it is inhabited by 1.5 million. As such they have a decade of resources to spare. I know that things are not that simple and there is more, but what I am trying to say is that there is a level of protection that some have and many will not. Oracle is on the good side of that equation (as is Snowflake, Azure, iCloud, Google Gemini and whatever IBM has; oh, and the chips of nVidia are also decently safe until we know how Huawei is doing).
And the setting we are also given, “Oracle’s outlook forecasts $99.5 billion in revenue and $25.3 billion in earnings by 2028. This is based on annual revenue growth of 20.1% and an earnings increase of $12.9 billion from current earnings of $12.4 billion”, matters as Oracle is predicting that revenue comes calling in 2028, so anyone trying to dump their stock now is as stupid as they can be. They are telling their shareholders that for now revenue is thimble sized, but after 2028, which is basically 24 months away, the big guns come calling and the revenue pie is being shared with its shareholders. So you do need brass balls to do this and you should not do this with your savings, that is where hedge funds come in, but the view is realistic. The other day I saw Snowflake use DML in the most innovative way (one of their speakers showed me a new lost and found application) and it was groundbreaking. Considering the amount of lost and found out there at airports and bus stations, they showed me how a setting of a month was reduced to a 10 minute solution. As I saw it, that places like Dubai, London and Abu Dhabi airports could make this beneficial for their 90 million passengers is almost unheard of and I am merely mentioning three of dozens upon dozens of needy customers all over the world. A direct consequence of ‘AI’ particulars (I still think it is DML with LLM), but no matter the label, it is directly applicable to whomever has such a setting and whilst we see the stage of ‘most usage fails in its first instance’, this is not one of them and as such in those places Oracle/Snowflake is a direct win. A simple setting that has groundbreaking impact. So where is the risk there? I know places have risks, but to see this simple application work shows that some are out there fighting the good fight on an achievable setting and no IP was trained upon and no class actions are to follow. I call that a clear win.
So, before you sell your stock in Oracle like a little girl, consider what you have bought and consider who wants you to sell, and why, because they are not telling you this for your sake, they have their own sake. I am not telling you to sell anything. I am merely telling you to consider what you bought and what actual risks you are running if you sell before 2029. It is that simple.
Have a great day (yes Americans too, I was angry yesterday). These bastards in Vancouver and Toronto are still enjoying their Saturday.
That is the setting and I introduced the readers to this setting yesterday, but there was more and there always is. Labels are how we tend to communicate, there is the label of ‘Orange baboon’, there is the label of ‘village idiot’ and there are many more labels. They tend to make life ‘easy’ for us. They are also the hidden trap we introduce to ourselves. In the ‘old’ days we even signified Business Intelligence by this, because it was easy for the people running these things.
An example can be seen in
TABLES / v1 v2 v3 v4 v5 BY (LABELS) / count.
And we would see the accommodating table with on one side completely agree, agree, neutral, disagree and completely disagree, if that was the 5 point labeling setting we embraced, and as such we saw a ‘decently’ complete picture and we all agreed that this was how it had to be.
But the not so hidden snag is that, in the first place, these labels are ordinal (at best) and the setting of Likert scales (their official name) is not set in a scientific way, there is no equally adjusted difference between the numbers 1, 2, 3, 4, 5. That is just the way it is. And in the old days this was OK (as the feeling went). But today, in what some call the AI setting and I call NIP at best, the setting is too dangerous. Now, set this by ‘today’s’ standards.
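A tiny sketch of what I mean, using made-up survey answers: two respondents can share exactly the same numeric mean while saying very different things, because the 1 to 5 codes are labels, not measurements with equal distances between them.

```python
import statistics

LABELS = {1: "completely agree", 2: "agree", 3: "neutral", 4: "disagree", 5: "completely disagree"}

# Two respondents with the *same* numeric mean but very different opinions (invented data).
respondent_a = [3, 3, 3, 3, 3]   # consistently neutral
respondent_b = [1, 5, 1, 5, 3]   # swings between the extremes

for name, answers in [("A", respondent_a), ("B", respondent_b)]:
    print(name, statistics.mean(answers), [LABELS[a] for a in answers])
# Both means come out as 3.0, yet only the labels show that the answers mean different things:
# the 1..5 codes are ordinal labels, not equally spaced measurements.
```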
The simple question “Is America bankrupt?” gets all kinds of answers and some will quite correctly give us “In contrast, the financial health of the United States is relatively healthy within the context of the total value of U.S. assets. A much different picture appears once one looks at the underlying asset base of the private and public economy.” I tend to disagree, but that is me without my economic degrees. But in the AI world it is a simple setting of numbers and America needs Greenland and Canada to continue the pretence that “the United States is relatively healthy within the context of the total value of U.S. assets”, yes that would be the setting, but without those two places America is likely around bankrupt and the AI bubble will push them over the edge. At least that is how I see it and yesterday I gave one case (of the dozen or so cases that will follow in 2026), in that stage this startup is basically agreeing to a larger than 2 billion settlement. So in what universe does a startup have this money? That is the constriction of AI, and in that setting of unverified and unscaled data the situation gets to be worse. And I remember an answer given to me at a presentation, the answer was “It is what it is” and I kinda accepted it, but an AI will go bonkers and wrong in several ways when that is handed to it. And that is where the setting of AI and NIP (Near Intelligent Parsing) becomes clear. NIP is merely a 90’s chess game that has been taught (trained) every chess game possible and it takes from that setting, but the creative intellect makes an illogical move and the chess game loses whatever coherency it has, that move was never programmed and that is where you see the difference between AI and NIP. The AI will creatively adjust its setting, the NIP cannot and that is what will set the stage for all these class actions.
The second setting is ‘human’ error. You see, I placed the Likert scale intentionally, because in between the multitude of 1-5 scales there is one likely variable that was set to 5-1 and the programmers overlooked it, and now when you see these AI training grounds at least one variable is set in the wrong direction, tainting the others and messing with the order of the adjusted personal scales. And that is before we get to the result of CLUSTER and QUICKCLUSTER results, where a few more issues are introduced to the algorithm of the entire setting, and that is where the verification of data becomes imperative.
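To show how much damage one overlooked reversed variable does, here is a minimal sketch with invented survey data: the same answers, with one column accidentally coded 5 to 1 instead of 1 to 5, flip a strong positive correlation into a strong negative one, and anything clustered or trained on top of it inherits that flip.

```python
def pearson(x, y):
    """Plain Pearson correlation, no libraries needed."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

satisfaction = [1, 2, 2, 4, 5, 5]             # coded 1 = completely agree ... 5 = completely disagree
loyalty      = [1, 1, 2, 4, 4, 5]             # coded in the same direction
reversed_sat = [6 - v for v in satisfaction]  # the same answers accidentally coded 5..1

print(pearson(satisfaction, loyalty))   # strongly positive, as expected
print(pearson(reversed_sat, loyalty))   # flips to strongly negative: one overlooked variable poisons the model
```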
So here is a sort of random image, but the question it needs to raise is what makes these different sources in any way qualified to be a source? In this case, if the data is skewed in Ask Reddit, 93% of the data is basically useless and that is missed on a few levels. There are high quality data sources, but these are few and far between, and in the meantime these sources get to warp any other data we have. And if you are merely looking at legacy data, there is still the Likert scale data your own company had and that data is debatable at best.
Labels are dangerous and they are inherently based on the designer of that data source (possibly even long dead) and it tends to be done in his or her early stages of employment, making the setting even more debatable as it was ‘influenced’ by greedy CEOs and CFOs and they had their bonus in mind. A setting mostly ignored by all involved.
As such, are you surprised that I see the AI bubble for what it is? A dangerous reality coming our way in sudden, likely unforeseen ways and it is the ‘unforeseen way’ that is the danger, because when these disgruntled employees talk to those who want to win a class action, all kinds of data will come to the surface and that is how these class actions are won.
It was a simple setting I saw coming a mile away and whilst you wandered by I added the Dr. Strange part, you merely thought you had the labels thought through but the setting was a lot more dangerous and it is heading straight to your AI dataset. All wrongly thought through, because training data needs to have something verifiable as ‘absolutely true’ and that is the true setting, and to illustrate this we can merely make a stop at Elon Musk inc. Its ‘AI’ Grok has the almost perfect setting. We are given from one source “The bot has generated various controversial responses, including conspiracy theories, antisemitism, and praise of Adolf Hitler, as well as referring to Musk’s views when asked about controversial topics or difficult decisions.” Which is almost a dangerous setting towards people fueling Grok in a multitude of ways, and ‘Hundreds of thousands of Grok chats exposed in Google results’ (at https://www.bbc.com/news/articles/cdrkmk00jy0o) where we see “The appearance of Grok chats in search engine results was first reported by tech industry publication Forbes, which counted more than 370,000 user conversations on Google. Among chat transcripts seen by the BBC were examples of Musk’s chatbot being asked to create a secure password, provide meal plans for weight loss and answer detailed questions about medical conditions.” Is there anybody willing to do the honors of classifying that data (I absolutely refuse to do so)? I already gave you the headwind in the above story. In the first, how many of these 370,000 users are medical professionals? I think you know where this is going. And I think Grok is pretty neat as a result, but it is not academically useful. At best it is a new form of Wikipedia, at worst it is a round data system (trashcan) and even though it sounds nice, it is as nice as labels can be and that is exactly why these class cases will be decided out of court and, as I personally see it, when these hit, Microsoft and OpenAI will shell out trillions to settle out of court, because the court damage will be infinitely worse. And that is why I see 2026 as the year the greed driven get to start filling their pockets, because the mental hurt that is brought to court is as academic as a Likert scale, not a scientific setting among them, and the pre-AI setting of mental harm is given as ““Mental damage” in court refers to psychological injury, such as emotional trauma or psychiatric conditions, that can be the basis for legal claims, either as a plaintiff seeking compensation or as a criminal defendant. In civil cases, plaintiffs may seek damages for mental harm like PTSD, depression, or anxiety if they can prove it was caused by another party’s negligent or wrongful actions, provided it results in a recognizable psychiatric illness.” So as you see it, is this enough or do you want more? Oh, screw that, I need coffee now and I have a busy day ahead, so this is all you get for now.
Have a great day, I am trying to enjoy Thursday, Vancouver is a lot behind me on this effort. So there is a time scale we all have to adhere to (hidden nudge) as such enjoy the day.
This is where I am, lost in thought. Torn between my personal conviction that the AI bubble is real and the set fake thoughts on LinkedIn and YouTube making ‘their’ case on the AI bubble. One is set on thoughts of doubt considering the technology we are currently at, the other thoughts are all fake perceptions by influencers trying to gain a following. So how can anyone get any thought straight? Yet in all this there are several people in doubt on their own set (justified) fringes. One of them is the ABC, who gives us ‘US risks AI debt bubble as China faces its ‘arithmetic problem’, leading analysts warn’ (at https://www.abc.net.au/news/2025-11-11/marc-sumerlin-federal-reserve-michael-pettis-china/105992570). So in the first setting, what is the US doing with the AI debt? Didn’t they learn their lesson in 2008? In the first setting we get “Mr Sumerlin says he is increasingly worried about a slowing economy and a debt bubble in the artificial intelligence sector.” That is fair (to a certain degree), a US Federal Reserve chair contender has the economic settings, but as I look back to 2008, that game put hundreds of thousands on the brink of desperation and now it isn’t a boom of CDOs and stocks. Now it is a dozen firms who will demand an umbrella from that same Federal Reserve to stay in business. And Mr. Sumerlin gives us “He is increasingly concerned about a slowdown in the US economy, which is why he thinks the Fed needs to cut interest rates again in December and perhaps a couple more times next year.” I cannot comment on that, but it sounds fair (I lack economic degrees) and outside of this AI bubble setting we are given “US President Donald Trump has recently posted on his social media account about giving all Americans not on high incomes, a $US2,000 tariff “dividend” — an idea which Mr Sumerlin, a one-time economic adviser to former US president George W Bush, said could stoke inflation.” I get it, but it sounds unfair, the idea that an AI bubble is forming is real, the setting that people get a dividend that could stoke inflation might be real (they didn’t get the money yet), but they are unrelated inflation settings and they could give a much larger rise to the dangers of the AI bubble, but that doesn’t make it so. The bubble is already real because the technology is warped and the class cases we will see coming in 2026 are based on ‘allegedly fraudulent’ sales towards the AI setting, and if you wonder what happens, it is that these firms buying into that AI solution will cry havoc (no return on AI investment) when that happens, and it will happen, of that I have very little doubt.
So then we get to the second setting and that is the claim that ‘China has an arithmetic problem’. I am at a loss as to what they mean and the ABC explanation is “But if you have a GDP growth target, and you can’t get consumption to grow more quickly, you can’t allow investment to grow more slowly because together they add up to growth. They’re over-invested almost across the board, so policy consists of trying to find out which sectors are least likely to be harmed by additional over-investment.”
Professor Pettis said that, to curry favour with the central government, local governments had skewed over-investment into areas such as solar panels, batteries, electric vehicles and other industries deemed a priority by Beijing.” This kinda makes sense to me, but as I see it, that is an economic setting, not an AI setting. What I think is happening is that both the USA and China have their own bubble settings and these bubbles will collide in the most unfortunate ways possible.
But there is also a flip side. As I see it, Huawei is chasing their own AI dream in a novel way that relies on a mere fraction of what the west needs, and as I see it, the west will be coming up short soon, a setting that Huawei is not facing at present, and as I see it, they will be rolling out their centers in multiple ways when the western settings will be running out of juice (as the expression goes).
Is this going to happen? I think so, but it depends on a number of settings that have not played out yet, so the fear is partially too soon and based on too little information. But on the side I have been powering my brain on another setting. As time goes by I have been thinking through the third Dr. Strange movie and here I had a novel idea which could give us a nice setting where the strain is between too rigid and too flexible and it is a (sort of) stage between Dr. Strange (Benedict Cumberbatch) and Baron Mordo (Chiwetel Ejiofor). The idea was to set the given stage of being too rigid (Mordo) against overly flexible (Strange) and in-between are the settings of Mordo’s African village, and as Mordo is protecting them we see the optional setting that Kraven (Aaron Taylor-Johnson) gets involved and that gets Dr. Strange in the mix. The nice setting is that neither is evil, they tend to fight evil and it is the label that gets seen. Anyway, that was a setting I went through this morning.
You might wonder why I mentioned this. You see, bubbles are just as much labels as anything and it becomes a bubble when asset prices surge rapidly, far exceeding their intrinsic value, often fueled by speculation and investor orgasms. This is followed by a sharp and sudden market crash, or “burst,” when prices collapse, leading to significant, rather weighty losses for investors. And they will then cry like little girls over the losses in their wallets. But that too is a label. Just like an IT bubble, the players tend to be rigid and wholly focused on their profits and they tend to go with the ‘roll with it’ philosophy and that is where the AI is at present, they don’t care that the technology isn’t ready yet and they do not care about DML and LLM and they want to program around the AI negativity, but that negativity could be averted in larger streams when proper DML information is given to the customers, and they dug their own graves here as the customer demands AI, they might not know what it is (but they want it) and they learned in comic books what AI was, and they embrace that. Not the reality given by Alan Turing, but what the comic books fed them through Brainiac. And there is an overlap of what is perceived and what is real and that is what will fuel the AI bubble towards implosion (a massive one) and I personally reckon that 2026 will fuel it through the class actions and the beginning is already here. As the Conversation hands us “Anthropic, an AI startup founded in 2021, has reached a groundbreaking US$1.5 billion settlement (AU$2.28 billion) in a class-action copyright lawsuit. The case was initiated in 2024 by novelist Andrea Bartz and non-fiction writers Charles Graeber and Kirk Wallace Johnson.” Which we get from ‘An AI startup has agreed to a $2.2 billion copyright settlement. But will Australian writers benefit?’ (at https://theconversation.com/an-ai-startup-has-agreed-to-a-2-2-billion-copyright-settlement-but-will-australian-writers-benefit-264771) less than 6 weeks ago. And the entire AI setting has a few more class actions coming their way. So before you judge me on being crazy (which might be fair too), the news is already out there, the question is what lobbyists are quieting down the noise because that is noise according to their elected voters. You might wonder how one affects the other. Well, that is a fair question, but it holds water, as these so called AIs (I call them Near Intelligent Parsers, or NIP) require training materials and when the materials are thrown out of the stage, there is no learning and no half baked AI will hold its own water and that is what is coming.
A simple setting that could be seen by anyone who looked at the technology to the degree it had to be looked at. Have a great day this midweek day.
That is the setting that I saw when I took notice of ‘Will quantum be bigger than AI?’ (at https://www.bbc.com/news/articles/c04gvx7egw5o), now there is no real blame to show here. There is no blame on Zoe Kleinman (she is an editor). As I personally see it, we have no AI. What we have is DML and LLM (and combinations of the two), they are great tools and they can get a whole lot done, but it is not AI. Why do I feel this way? The only real version of AI was the one Alan Turing introduced us to and we are not there yet. Three components are missing. The first is Quantum Processing. We have that, but it is still in its infancy. The few true Quantum systems there are are in the hands of Google, IBM and I reckon Microsoft. I have no idea who leads this field but these are the players. Still they need a few things. In the first setting Shallow Circuits need to be evolved. As far as I know (which is not much), they are still evolving. So what is a shallow circuit? Well, you have a number of steps to degrade the process. The larger the process, the larger the steps. Shallow circuits make this easier. To put it in layman’s terms: the process doesn’t grow, it is simplified.
To put this in perspective, let’s take another look. In the 90’s we had B+ trees. In that setting, let’s say we have a register with a million entries. In a B-tree lookup it goes to the 50% marker: was the record we needed further or less than that? Then it takes half of that and does the same query. So as one system (like dBase III+) goes from start to finish, the B-tree lookup goes 0 to 500,000 to 750,000 to 625,000. As such, in a handful of steps it has closed in on the record without reading the hundreds of thousands of records before it. This is the speediest setting and it is not foolproof, that record setting is a monster to maintain, but it had benefits. Shallow Circuits have roughly the same benefits (if you want to read up on this, there is something at https://qutech.nl/wp-content/uploads/2018/02/m1-koenig.pdf), it was a collaboration of Robert König with Sergey Bravyi and David Gosset in 2018. And the gist of it is given through “Many locality constraints on 2D HLF-solving circuits” where “A classical circuit which solves the 2D HLF must satisfy all such cycle relations” and the stage becomes “We show that constant-depth locality is incompatible with these constraints” and now you get the first setting that these AI’s we see out there aren’t real AI’s and that will be the start of several class actions in 2026 (as I personally see it) and as far as I can tell, large law firms are suiting up for this as these are potentially trillion dollar money makers (see this as 5 times $200B), as such law firms are on board, for defense and for prosecution. You see, there is another step missing, two steps actually. The first is that this requires a new operating system, one that enables the use of the Epsilon Particle. You see, it will be the end of Binary computation and the beginning of Trinary computations which are essential to True AI (I am adopting this phrase to stop confusion). You see, the world is not really Yes/No (or True/False), that is not how True AI or nature works. We merely adopted this setting decades ago, because that was what there was and IBM got us there. You see, there is one step missing and it is seen in the setting NULL, TRUE, FALSE, BOTH. NULL is that there are no interactions, the action is FALSE, TRUE or BOTH, that is a valid setting and the people who claim bravely (might be stupidly) that they can do this are the first to fall into these losing class actions. The quantum chip can deal with the premise, but the OS it deals with needs to have a trinary setting to deal with the BOTH option and that is where the horse is currently absent. As I see it, that stage is likely a decade away (but I could be wrong and I have no idea where IBM is in that setting as the paper is almost a decade old).
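A minimal sketch of that halving idea, with a made-up sorted register of a million entries (this is only the lookup principle, not a real B-tree and certainly not a shallow circuit): the sequential scan touches hundreds of thousands of records, the halving search needs around twenty comparisons.

```python
def linear_scan(records, target):
    """dBase III+-style scan: walk from the start until the record is found."""
    steps = 0
    for value in records:
        steps += 1
        if value == target:
            return steps
    return steps

def halving_search(records, target):
    """B-tree-style lookup on sorted data: halve the remaining range each step."""
    steps, lo, hi = 0, 0, len(records) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if records[mid] == target:
            return steps
        if records[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

records = list(range(1_000_000))          # a sorted register with a million entries
print(linear_scan(records, 625_000))      # 625,001 comparisons
print(halving_search(records, 625_000))   # about 20 comparisons
```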
But that is the setting I see, so when we go back to the BBC with “AI’s value is forecast in the trillions. But they both live under the shadow of hype and the bursting of bubbles. “I used to believe that quantum computing was the most-hyped technology until the AI craze emerged,” jokes Mr Hopkins.” Fair view, but as I see it the AI bubble is a real bubble with all the dangers it holds, as AI isn’t real (at present). Quantum is a real deal and only a few can afford it (hence IBM, Google, Microsoft) and the people who can afford such a system (apart from these companies) are Mark Zuckerberg, Elon Musk, Sergey Brin and Larry Ellison (as far as I know), because a real quantum computer takes up a truckload of energy and the processor (and storage) are massively expensive. How expensive? Well, I don’t think Aramco could afford it, not without dropping a few projects along the way. So you need to be THAT rich to say the least. To give another frame of reference “Google unveiled a new quantum chip called Willow, which it claimed could take five minutes to solve a problem that would currently take the world’s fastest super computers 10 septillion years – or 10,000,000,000,000,000,000,000,000 years – to complete.” And that is the setting for True AI, but in this the programming isn’t even close to ready, because this is all problem by problem, all whilst a True AI (like V.I.K.I. in I, Robot) can juggle all these problems in an instant. As I personally see it, that setting is decades away and that is if the previous steps are dealt with. Even as I oppose the thought “Analysts warned some key quantum stocks could fall by up to 62%”, there is nothing wrong with Quantum computing, as I see it, it is the expectations of the shareholders which are likely wrong. Quantum is solid, but it is a niche without a paddock. Still, whoever holds the Quantum reins will be the first one to hold a true AI and that is worth the worries and the profits that follow.
So whilst I see this article as an eye opener, I don’t entirely see eye to eye with it; the writer did nothing wrong. So we might see that Elon Musk was onto something when we read “This week Elon Musk suggested on X that quantum computing would run best on the “permanently shadowed craters of the moon”.” That might work with super magnet drives, quantum locking and a few other settings on the edge of the dark side of the moon, and I see some ‘play’ on this, but I have no idea how far this is set and what the data storage systems are (at present), and that is the larger equation here. Because as I see it, trinary data cannot be stored on binary data carriers, no matter how cool it gets with liquid nitrogen. And that is at the centre of the pie: how to store it all. Because, like the energy constraints and the processing constraints, the tech firms did not really elaborate on this, did they? So how far along that is, is anyone’s guess, but I would personally consider (at present, and uneducated) IBM to be the ruling king of the storage systems. But that might be wrong.
So have a great day and consider where your money is, because when these class actions hit, someone wins and it is most likely the lawyer that collects the fees; the rest will lose, just like any other player in that town. So how do you like your coffee at present, and do you want a normal cup or a quantum thermal?
I was having a ball this morning. I was alerted to an article that was published 11 hours ago, and that makes all the difference, in particular the setting of me telling all others “Told you so”. So as we start seeing the crumbling reality of a bubble coming to pass, I get to laugh at the people calling me stupid. You see, Tom’s Hardware is giving us (at https://www.tomshardware.com/tech-industry/artificial-intelligence/microsoft-ceo-says-the-company-doesnt-have-enough-electricity-to-install-all-the-ai-gpus-in-its-inventory-you-may-actually-have-a-bunch-of-chips-sitting-in-inventory-that-i-cant-plug-in) ‘Microsoft CEO says the company doesn’t have enough electricity to install all the AI GPUs in its inventory’, so there I was (with a few critical minds) telling you all that there isn’t enough energy to fuel these data centers (like Stargate), and now Microsoft (as I personally see it, king of the losers) is confirming that setting. So do you think this (for now) multi trillion dollar company cannot pay its energy bill, or is it scraping the bottom of the energy well? And when we come to think of that, when the roughly 200,000 people laid off globally (not just by Microsoft) are out of work and there is no energy to fuel their (alleged) AI drive, how far behind is the recession that ends all recessions in America? It might not be the Great Depression, which left nearly 15 million Americans, or about 25% of the workforce, unemployed. But the trickle effects are a lot bigger now, and when that much goes overboard, American social security will take a massive beating.
So, as I have been stating this lack of energy for months (at least months), we are given “Microsoft CEO Satya Nadella said during an interview alongside OpenAI CEO Sam Altman that the problem in the AI industry is not an excess supply of compute, but rather a lack of power to accommodate all those GPUs. In fact, Nadella said that the company currently has a problem of not having enough power to plug in some of the AI GPUs the firm has in inventory. He said this on YouTube in response to Brad Gerstner, the host of Bg2 Pod, when asked whether Nadella and Altman agreed with Nvidia CEO Jensen Huang, who said there is no chance of a compute glut in the next two to three years.” Oh, didn’t I say so a few times? Oh, yes. On January 31st 2024 I wrote “When the UAE engages with that solution, America will come up short in funds and energy. So the ‘suddenly’ setting wasn’t there. This has been out in the open for up to 4 years. And that picture goes from bad to worse soon enough.” I did so in ‘Forbes Foreboding Forecast’ (at https://lawlordtobe.com/2024/01/31/forbes-foreboding-forecast/), so there is a record, and the setting of energy shortage was visible well over a year ago. I even published a few articles on how Elon Musk (he has the IP) could get into that field in a few ways. You see, either you contribute energy directly, or you remove the overhead of energy, and Elon Musk was in a perfect position to do either.
So, when your chickens come home to roost (and similar agrarian settings apply), it becomes a party and a half.
And then we get the BS (that stuff that makes grass grow in Texas) setting that follows with “I think the cycles of demand and supply in this particular case, you can’t really predict, right? The point is: what’s the secular trend? The secular trend is what Sam (OpenAI CEO) said, which is, at the end of the day, because quite frankly, the biggest issue we are now having is not a compute glut, but it’s power — it’s sort of the ability to get the builds done fast enough close to power,” Satya said in the podcast. “So, if you can’t do that, you may actually have a bunch of chips sitting in inventory that I can’t plug in. In fact, that is my problem today. It’s not a supply issue of chips; it’s actually the fact that I don’t have warm shells to plug into.” It is utter BS (in my personal view), as I predicted this setting over 639 days ago and I am certain that I am not that much more intelligent than the guy who runs Microsoft (aka Satya Nadella); that is the short and sweet of it. I might be elevated on dopamine at present, but to see Satya admit to the setting I proclaimed for some time gives a rather large rise to the upcoming Stargate questions and the rather large need to get energy to that setting. It is about to become a whole new ballgame.
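For those who want to check that number rather than take my word for it, the arithmetic is a one-liner; the only assumption is mine, namely that the Tom’s Hardware piece and the podcast landed on the 31st of October 2025, which is how I read the ‘11 hours ago’ timestamp:

```python
from datetime import date

forecast  = date(2024, 1, 31)    # 'Forbes Foreboding Forecast' on lawlordtobe.com
admission = date(2025, 10, 31)   # my assumed date for the Nadella/Altman admission

print((admission - forecast).days, "days between the forecast and the admission")   # 639
```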
And as the cookie crumbles, the tech firms and the media will all point at each other, but as I see it, both were not doing their jobs. I am willing to throw this on the pile of shortcomings that courtesans have as they cater to digital dollars, but that song has been played a few times over. And I am slightly too tired (and too energised) to entertain that song. I want to play something new, and perhaps a new gaming IP might solve that for me today (more likely tomorrow).
A setting we are given, and as we see the admission on Tom’s Hardware, some might actually investigate how much energy they are about to come up short on. But don’t fret, these tech companies will happily take the energy due to consumers, as they can afford the new prices, which are likely to be over 10% higher than the previous ones. It is the simple setting of demand and supply. They have already fired over 40,000 people (a global expected number), so do you think that they will stop to consider your domestic needs over the bubble they call AI, just to show that they can actually fuel that setting? Gimme a break.
So YouTube has a few videos on surviving life in a setting where there is no energy; if that fails, ask the people in Ukraine. They have been battling that setting for some time.
Time to enjoy my dopamine rush and have a nice walk in the 83 degrees Fahrenheit shade. It makes me think about the hidden meaning behind Fahrenheit 451 by Ray Bradbury. Wasn’t the hidden setting to stop questioning the reality of things and rely on populism? Isn’t that what we see at present? I admit that no books are being burned, but removing them from view is as bad as burning them. Because when the media is ignoring energy needs, what does that spell in the mind of some? So have a great day and see what you can get that does not require electricity.
There was a game in the late 80’s that I played on the CBM64. It was called Bubble Bobble. You played a cute little dragon, and the aim of the game was to pop as many bubbles as you could. So, fast forward to today, and there were a few news messages. The first one is ‘OpenAI’s $1 Trillion IPO’ (at https://247wallst.com/investing/2025/10/30/openais-1-trillion-ipo/), which I actually saw last of the three. We see ridiculous amounts of money pass by. We are given ‘OpenAI valuation hits $762b after new deal with Microsoft’ with “The deal refashions the $US500 billion ($758 billion) company as a public benefit corporation that is controlled by a nonprofit with a stake in OpenAI’s financial success.” We see all kinds of ‘news’ articles giving these players more and more money. It’s like watching a bad hand of Texas Hold’em where everyone is all in with everything they have. As the information goes, it sits alongside the sacking of 14,000 employees by Amazon. And they will not see the dangers they are putting the population in. This is not merely speculation or presumption. It is the deadly serious danger of bubbles bursting, and we are unwittingly the dragon popping them.
So the article gives us “If anyone needs proof that the AI-driven stock market is frothy, it is this $1 trillion figure. In the first half of the year, OpenAI lost $13.5 billion, on revenue of $4.3 billion. It is on track to lose $27 billion for the year. One estimate shows OpenAI will burn $115 billion by 2029. It may not make money until that year.” So as I see it, that is a valuation priced at least 4 years into the future, with a market as liquid as it is? No one is looking at what Huawei is doing, or whether it can bolster its innovative streak, because when that happens we will get an immediate write-off of no less than $6,000,000,000,000 and it will impact Microsoft (who now owns 27% of OpenAI). OpenAI will bank on the western world to ‘bail’ them out, not realising that the actions of President Trump made that impossible and that both the EU and the Commonwealth are ready and willing to listen to Huawei and China. That is the dreaded undertow in this water.
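To make that froth a little more tangible, here is the same point as plain arithmetic; every figure comes straight from the quote above, the only liberty I take is annualising the half-year revenue:

```python
# Figures from the 247wallst piece quoted above; annualising H1 revenue is my simplification.
valuation         = 1_000_000_000_000   # the mooted $1 trillion IPO figure
h1_revenue        = 4_300_000_000       # revenue, first half of the year
h1_loss           = 13_500_000_000      # loss, first half of the year
full_year_loss    = 27_000_000_000      # the 'on track' loss for the year
burn_through_2029 = 115_000_000_000     # the quoted burn estimate

annualised_revenue = 2 * h1_revenue
print(f"Valuation over annualised revenue : {valuation / annualised_revenue:.0f}x")        # ~116x
print(f"Loss per dollar of H1 revenue     : ${h1_loss / h1_revenue:.2f}")                   # ~$3.14
print(f"Years of full-year losses inside the burn estimate: {burn_through_2029 / full_year_loss:.1f}")  # ~4.3
```

As I see it, that is the bubble in a single line: well over a hundred times revenue for a company losing roughly three dollars for every dollar it brings in.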
All whilst the BBC reports “Under the terms, Microsoft can now pursue artificial general intelligence – sometimes defined as AI that surpasses human intelligence – on its own or with other parties, the companies said. OpenAI also said it was convening an expert panel that will verify any declaration by the company that it has achieved artificial general intelligence. The company did not share who would serve on the panel when approached by the BBC.” And there are two issues already hiding in the shallows. The first is data value. You see, data that cannot be verified or validated is useless and has no value, and these AI chasers have been so involved in the settings of this so-called hyped technology that everyone forgets that it requires data. I think that this is a big ‘oopsy’ in that equation. And the setting that we are given is that it is pushed into the background, all whilst it needs to be front and centre. You see, when the first few class actions are thrown into the ring, lawyers will demand the algorithms and the data settings, and that will scuttle these bubbles like ships in the ocean; the turmoil of those waters will burst the bubbles and drown whoever is caught in that wake. And be certain that you realise that lawyers on a global scale are at this moment gearing up for that first case, because it will give them billions in class actions, and leave it to greed to cut this issue down to size. Microsoft and OpenAI will banter, cry and give them scapegoats for lunch, but they will be out in front and they will be cut down to size. As will Google, and optionally Amazon and IBM too. I already found a few issues in Google’s setting (actors staged into a movie before they were born is my favourite one), and that is merely the tip of the iceberg; it will be bigger than the one that sank the Titanic and it is heading straight for the Good Ship Lollipop (AI). The spectacle will be quite a sight, all the media will hurry to get their pound of flesh, and Microsoft will be massively exposed at that point (due to previous actions).
A setting that is going to hit everyone. And the second issue is blatantly ignored by the media. You see, these data centers: how are they powered? As I see it, the Stargate program will require (my admittedly inaccurate multiple-gigawatt estimate) a massive amount of power. The people in West Virginia are already complaining about what there is, and a multiple of that will be added all over the USA; the UAE and a few other places will see them coming, and these power settings are blatantly short. The UAE is likely closest to par, and that frames the danger of the shortcomings. And what happens to any data center that doesn’t get enough power? Yup, you guessed it, it will go down in a hurry. So how is that fictive setting of AI dealing with this?
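For what it is worth, this is the kind of napkin arithmetic I mean; every input below is an assumption of mine (the GPU count, the watts per accelerator, the overhead factor), not anything Microsoft, OpenAI or the Stargate partners have published. It merely shows how quickly ‘a bunch of GPUs’ turns into gigawatts, and why chips end up sitting in inventory waiting for a warm shell:

```python
# Napkin sketch; every input is my own assumption, not a published figure.
GPUS          = 1_000_000   # accelerators across a Stargate-scale build-out
WATTS_PER_GPU = 1_200       # accelerator plus its share of CPU, network and storage
PUE           = 1.3         # data center overhead (cooling, power conversion)

total_watts = GPUS * WATTS_PER_GPU * PUE
print(f"Total draw, IT plus overhead: {total_watts / 1e9:.2f} GW")             # ~1.56 GW

# Flip it around: how many such GPUs does a fixed 'warm shell' actually power?
SHELL_MEGAWATTS = 300
pluggable = int(SHELL_MEGAWATTS * 1_000_000 / (WATTS_PER_GPU * PUE))
print(f"A {SHELL_MEGAWATTS} MW shell powers roughly {pluggable:,} such GPUs")  # ~192,000
```

Which is exactly why the silicon is not the bottleneck; the shell and the substation are.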
Then we get a new instance (at https://cyberpress.org/new-agent-aware-cloaking-technique-exploits-openai-chatgpt-atlas-browser-to-serve-fake-content/) where we are given ‘New Agent-Aware Cloaking Technique Exploits OpenAI ChatGPT Atlas Browser to Serve Fake Content’. As I personally see it, I never considered that part, but in this day and age the need to serve fake content is as important as anything; it serves the millions of trolls and influencers in many ways, and it degrades the data that is fed to the DML and LLM’s (aka NIP) in a hurry, reducing data credibility and other settings pretty much off the bat.
So what is being done about that? We are given “The vulnerability, termed “agent-aware cloaking,” allows attackers to serve different webpage versions to AI crawlers like OpenAI’s Atlas, ChatGPT, and Perplexity while displaying legitimate content to regular users. This technique represents a significant evolution of traditional cloaking attacks, weaponizing the trust that AI systems place in web-retrieved data.” So where does the internet go after that? So far I have been able to get the goods with the Google browser and it does a fine job, but even that setting comes under scrutiny; until they set a parameter in their browser to only look at Google data, they are in danger of floating rubbish around any given corner.
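The crude end of checking for this is not rocket science either. A minimal sketch of the idea (the target URL is a placeholder, the user-agent strings are my guesses, real crawlers are also identified by IP ranges, and dynamic pages will differ between any two fetches anyway, so treat a mismatch as a reason to look closer, not as proof):

```python
# Fetch the same URL pretending to be a normal browser and an AI crawler,
# then compare what comes back. Identical hashes do not prove the absence of
# cloaking, but different hashes on a static page are a red flag.
import hashlib
import urllib.request

URL = "https://example.com/"   # placeholder target

USER_AGENTS = {
    "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "ai-crawler": "GPTBot/1.0",   # my guess at a crawler-style identifier
}

def fetch_digest(url: str, user_agent: str) -> str:
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

if __name__ == "__main__":
    digests = {name: fetch_digest(URL, ua) for name, ua in USER_AGENTS.items()}
    for name, digest in digests.items():
        print(f"{name:>10}: {digest[:16]}...")
    if len(set(digests.values())) > 1:
        print("Different content served per user agent: possible agent-aware cloaking.")
    else:
        print("Same bytes for both user agents (nothing flagged by this crude check).")
```

If the two hashes differ on a page that should be static, someone is serving the crawler something the humans never see, and that is exactly the degradation of data credibility I mean.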
A setting that is now out in the open, and as we are ‘supposed’ to trust Microsoft and OpenAI until 2029, we are handed an empty eggshell. I am in doubt of it all, as too many players have ‘dissed’ Huawei and they are out there, ready to show the world how it could be done. If they succeed, that $1 trillion IPO is left in the dirt and we get another two years of Microsoft spin on how they can counter that. I put that spin in the same collection box where I put the claim that Microsoft allegedly had its own, more powerful item that could counter Unreal Engine 5. That collection box is in the kitchen and it is referred to as the trashcan.
Yes, this bubble is going ‘bang’ without any noise, because the vested-interest partners need to get their money out before it is too late. And the rest? As I personally see it, the rest is screwed. Have a great day; the weekend has started for me and it will start in 8 hours in Vancouver (but they can begin happy hour in about one hour), so they can start the weekend early. Have a great one and watch out for the bubbles out there.