You see, the article is about the AI bubble and a few other settings. So at times, I want Grok to take a look. No matter what you think, it tends to be a decent solution in DML, and I reckon that Elon Musk with his 500,000 million (sounds more impressive than $500B) has sunk a pretty penny into this solution. I have seen a few shortcomings, but overall it is a decent solution. As I personally see it (for as far as I have seen it), that solution has a problem looking into and through multidimensional viewpoints. That matters for how I usually take my writing, as I am overwhelmed at times with the amount of documentation I go through on a daily basis. As such I got a nice surprise yesterday.
So the story goes off with war hammers (a hidden stage there), then I go into the NPR article and I end up with the stage of tourism (the cost as the Oxford Economics report gives us), and I am still digging into that. But what does Grok give me?
The expert mode gives us:
The linked post directs to a blog by Lawrence van Rijn, a self-styled “Law Lord to be,” where he explores a narrative blending legal aspirations with speculative commentary, notably predicting FIFA might shift the 2026 World Cup from the U.S. to Saudi Arabia due to visa issues and a $435 fee delay, a theory unsupported by official FIFA statements but echoing concerns from a Travel.com article about U.S. tourism losses.
Van Rijn’s blog ties this speculation to an unusual metaphor of “war hammers” clashing on armor, symbolizing economic and technological tensions, particularly an AI bubble warned about in an NPR report (November 23, 2025), which cites MIT economist Daron Acemoglu’s research showing only 3% of people pay for AI services, challenging the trillion-dollar investments by tech giants like OpenAI.
His energy shortage concern aligns with a U.S. Energy Information Administration forecast of a 45% electricity price hike in Texas by 2026 due to data center demand, a trend overlooked by mainstream media focused on AI hype, suggesting a potential societal cost as power grids strain under unreported load increases.
Now, in the article I never mentioned FIFA, the 2026 World Cup or Saudi Arabia, so how did this program come to this? Check out the blog, none of those elements were mentioned there. As some tell us, Grok is a generative artificial intelligence (generative AI) chatbot developed by xAI. So where is that AI program now? This is why I mentioned in previous blogs that 2026 will be the year the class actions start. In my case, I do not care and my blog is not that important, and even if it was, it was meant for actual readers (the flesh and blood kind), which does not apply to Grok. I have seen a few other issues, but this one, in light of the AI bubble story yesterday (17 hours ago), pushed it to the forefront. I could take ‘offense’ to the “self-styled ‘Law Lord to be’”, but whatever, I have been accused of a lot worse by actual people too. And the quote “this speculation to an unusual metaphor of ‘war hammers’” shows that Grok didn’t see through my ruse either (making me somewhat proud), which is ego caressing at best, but I have an ego, I merely don’t let it out too often (it tends to get a little too frisky with details). At present I also see an idea that both the UAE and Saudi Arabia could use in their entertainment. There is an upgrade for Trojena (as I see it), and there are a few settings for the Abu Dhabi Marina as well. All in a day’s work, but I need to contend with data to see how that goes. And I tend to take my ideas through a sifter to get the best materials as fine as possible, but that was today, so there will be more coming soon enough.
But what do you do when an AI system bleeds information from other sources? Especially when that data is not validated or verified, and both seem to be the case here. As I see it, there is every chance that some will direct these AI systems to give the wrong data so that these people can start class actions. I reckon that not too many people are considering this setting, especially those in harm’s way. And that is the setting that 2026 is likely to bring. As I see it, there will be too many law firms of the ambulance chaser kind to ignore this setting. That is the effect that 8-figure class actions tend to bring, and with the 8-figure number I am being optimistic. When I see what is possible, there is every chance that any player in this field is looking at 9 or even 10-figure settlements, especially when it concerns medical data. And no matter what steps these firms take, there will be an ambulance chaser who sees a hidden opportunity. Even if there is a second-tier option where a cyber attack throws the data into turmoil, those legal minds will craft a new setting that those AI firms never considered could happen.
I am not being dramatic or overly doom-speaking. I have seen enough greed all around me to see that this will happen. A mere three months ago we saw “The “Commonwealth Bank AI lawsuit” refers to a dispute where the Finance Sector Union (FSU) challenged CBA for misleading staff about job cuts related to an AI chatbot implementation. The bank initially made 45 call centre workers redundant but later reversed the decision, calling it a mistake after the union raised concerns at the Fair Work Commission. The case highlighted issues of transparency, worker support, and the handling of job displacement due to AI.” So at that point, how dangerous is the setting where any AI is trusted to any degree? And that is before some board of directors sets the term that these AI investments had better pay off, which will cause people to do silly (read: stupid) things. A setting that is likely to happen as soon as next year.
And at this time, Grok is merely ploughing on, setting the stage where someone will trust it to make life-changing changes to their firm or data, and even if it is not Grok, there is every chance that OpenAI will do that, and that puts Microsoft in a peculiarly vulnerable position.
Have a great day, time for some ice cream, it was 33 degrees today, so my living room is hot as hell, as such ice cream is my next stage of cooling myself.
It is a specific sound, nothing compares to it and it isn’t entirely fictional. Some might remember the Walter Hill movie Streets of Fire (1984) where two men slug it out with hammers, but that is not it. When a warhammer slams into metal armour, the armour becomes a drum and that sound is heard all over the battlefield (the wearer of that armour hears a lot more than that sound), but it is distinct, and I reckon that some of those hammer wielders would have created some kind of crescendo on these knights. So that was ‘ringing’ in my ears when NPR gave us ‘Here’s why concerns about an AI bubble are bigger than ever’ a few days ago (at https://www.npr.org/2025/11/23/nx-s1-5615410/ai-bubble-nvidia-openai-revenue-bust-data-centers) and what do you know. They made the same mistake, but we’ll get to that.
The article reads quite nicely and Bobby Allyn did a good job (beside the one miss), but let’s get to the starting blocks. It starts with “A frothy time for Huang, to be sure, which makes it all the more understandable why his first statement to investors on a recent earnings call was an attempt to deflate bubble fears. “There’s been a lot of talk about an AI bubble,” he told shareholders. “From our vantage point, we see something very different.”” So then we get three different names all giving ‘their’ point of view with ““The idea that we’re going to have a demand problem five years from now, to me, seems quite absurd,” said prominent Silicon Valley investor Ben Horowitz, adding: “if you look at demand and supply and what’s going on and multiples against growth, it doesn’t look like a bubble at all to me.” Appearing on CNBC, JPMorgan Chase executive Mary Callahan Erdoes said calling the amount of money rushing into AI right now a bubble is “a crazy concept,” declaring that “we are on the precipice of a major, major revolution in a way that companies operate.” Yet a look under the hood of what’s really going on right now in the AI industry is enough to deliver serious doubt, said Paul Kedrosky, a venture capitalist who is now a research fellow at MIT’s Institute for the Digital Economy.” All three names give a nice ‘presentation’ to appease the rumblings within an investor setting. Ben Horowitz, Mary Callahan Erdoes and Paul Kedrosky are seemingly set on raking in whatever they can, and then the fourth shines a light on this (not in the way he intended), as we see “Take OpenAI, the ChatGPT maker that set off the AI race in late 2022. Its CEO Sam Altman has said the company is making $20 billion in revenue a year, and it plans to spend $1.4 trillion on data centers over the next eight years. That growth, of course, would rely on ever-ballooning sales from more and more people and businesses purchasing its AI services.” Did you see the setting? He is making 20 billion and investing $1.4 trillion; that spend represents a far larger slice, and the 20 billion is likely to grow (perhaps even to 100 billion a year). And now the sides of hammers are slamming into armour. Even at 100 billion a year it will still take 14 years to earn that spend back, and does anyone have any idea how long 14 years is? And I reckon that $1.4 trillion (at 4.5%) implies an interest bill of $63,000,000,000 a year. That alone eats the better part of a year of revenue, and that is the hopeful glare where he is making 100 billion a year. So what gives with this, because at some point investors decide that the formula is off. There is no tax deductibility. That is money that is due, the banks will get their dividend, and whoever thinks that all this goes at zero percent is ludicrously asleep, and that is before the missing element comes out.
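To make that back-of-the-envelope visible, here is a minimal sketch of the same sums; the 4.5% rate and the optimistic 100 billion a year are my assumptions, not OpenAI figures.

```python
# Back-of-the-envelope check of the numbers above (the rate and the hoped-for revenue are assumptions).
capex = 1.4e12          # planned data center spend over eight years ($1.4 trillion)
revenue_now = 20e9      # stated revenue today ($20 billion a year)
revenue_hope = 100e9    # optimistic future revenue (my assumption)
rate = 0.045            # assumed interest rate on the full amount

print(capex / revenue_hope)        # ~14 years just to earn the spend back, even at $100bn/yr
print(capex * rate)                # ~$63bn a year in interest at 4.5%
print(capex * rate / revenue_now)  # the interest alone is more than 3x today's revenue
```

Run it and the point stands: the interest bill alone dwarfs today’s revenue, and that is before a single operating cost is counted.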
So then in comes Daron Acemoglu with “A growing body of research indicates most firms are not seeing chatbots affect their bottom lines, and just 3% of people pay for AI, according to one analysis. “These models are being hyped up, and we’re investing more than we should,” said Daron Acemoglu, an economist at MIT, who was awarded the 2024 Nobel Memorial Prize in Economic Sciences.” He comes at this from another angle and tells us that we are investing more than we should. All these firms are seeing the pot at the end of the rainbow, but there is the hidden snag: we learned early in life that the rainbow is the result of sunlight on rainwater, it always curves to sit ‘just’ beyond the horizon, it never hits the ground, and there will be no pot of gold at the end of it according to Lucky the Leprechaun (I have his fax number). That was not the side I am aiming for, but it illustrates the idiocy we see at present. They are all investing too much into something that does not yet exist, but that is beside the point. There are massive options for DML and LLM solutions, but do you think that this is worth trillions? It follows when we get to “Nonetheless, Amazon, Google, Meta and Microsoft are set to collectively sink around $400 billion on AI this year, mostly for funding data centers. Some of the companies are set to devote about 50% of their current cash flow to data center construction.
Or to put it another way: every iPhone user on earth would have to pay more than $250 to pay for that amount of spending. “That’s not going to happen,” Kedrosky said.” This comes from Paul Kedrosky, a venture capitalist who is now a research fellow at MIT’s Institute for the Digital Economy, and he is right. But that too is not the angle I am going for. There are two voices, each in their own field of vision, something they know, and they are seeing the edges of what cannot be contained; one even got a Nobel Memorial Prize for his efforts (a past accomplishment). And I reckon all these howling bitches want their government to ‘save’ them when the bough breaks on these waves. So Andy Jassy, Sundar Pichai, Mark Zuckerberg and Satya Nadella (Amazon, Google, Meta and Microsoft) will expect the tax system to bail them out, and there is no real danger to them, they might get fired but they’ll survive this. Andy Jassy is as far as I know the poorest of the lot and he has 500 million, so he will survive in whatever place he has. But that is the danger. The investors and the taxpayers (you and me) get to suffer from this greed-filled frenzy.
But then we get “Analyst Gil Luria of the D.A. Davidson investment firm, who has been tracking Big Tech’s data center boom, said some of the financial maneuvers Silicon Valley is making are structured to keep the appearance of debt off of balance sheets, using what’s known as “special purpose vehicles.””, as well as “The tech firm makes an investment in the data center, outside investors put up most of the cash, then the special purpose vehicle borrows money to buy the chips that are inside the data centers. The tech company gets the benefit of the increased computing capacity but it doesn’t weigh down the company’s balance sheet with debt.” And here we get another failure. It is the failure of the current administration, which does not adapt the tax laws to close off these constructions, and the administration is the larger stakeholder in this. We get an example in the article stating “Blue Owl Capital and Meta for a data center in Louisiana”, and this is only part of the equation. You see, they are ’spreading the love’ around because that is the ‘safe’ setting and they know what comes next. The Verge gave us ‘Nvidia says some AI GPUs are ‘sold out,’ grows data center business by $10B in just three months’ (at https://www.theverge.com/tech/824111/nvidia-q3-2026-earnings-data-center-revenue) and that is the first part of the equation. What do you think will power all this? That is the angle I am holding onto. All these data centers will need energy and they will take it away from people like you and me. And only 4 hours ago we see ‘Nvidia plays down Google chip threat concerns’ and it is all about the AI race, which is as I said non-existent, but the energy required to field these hundreds of thousands of GPUs is real, and no one is making a table of what is required to fuel these data centers because it is not on ‘their plate’, yet the need for energy becomes real, and really soon too. We do not have the surplus to take care of this, and places like Texas give us “Electricity demand is also going up, with much of it concentrated in Texas due to “data centers and cryptocurrency mining facilities,”” with the added “Driving the rise in wholesale prices next year is primarily a projected 45% increase at the Electric Reliability Council of Texas-North pricing hub. “Natural gas prices tend to be the biggest determinant of power prices,” the EIA said. “But in 2026, the increase in power prices in ERCOT tends to reflect large hourly spikes in the summer months due to high demand combined with relatively low supply in this region.”” Now this is not true for the whole world, but we see here a “projected 45% increase” and that is for 2026. So where are these data centers, what are their energy surpluses and what is to come? No one is looking at that, but any data centre can be hit with a brownout, a partial and temporary drop in voltage in the electrical power supply. When that happens the data centre shuts down; energy is paramount for all those GPUs and there had better not be any issue with it. I saw this a year ago, so why isn’t the media looking into this? I saw one article where that question was not answered and the media just shoved it aside, but as I see it, it should be at the forefront of any media setting.
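And to show why I keep hammering on the grid (pun intended), here is a rough sketch; the GPU count, the per-GPU draw and the cooling overhead below are purely my own assumptions, not figures from Nvidia or the EIA.

```python
# Rough sketch of data center power demand (every number here is an illustrative assumption).
gpus = 100_000            # assumed GPUs in one large AI data center
watts_per_gpu = 1_000     # assumed draw per GPU including server overhead (W)
pue = 1.3                 # assumed Power Usage Effectiveness (cooling and losses)

it_load_mw = gpus * watts_per_gpu / 1e6   # IT load in megawatts
total_mw = it_load_mw * pue               # total grid draw in megawatts
households = total_mw * 1e6 / 1_200       # assuming ~1.2 kW average draw per household

print(total_mw)          # ~130 MW of continuous demand for one site
print(int(households))   # roughly the draw of ~100,000 households
```

Scale that to the hundreds of thousands of GPUs these players brag about and you are looking at the draw of a mid-sized city per site, which is exactly the part nobody puts in a table.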
It will happen and the people will suffer, but as I see it (and mentioned before), the media is whoring for digital dollars. They need their advertisement money from these 4 places and a few more, all ready for advertisement attention, and the media plays ball because they want their digital dollars (as I personally see it).
So whilst the NPR article is quite nice, the one element missing is what makes this bubble rear its ugly head, because too many want their coins for their effort and that is what is required. But what does the audience require? And the audience is you and me, dear reader. I have tied a lot of my concerns to energy falling short, but there is only so much I can do, and it is going to be 32 degrees (Celsius) today, so what happens when the energy slows down for 5.56 million people in Sydney? Because the data centers will make the first demand from their energy providers, or they will slap a lawsuit worth billions on that energy provider. And we the people (wherever we are) are facing what comes next: keeping data centers cool and powered whilst we the people boil in our own homes. That is the future I am predicting and people think I am wrong, but did they make the calculation of what these data centers require? Are they seeing the energy shortfalls that are impeding these data centers? And the energy providers will take the money and the contracts, because they will not say no to that kind of money, but that is exactly what we are facing in the short run. And the investors? Well, I don’t really care about them, they invested, and if you aren’t willing to lose it all with a mere card to help you through (card below), you aren’t a real investor, you are merely playing it safe and in that world there are no bubbles.
Remind me, how did that end in 2008? The speculated cost was set at $16 trillion in U.S. household wealth, and this bubble is significantly larger than the 2008 one, and this time they are going all in with money most of them do not have. So that is what is coming and my fears do not matter, but the setting that NPR gives us all with ‘Here’s why concerns about an AI bubble are bigger than ever’ matters, and that is what I see coming.
So have a great day and never trust one source, always verify what you read through other sources. That part was shown to be true when we all saw (from various sources) that “The United States is on track to lose $12.5 billion in international travel spending this year” whilst my calculations made it between 80 and 130 billion, and some laughed at my predictions a few months earlier and I get that. I would laugh too when those ‘economists’ state one amount and I come with a number over 700% larger. I get that, but now (apparently) there is an Oxford Economics report that gives us “Damning report says U.S. tourism faces $64 billion blow as Trump administration’s trade wars drive away foreign visitors and cut spending”, so I have that to chase down now, but it shows that my numbers were mostly spot on, at least a lot better than whatever those economists are giving you. So never trust merely one source, even if they believe themselves to be on the right track. But that is enough about that; consider why some bubble settings are underexposed, and when you see that NPR gave you three additional angles and missed mine (likely not intentional), consider what those investment firms are overlooking (likely intentional), because the setting that they are willing to lose 100% is ludicrous, they have settings for that, and as the government bailed them out the last time, they think it will save them this time too.
Have a great day today, I need an ice cream at 4:30 in the morning. I still have some, so yay me.
That is what seems to be happening. The first one was a simple message that Oracle is headed for doom according to Wall Street (I don’t agree with that), but it made me take another look, and to make it simpler I will look at the articles chronologically.
The first one was the Wall Street Journal (4 days ago), with ‘Oracle Was an AI Darling on Wall Street. Then Reality Set In’ (at https://www.wsj.com/tech/oracle-was-an-ai-darling-on-wall-street-then-reality-set-in-0d173758) with “Shares have lost gains from a September AI-fueled pop, and the company’s debt load is growing” with the added “Investors nervous about the scale of capital that technology companies are plowing into artificial-intelligence infrastructure rattled stocks this week. Oracle has been one of the companies hardest hit”, but here is the larger setting. As I see it, these stocks are manipulated by others, whoever they are: hedge funds, their influencers and other parties calling for doom, all whilst the setting of the AI bubble is exploited by unknown gratifiers of self. I know that this sounds ominous and non-specific, but there is no way most of us (including people with a much higher degree of economic knowledge than I will ever have) can tell who is pulling which strings. And the stage of bubble endearing is out there (especially on Wall Street). Then 14 hours ago we get ‘Oracle (ORCL): Evaluating Valuation After $30B AI Cloud Win and Rising Credit Risk Concerns’ (at https://simplywall.st/stocks/us/software/nyse-orcl/oracle/news/oracle-orcl-evaluating-valuation-after-30b-ai-cloud-win-and/amp) where we see “Recent headlines have only amplified the spotlight on Oracle’s cloud ambitions, but the past few months have been rocky for its share price. After a surge tied to AI-driven optimism, Oracle’s 1-month share price return of -29.9% and a year-to-date gain of 19.7% tell the story: momentum has faded sharply in the near term. However, the 1-year total shareholder return still sits at 4.4% and its five-year total return remains a standout at nearly 269%. This combination of volatility and long-term outperformance reflects a market grappling with Oracle’s rapid strategic shift, balance sheet risks, and execution on new contracts.” I am not debating the numbers, but no one is looking at the technology behind this. As I see it, places like Snowflake and Oracle have the best technology for these DML and LLM solutions (OK, there are a few more) and for now, whoever has the best technology will survive the bubble, and whoever is betting on that AI bubble going their way needs Oracle at the very least, and not in a weakened state, but that is merely my point of view. So last we get the Motley Fool a mere 7 hours ago giving us ‘Billionaire David Tepper Dumped Appaloosa’s Stake in Oracle and Is Piling Into a Sector That Wall Street Thinks Will Outperform’ (at https://www.fool.com/investing/2025/11/23/billionaire-david-tepper-dumped-appaloosas-stake-i/) where we see “Billionaire David Tepper’s track record in the stock market is nothing short of remarkable. According to CNBC, the current owner of the Carolina Panthers pro football team launched his hedge fund Appaloosa Management in 1993 and generated annual returns of at least 25% for decades. Today, Tepper still runs Appaloosa, but it is now a family office, where he manages his own wealth.” Now we get the crazy stuff (this usually happens when I speculate). This gives us a person like David Tepper who might like to exploit Oracle to make it seem more volatile and exploit a shorting of options to make himself (a lot) richer. And when clever people become self-managing, they tend to listen to their darker nature.
Now I could be all wrong, but when Wall Street is going after one of the most innovative and secure companies on the planet just to satisfy the greed of Wall Street, I get to become a little agitated. So could it be that Oracle was drawn into the ‘fad’ and lost it? No, they clearly stated that there would be little return until 2028, a decent prognosis, and with the proper settings of DML and LLM, finding better and profitable revenue-making streams by 2027 is a decent target to have and it is seemingly an achievable one. In the meantime IBM can figure out (evolve) their shallow circuits and start working on their trinary operating system. I have no idea where they are at present, but the idea of this getting ready for a 2040 release is not out of the question. In the meantime Oracle can fill the void for millions of corporations that already have data, warehouses and form settings. And there are plenty of other providers of data systems.
So when we are given “The tech company Oracle is not one of the “Magnificent Seven,” but it has emerged as a strong beneficiary of artificial intelligence (AI), thanks to its specialized data centers that contain huge clusters of graphics processing units (GPUs) to train large language models (LLMs) that power AI.
In September, the company reported strong earnings for the first quarter of its fiscal 2026, along with blowout guidance. Remaining performance obligations increased 359% year over year to $455 billion, as it signed data center agreements with major hyperscalers, including OpenAI.”
So whilst we see “Oracle is not one of the “Magnificent Seven,” but it has emerged as a strong beneficiary of artificial intelligence (AI)”, we need to take a different look at this. Oracle was never a strong beneficiary of AI, it was a strong vendor of data technologies, and AI is about data, and in all of this someone is ‘fitting’ Oracle into a stage that everyone just blatantly accepts without asking too many questions (the media, for example). With the additional “to train large language models (LLMs) that power AI”, the hidden gem is in the second statement. AI and LLM are not the same; you only partially train real AI, this is different, and those ‘Magnificent Seven’ want you to look away from that. So, when was the last time that you actually read that AI does not yet exist? That is the created bubble, and players like Oracle are indifferent to this, unless you spike the game. It has stocks, it has options, and someone is turning influencers to their own use of greed. And I object to this, Oracle has proven itself for decades, longer than players like Microsoft and Google. So when we see ‘Buying the sector that Wall Street is bullish on’ we see another hidden setting: the bullishness of Wall Street. Do you think they don’t know that AI is a non-existing setting? So why go after the one technology that will make data work? That setting is centre in all this, and I object to those who go after Oracle. So when you answer the call of reality, consider who is giving you the AI setting and who is giving you the DML/LLM stage of a data solution that can help your company.
Have a great day, we are seemingly all on Monday at present.
OK, I am over my anger spat from yesterday (still growling though) and in other news I noticed that Grok (Musk’s baby) cannot truly deal with multidimensional viewpoints, which is good to know. But today I tried to focus on Oracle. You know, whatever AI bubble will hit us (and it will), Oracle shouldn’t be as affected as some of the data vendors who claim that they have the golden AI child in their crib (a good term to use a month before Christmas). I get that some people are ‘sensitive’ to the doom speakers we see all over the internet and some will dump whatever they have to ‘secure’ what they have, but the setting of those doom speakers is to align THEIR alleged profit needs with others dumping their future. I do not agree. You see, Oracle, Snowflake and a few others offer services and they are captured by others. Snowflake has a data setting that can be used whether AI comes or not, whether people need it or not. And they will be hurt when the firms go ‘belly up’ because it will count as lost revenue. But that is all it is, lost revenue. And yes, both will be hurting when the AI bubble comes crashing down on all of us. But the stage that we see is that they will skate off the dust (in one case snow) and that is the larger picture. So I took a look at Oracle and behold, on Simply Wall Street we get ‘Oracle (ORCL) Is Down 10.8% After Securing $30 Billion Annual Cloud Deal – Has The Bull Case Changed?’ (at https://simplywall.st/stocks/us/software/nyse-orcl/oracle/news/oracle-orcl-is-down-108-after-securing-30-billion-annual-clo) with these sub-points:
Oracle recently announced a major cloud services contract worth US$30 billion annually, set to begin generating revenue in fiscal 2028 and nearly tripling the size of its existing cloud infrastructure business.
This deal offers Oracle significantly greater long-term growth visibility and serves as a major endorsement of the company’s aggressive cloud and artificial intelligence strategy, even as investors remain focused on rising debt and credit risks.
We’ll examine how this multi-billion-dollar cloud contract could reshape Oracle’s investment narrative, particularly given its bold AI infrastructure expansion.
So they triple their ‘business’ and they lose 10.8%? It leads to questions. As I personally see it, Wall Street is trying to insulate itself from the bubble that other (mostly) software vendors bring to the table. And Simply Wall Street gives us “To believe in Oracle as a shareholder right now is to trust in its transformation into a major provider of cloud and AI infrastructure to sustain growth, despite high debt and reliance on major AI customers. The recent announcement of a US$30 billion annual cloud contract brings welcome long-term visibility, but it does not change the near-term risk: heavy capital spending and dependence on sustained AI demand from a small set of large clients remain the central issues for the stock.” And I can get behind that train of thought, although I think that Oracle and a few others are decently protected from that setting. No matter how the non-existent AI goes, DML needs data and data needs secure and reliable storage. So in comes Oracle in plenty of these places and they do their job. If 90% of that business goes boom, they will already have collected on those service terms for that year at least, 3-5 years if they were clever. So no biggy; 3-5 years collected is collected revenue, and even if that firm goes bust after 30 days, they might get over it (not really).
And then we get two parts: “Oracle Health’s next-generation EHR earning ONC Health IT certification stands out. This development showcases Oracle’s commitment to embedding AI into essential enterprise applications, which supports a key catalyst: broadening the addressable market and stickiness of its cloud offerings as adoption grows across sectors, particularly healthcare. In contrast, investors should be aware that the scale of Oracle’s capital commitment brings risks that could magnify if…” OK, I am on board with these settings. I kinda disagree, but then I lack economic degrees and a few people I do know will completely see this part. You see, I personally see “Oracle’s commitment to embedding AI into essential enterprise applications” as a plus all across the board. Even if I do believe that AI doesn’t exist, the data will be coming, and when it is ironed out Oracle was ready from the get-go (when they translate their solutions to a trinary setting), and I do get (but personally disagree with) “the scale of Oracle’s capital commitment brings risks that could magnify if”. Yes, there is risk, but as I see it Oracle brings a solution that is applicable to this frontier, even if it cannot be used to its full potential at present. So there is a risk, but when these vendors pay 5 years upfront, it becomes instant profit at no use of their clouds. You get a cloud with a population of 15 million, but it is inhabited by 1.5 million. As such they have a decade of resources to spare. I know that things are not that simple and there is more, but what I am trying to say is that there is a level of protection that some have and many will not. Oracle is on the good side of that equation (as is Snowflake, Azure, iCloud, Google Gemini and whatever IBM has), oh, and the chips of Nvidia are also decently safe until we know how Huawei is doing.
And the setting we are also given, “Oracle’s outlook forecasts $99.5 billion in revenue and $25.3 billion in earnings by 2028. This is based on annual revenue growth of 20.1% and an earnings increase of $12.9 billion from current earnings of $12.4 billion”, matters as Oracle is predicting that revenue comes calling in 2028, so anyone trying to dump their stock now is as stupid as they can be. They are telling their shareholders that for now revenue is thimble-sized, but after 2028, which is basically 24 months away, the big guns come calling and the revenue pie is shared with its shareholders. So you do need brass balls to do this and you should not do this with your savings, that is where hedge funds come in, but the view is realistic. The other day I saw Snowflake use DML in the most innovative way; one of their speakers showed me a new lost-and-found application and it was groundbreaking. Considering the amount of lost and found out there at airports and bus stations, they showed me how a process of a month was reduced to a 10-minute solution. As I saw it, places like Dubai, London and Abu Dhabi airport making this beneficial for their 90 million passengers is almost unheard of, and I am merely mentioning three of dozens upon dozens of needy customers all over the world. A direct consequence of ‘AI’ particulars (I still think it is DML with LLM), but no matter the label, it is directly applicable to whomever has such a setting, and whilst we see the stage of ‘most usage fails in its first instance’, this is not one of them, and as such in those places Oracle/Snowflake is a direct win. A simple setting that has groundbreaking impact. So where is the risk there? I know places have risks, but to see this simple application work shows that some are out there fighting the good fight on an achievable setting, and no IP was trained upon and no class actions are to follow. I call that a clear win.
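As a small sanity check on that quoted outlook, here is a sketch using only the quoted figures; the assumption that 2028 sits roughly three compounding years out is my own reading, not Oracle’s.

```python
# Sanity check of the quoted Oracle outlook (derived from the quoted figures only).
revenue_2028 = 99.5e9   # forecast revenue by 2028
growth = 0.201          # forecast annual revenue growth
earnings_now = 12.4e9   # quoted current earnings
earnings_add = 12.9e9   # quoted earnings increase

implied_revenue_now = revenue_2028 / (1 + growth) ** 3   # assuming ~3 years of compounding
print(implied_revenue_now / 1e9)             # ~57.4bn implied starting revenue
print((earnings_now + earnings_add) / 1e9)   # 25.3bn, matching the quoted earnings figure
```

So the forecast is at least internally consistent; whether the growth rate itself holds is the part the hedge funds are betting on.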
So, before you sell your stock in Oracle like a little girl, consider what you have bought and consider who wants you to sell, and why, because they are not telling you this for your sake, they have their own interests at heart. I am not telling you to sell anything. I am merely telling you to consider what you bought and what actual risks you are running if you sell before 2029. It is that simple.
Have a great day (yes, Americans too, I was angry yesterday); those bastards in Vancouver and Toronto are still enjoying their Saturday.
This happens to all of us, you, me, everybody with a soul and a decent setting towards ethical boundaries. So when I heard yesterday about the Ukrainian setting, I kinda lost it, but I refrained from acting until I had most of the evidence.
Ukrainian President Volodymyr Zelenskyy vowed he would not betray his country’s national interest, as he addressed news of a US-led peace plan to end Russia’s war.
The 28-point plan that America has presented to Ukraine endorses some of Moscow’s demands and leaves security guarantees for Kyiv vague.
Mr Zelenskyy has said Ukraine now faces the difficult choice of either “losing dignity or risk losing a major partner”.
And we get in addition “after the US presented Kyiv with a peace plan that endorses key Russian demands. Speaking in the street outside his office, a location he uses only rarely for major addresses, the Ukrainian president said his country was trying to preserve its freedom while retaining the support of its most important ally.”
This is where I kinda lost it. This president Joker (his new nickname), this 6-time loser, extends a helping hand to Russia?
He sided with Russia (Against Ukraine)
He threatened members of Congress with hanging (Sens. Elissa Slotkin of Michigan, Mark Kelly of Arizona, Reps. Jason Crow of Colorado, Chris Deluzio and Chrissy Houlahan of Pennsylvania and Maggie Goodlander of New Hampshire)
He is pressuring Canada (51st state remarks for a start)
He is allegedly illegally attacking nations with tariffs
He is a convicted felon (on over 25 counts)
He failed the American Economy (Tourism)
I am now calling on the Nobel committees (the peace prize is handled by the Norwegian one) to deny him any awards for the rest of his life. A person of this setting should not be awarded anything (except a dunce cap). Furthermore, I call on any Commonwealth nation and any EU nation to support Ukraine as best as you possibly can. I released several IP parts that could end Russian nuclear reactors as well as sink their naval capacity. I also have an option to take away their air power in a new and innovative way, but that is still in the works. Russia has over 1,000 airports and I figured on a drone setting that could end that nice setting for the bulk of them; what a lovely surprise it would be if these ‘supersets’ cannot take off, a slim setting, but there you have it. DARPA was so set on finding military solutions that they seemingly forgot about the other weaknesses air forces tend to have.
More important is the message that I and many like me support President Volodymyr Zelenskyy and the Ukrainian people. On a lighter note, who would not support this Paddington Bear (2014, 2017) when it comes to it.
And the setting that Washington gave Ukraine, that they agreed with Russia without Ukraine, is “Washington has presented Kyiv with a 28-point plan, which calls for Ukraine to cede territory, accept limits to its military and renounce ambitions to join NATO”, so how about limiting Russian forces by making 1,000 airfields unusable? How about making naval options (including the merchant navy) obsolete and redundant? And how about NATO gets to Ukraine in the next 7 days? I reckon this is only possible with British, Dutch and German forces coming together on this. France will become the buffer army for European territory.
Am I angry enough? Well, I still have the option of making the nuclear reactors melt down on themselves, and that, if functional, could give the Russian people a new consideration of cold; February should be frisky in Russia. So there we have it, I might not be some kind of Sylvester Stallone, but I used to be a decent marksman and there is nothing wrong with my innovative creativity, so let’s have fun with this, and after that all barrels will be pointed at America for siding with Russia. I am calling for a complete segregation of economic assistance to America. Goods and services. Canada is doing its part, let’s see what the rest can do. When no one hands them oil, their own oil will have to support them and that will cost them dearly. There is no need for them to export their oil and get cheap oil abroad. They can all fuel themselves in America.
I am actually this angry. If you are not an American, have a great day.
Like Poodles, I seem to have misplaced my marbles. AKA I lost them completely. Now only 9 hours ago I shouted that I am sick of the AI bubble, but a few minutes ago I got called back into that fray. You see, I was woken up by an image.
This is the image and it gives us ‘Oracle’s $300bn OpenAI deal is now valued at minus $74bn’; there is no way this is happening. You see, I have clearly stated that the bubble is coming. But in this, Oracle has a set state of technologies it is contributing. As such, where is the bubble blowing up in the face of OpenAI and Microsoft? In this, the Financial Times (at https://www.ft.com/content/064bbca0-1cb2-45ab-85f4-25fdfc318d89) is giving us ‘Oracle is already underwater on its ‘astonishing’ $300bn OpenAI deal’. So where is the damage to the other two? We are given “OK, yes, it’s a gross simplification to just look at market cap. But equivalents to Oracle shares are little changed over the same period (Nasdaq Composite, Microsoft, Dow Jones US Software Index), so the $60bn loss figure is not entirely wrong. Oracle’s “astonishing quarter” really has cost it nearly as much as one General Motors, or two Kraft Heinz. Investor unease stems from Big Red betting a debt-financed data farm on OpenAI, as MainFT reported last week. We’ve nothing much to add to that report other than the below charts showing how much Oracle has, in effect, become OpenAI’s US public market proxy:” There might be some loss on Oracle (if that happens) and later on we were given (after a stack of graphics, see the story for that) “But Oracle is not the only laggard. Broadcom and Amazon are both down following OpenAI deal news, while Nvidia’s barely changed since its investment agreement in September. Without a share price lift, what’s the point? A combined trillion dollars of AI capex might look like commitment, but investment fashions are fickle.” And in this, I still have doubts on the reporting side of things. My own feeling (not hard-core numbers) is that Oracle and Amazon are the best players to survive this as their technology is solid. When AI does come, they are likely the only two to set it right, and the entire article goes out of its way not to mention Microsoft. But in all this Microsoft has made significant investments in OpenAI and has rights to OpenAI’s Intellectual Property (IP). This comes down to Microsoft holding a stake in OpenAI’s for-profit arm, OpenAI Group PBC, valued at approximately $135 billion, which represents about 27% of the company. So how is Microsoft not mentioned?
As such, how come Oracle is underwater? Is it testing scuba gear? And if the article is indeed true, what is the value of OpenAI now? Because that will also drown the 27% of it (holding the name Microsoft), and that image is missing from that equation. If this is the bubble bursting, which might be true (a year before I predicted it), then it stands to reason that this is also impacting Amazon, Google, IBM, Microsoft and OpenAI. As such this article seems a little far-fetched, a little immature and largely premature by not naming all the players in this game. I personally thought that Oracle would be one of the winners in all of this, or better stated, the smallest loser in this multi-trillion bubble.
So what gives? And in this I might be incorrect and largely missing the point, but a write-off to the amount of nearly half a trillion dollars has more underwriters, and mentioning merely Oracle is a little far-fetched, no matter how fashionable they all seem to be. And for that matter, as Microsoft has been ‘advocating’ their Copilot program, how deep are they in? Because the Oracle write-off will be squarely in the face of that Nadella dude. As he seemingly already missed the Builder.ai setting, this might be the one ending his career, and whoever comes next might want to commit suicide instead of accepting whatever promotion is coming his way (I know it is a dark setting), but the image is a little disconcerting at present. And the images that the Financial Times gives us, like the hyperscaler capex, show Microsoft to be 3 times deeper in the water than Oracle is, so why aren’t they mentioned in the text? And in those same images Amazon is in way over its head and that is merely the beginning of a bubble going sideways on everyone. As such, is this a storm in a cup of water? If that is so, why is Oracle underwater? And there is ample reason to see me as a non-economist, I never was nor wanted to be one. But the media raises questions here. And I agree, Oracle has a long way to go to break even, but if they do not, neither do Amazon, Microsoft and OpenAI, and that part is seemingly missing too. If anything, Larry Ellison could pay the shortcomings with his petty cash (he allegedly has 250,000 million), that is his own dime, and the others won’t even come near that amount.
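To put a number on that missing image, here is a sketch using only the stake figures mentioned above; the 20% write-down is a made-up example, not anyone’s forecast.

```python
# Implied OpenAI valuation from the Microsoft stake figures mentioned above.
stake_value = 135e9     # reported value of Microsoft's stake
stake_share = 0.27      # reported share of OpenAI Group PBC

implied_openai_value = stake_value / stake_share
print(implied_openai_value / 1e9)            # ~500bn implied total value of OpenAI

markdown = 0.20                              # hypothetical 20% write-down, purely illustrative
print(stake_value * markdown / 1e9)          # ~27bn hit to Microsoft under that assumption
```

If OpenAI is the problem, the arithmetic says Microsoft cannot stay out of the headline, which is exactly my complaint about the reporting.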
So whilst we wait for someone to make sense of this all, we need to walk carefully and not panic, because these settings tend to be the stage where the panicky people sell what they can for dimes on the dollar, and that is not how I want to see players like Microsoft jump that shark. This is not any kind of anti-Microsoft deal, it is them calling the others not innovative whilst there isn’t an innovative bone in that cadaver. So whilst we want to call the cards, the only thing I do is call the cards of the Financial Times and likewise reporting media, calling out the missing settings of loss towards Microsoft and OpenAI. It is the best I can do; I know an economics major who could easily do that, but he is busy running Canada at the moment.
Have a great day and I apologize for causing an optional panic, which was not my intention.
That is the question I had today/this morning. You see, I saw a few things happen/unfold and it made me think on several other settings. To get there, let me take you through the settings I already knew. The first cog in this machine is American tourism. The ‘setting’ is that THEY (whoever they are) expect a $12.5 billion loss. The data from a few sources already gives a multitude of that: the airports, the BNB industry and several other retail settings. Some sources give the losses of 12 airports alone, which go far beyond the $12.5 billion, and as I saw it that part is a mere $30-$45 billion; it is hard to be more precise when you do not have access to the raw numbers. But in a chain of airfares, visas, BNB/hotels, snacks/diversities and staff incomes I got to $80-$135 billion, and I think that I was being kind to the situation as I took merely the most conservative numbers, as such the damage could be decently more.
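The chain itself is simple arithmetic; here is a sketch with purely illustrative numbers, because the visitor drop and the per-trip amounts below are my assumptions, not the raw data I just said I do not have.

```python
# Illustrative chain calculation of tourism losses (every number here is an assumption).
lost_visitors = 8_000_000     # assumed drop in international visitors for the year
per_trip = {
    "airfare": 900,           # assumed average spend per category, in USD
    "visas_fees": 200,
    "hotels_bnb": 1_500,
    "food_retail": 800,
    "tours_other": 600,
}
direct = lost_visitors * sum(per_trip.values())
indirect = direct * 0.5       # assumed knock-on effect on staff incomes and suppliers

print(direct / 1e9)               # ~32bn in direct spending lost
print((direct + indirect) / 1e9)  # ~48bn once knock-on effects are added
```

Double the visitor drop or the per-trip spend and you are near $100 billion, which is why the official $12.5 billion never sat right with me.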
This is merely the first cog. The second is the Canadian setting of fighters. They have set their minds on the Saab Gripen, as such I thought they came for
Silly me, Gripen means Griffin and a Hogwarts professor was eager to assist me in this matter, it was apparently
Although I have no idea how it can hide that proud flag in the clouds. What does matter is that it comes with “SAAB President and CEO Micael Johansson told CTV News that the offer is on the table and Ottawa might see a boost in economic development with the added positions. The deal could be more than just parts and components; Canada may even get the go-ahead to assemble the entire Gripen on its soil.” (Initial source: CTV News). This brings close to 10,000 jobs (a figure given by another source), but what non-Canadian people ‘ignore’ is that this will cost the American defense industry billions, and when these puppies (that is what they call little Griffins) are built in Canada, more orders will follow, costing the American defense industry a lot more. So whilst some sources say that “American tourism is predicted to start a full recovery in 2029”, I think that they are overly confident that the mess this administration is making will be solved by then. I think that with Vision 2030 and a few others, recovery is unlikely before 2032. And when you consider the news (at https://www.thetravel.com/fifa-world-cup-2026-usa-tourist-visa-integrity-fee-100-day-wait-time-warning-us-consul-general/) by Travel dot com, giving us ‘FIFA World Cup 2026 Travelers Warned Of $435 Fee And 100-Day Delay By U.S. Consul General’, there is every chance that FIFA will pull the 2026 setting from America, and it is my speculation that Yalla Vamos 2030 might host the 2026 edition and leave 2030 to whomever comes next, which is Saudi Arabia; the initial thought is that they might not be ready at that time, but that is mere speculation from me, and there is a chance (a small one) that Canada could step in and do the hosting in Vancouver, Toronto and Ottawa, but that would be called ‘smirking speculation’. But the setting behind these settings is that tourism will likely collapse in America, and at that point the banks of Wall Street will cancel the credit cards of America for a really long time, and that will set in motion a lot of cascading events all at the same time. Now, if you would voice that this would never happen, Tom’s Hardware gave us last week ‘Sam Altman backs away from OpenAI’s statements about possible U.S. gov’t AI industry bailouts — company continues to lobby for financial support from the industry’. If his AI is so spectastic (a combination of fantastic and spectacular), why does he need a bailout? And consider this: Microsoft once gave Builder.ai a value of a billion dollars and they blew that in under a year on over 600 engineers. So why didn’t Microsoft see that? 600 engineers leave a digital footprint and they have licensed software. Microsoft didn’t catch on? And as we see, the ‘unification’ of Microsoft and OpenAI has a connection. Microsoft has an investment in the OpenAI Group PBC valued at approximately $135 billion, representing a 27% stake. So there is a need to ask questions, and when that bubble goes, America gets to bail that Windows 3.1 vendor out.
As I see it, don’t ever put all your eggs in one basket, and at this point America has all the eggs of its ‘kingdom’ in one plastic bag, and I reckon that bag is showing rips and soon enough the eggs fall away into an abyss where Microsoft can’t get to them. The resources will flee to Google, IBM, Amazon and a few other places, and it is the other places that will wreak havoc on the American economy. So when the tally is made, America has a real problem and this administration called the storm over its own head, and I am not alone in feeling this way. When you consider the validation and verification of data, pretty much the first step in data-related systems, you can see that things do not add up and it will not take long for others to see that too. And in part the others will want to prove that THEIR data is sweet, and the way they do that is to ask questions of the data of others. A tell-tale sign that the bubble is about to implode, and at present it is given at ‘Global AI spend to total US$1.5 trillion’ (source: ARNnet), but that puppy has been blown up to a lot more, as the speculators claim they have a Great Dane, so when that bubble implodes it will cost a whole lot of people a lot of money. I reckon that it will take until 2026/2027 to hit the walls. Even as Forbes gave us less than 24 hours ago ‘OpenAI Just Issued An AI Risk Warning. Your Job Could Be Impacted’, and they talk about ASI (too many now know that AI doesn’t exist), where we see “Superintelligence is also referred to as ASI (artificial superintelligence) which varies slightly from AGI (artificial general intelligence) in that it’s all about machines being able to exceed even the most advanced and highly gifted cognitive abilities, according to IBM.” And we also get “OpenAI acknowledges the potential dangers associated with advancing AI to this level, and they continue by making it clear what can be anticipated and what will be needed for this experiment to be a safe success”. So with these statements, now consider the simple facts of data verification and data validation: when these parts are missing, any ‘super intelligence’ merely comes across as the village idiot. I can already see the Microsoft Copilot advertisement: “We now offer the Copilot with everyone’s favourite son, the village idiot Clippy II” (OK, I am being mean, I loved my Clippy in the Office 97 days), but I reckon you are now getting clued in to the disaster that is coming?
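For those wondering what I mean by those two steps, here is a minimal sketch; the field names, the reference list and the thresholds are made up for illustration and are not anyone’s production checks.

```python
# Minimal illustration of data validation (shape/values) and verification (against a reference).
records = [
    {"id": 1, "age": 34, "country": "AU"},
    {"id": 2, "age": -5, "country": "AU"},    # fails validation: impossible age
    {"id": 3, "age": 52, "country": "XX"},    # fails verification: unknown country code
]
known_countries = {"AU", "US", "NL", "SA", "AE"}  # reference list, illustrative only

def validate(record):
    # Validation: does the value even make sense on its own?
    return isinstance(record["age"], int) and 0 <= record["age"] <= 120

def verify(record):
    # Verification: does the value check out against a trusted reference?
    return record["country"] in known_countries

clean = [r for r in records if validate(r) and verify(r)]
print(len(clean), "of", len(records), "records survive")  # 1 of 3
```

Skip those two steps and the most expensive model in the world is still being fed garbage, which is the whole point of the village idiot remark.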
It isn’t merely the AI bubble, or the American economy, or any of these related settings. It is that they are happening almost at the same time, so a Nasdaq screen where all the firms are shown in deep red showing a $10 trillion write-off is not out of the blue. That setting had better be clear to anyone out there. This is merely my point of view and I might be wrong to read the data as it is, but I am not alone, and more people are seeing the fringe of the speculative gold stream showing its Pyrite origins. Have a great day, it is another 2 hours before Vancouver joins us on this Monday. Time for me to consider a nice cup of coffee (my personal drug of choice).
There are two things on my mind. The first one will be addressed after this. The second one was in my mind before I knew it. It is the stuff of nightmares. A setting that could collapse the entire Microsoft. Not for real, it is a story, a script and a far-fetched one at that, but the idea has merit. To unleash global fear and mistrust at the slap of a keyboard? What is there not to like. It would be epic to say the least, and why Microsoft? Simple, it has the most dodo-inspiring population (those dreaming of extinction). And as such I set the idea in motion, but only after I finished the other works. I put it here so that I do not forget it, and the keywords are optical fibre, blacklight and Diatomite Celite, the simple keywords that can topple a presumptuous great setting. But that is enough of that. You see, I missed the news about 3 days ago (had other things on my mind) but it flew past my eyes today and I caught it this afternoon. The Guardian (at https://www.theguardian.com/world/2025/nov/10/uae-says-it-will-not-join-gaza-stabilisation-force-without-clear-legal-framework) gives us ‘UAE refuses to join Gaza stabilisation force without clear legal framework’, and it caught me by surprise. The idea of a ‘stabilising force’ without a clear legal framework seems untenable (wherever it is held). So when we are given “Plans for a UN-mandated international stabilisation force charged with disarming Hamas inside Gaza face growing opposition after the United Arab Emirates said it would not participate because it did not yet see a clear legal framework for the force.”, what are the Americans and the UN doing not setting a clear legal framework for this? With that we are also given “The UAE’s decision, announced by the senior envoy Dr Anwar Gargash at a conference in Abu Dhabi, reflects Arab doubts about the terms of a US-drafted resolution already distributed to diplomats at the UN in New York. The draft places an onus on a US-directed stabilisation force to be the principal means of imposing security in Gaza after Israel has left the territory.” My first question becomes: what is the UN doing? For years they have been hoping for peace and now it seems they haven’t even considered setting a legal framework for those in that mess? As for the second issue, the idea comes that facing Hamas needs at least a legal framework; if not, you are fighting lawlessness with more lawlessness, and to see that come from America is not that difficult to observe, but to see that setting come from the UN is a bit ghastly. So as such I would agree with the statement by “Dr. Anwar Gargash said: “The UAE does not yet see a clear framework for the stability force and under such circumstances will not participate, but will support all political efforts towards peace – and remain at the forefront of humanitarian aid.”” And as we consider this, the setting of Gaza is becoming less and less stable. So as I read “Neither the UN nor the 15-strong security council are given a supervisory role over the stabilisation force, overseeing the implementation of the resolution, a point largely overlooked by the draft text. Nothing is specified about the funding of this stabilisation mission, which, according to the Americans, should be largely borne by Gulf states, with Saudi Arabia taking the lead.”, I wonder why the UN didn’t set up a legal framework, for agreement or for alteration, but as I see it none of that seems to have been done, or at least the Guardian fails to report on it, but that is no attack or opposition to the Guardian.
It merely took me by surprise and made me wonder why we are paying millions upon millions to the UN when we see a (seeming and alleged) flaw like this.
So why am I wondering about this? I see the world blaming Israel for the slaughter Hamas instigated, and I also see the UN failing in its duty to cater to any solution. The failures seem to be adding up, but that is my (with an absolute lack of expertise on matters of diplomacy and the function of the United Nations) view on the matter. So what gives? And in all this, I completely agree with the position that Dr. Anwar Gargash is taking.
So have a great day and consider the legal framework you face at breakfast (everyone for himself/herself) and don’t take away the Labneh until you see the whites of their eyes. But that is my flaky sense of humour. For now I have to consider the idea that there is a cable under the Indian Ocean with my sense of innovative humour. Have fun everyone.
Well that is one way to say it. I am tired of all these AI bubble stories, I stand by my point of view and there is a slim chance that I am wrong, but to be honest. I very much fear that I will become like Jamie Shipley getting hauled over at the end by Standards and Poor (see the Big Short) and I am tired. This blog will bear out the correctness of my view over time, too many are in denial and I don’t care about the losses that they incur, they are not my losses and I told my piece on this. Time to move on and I still feel strongly about a possible solution for the Apple vision pro (or the Meta Quest Pro) and I created two solutions that could bring either a rather hefty revenue cycle. One in linguistic and one in entertainment (Based on Horizons Zero Dawn/Forbidden West), the fact is that the Vision Pro was discontinued production in late 2024, but the story behind its failure tells us more about the future of mixed reality than any marketing presentation ever could. And that is one reality that some can sprout, but I saw the idea for a global linguistic solution (via Ubisoft and Guerrilla Software) two solutions that could have worked out if people at Apple had any brain and I put it here as early as November 9th 2024, in ‘the easy lesson’ (at https://lawlordtobe.com/2024/11/09/the-easy-lesson/) but seemingly Apple took a runner, as such the idea would now fall to the Meta Quest teams (3 or Pro) and I reckon that Meta would love a linguistic solution they could vendor to the world (together with Ubisoft), I reckon that catering to billions would please both those teams. As such I was overthinking the two solutions I concocted. More in the trend of what books could be used to cater to the lessons and my first thought was to use ‘Famiglia Romana’ to introduce into the Latin setting, but also into the Italian side and even into the other languages. As such each language also include another language which they get for free. There is nothing like a free sample to get the juices going. And from there every language that has one book that is transferred to all other languages. As my setting catered to French, Italian, English, Portuguese, Japanese, Latin and Spanish (AC Hexe) and there is the crux, why wouldn’t they use the idea? How many people are trying to learn one of these languages? The idea that DML is being used for teaching languages is novel (to say the least and 80% of the solution already exist (for all languages mentioned) and that is where these players (as I personally see it) lack vision. So at what point do they look at any idea and think “That would never work” or perhaps it is connected to the idea that a little more work could unfold a lot more revenue and as Far as I now it, the people with pupils like dollar signs (salespeople) are never shy for inviting revenue to their front door, especially as 80% is already done. And they merely need to include parts of the game and it is all covered with NPC’s. So I don’t see the point of walking away from this. And it is not that I am a coward, but Ubisoft already OWNS the IP, so there is little I can do. The other IP (Guerrilla) might open another door (beyond the gaming side). They could set the people of these places (Carja, Banuk, Oseram) to Native Indian languages and there they could invite a whole new linguistic side and even propagate these languages to the world. 
I might have initially designed the game specifically for the Meta Quest, based loosely on Pokemon Photo, but there was a lot more to be had there, and the fact that I can see these IPs evolve while the owners did not is pretty laughable.
And let’s be clear, there is life and there are software solutions beyond AI, and the sooner these people realise this, the better off they are (as Google and Amazon decided to leave in excess of 6 billion annually on the floor). It is a setting where we see that the French language has Sans Famille (Hector Malot), I read it in the ’70s in Dutch, so I knew of the book and I loved it, but there is also Les Trois Mousquetaires (The Three Musketeers) by Alexandre Dumas and Les Misérables (Victor Hugo); that makes merely three books and there are so many more, and as such the French version takes shape. And when you get through the languages as I saw them released, with 5-6 books per language bound into mini games within these games, you get a true linguistic solution. And the DML solutions we have in customer support could measure the correctness of the participant. One of several solutions I thought through in this approach of software-induced linguistic training skills. The fact that I saw through the solution that Ubisoft has while they remain at least partially blind to it is even more humorous, and that is beside Meta being seemingly asleep at the wheel.
So if there is nothing else, it is time to do some snoring as it is almost 03:00 in the morning. Have a great day today as for most of you it is Thursday and I am already on Friday.
That is the setting and I introduced the readers to this setting yesterday, but there was more and there always is. Labels are how we tend to communicate; there is the label of ‘Orange baboon’, there is the label of ‘village idiot’ and there are many more labels. They tend to make life ‘easy’ for us. They are also the hidden trap we introduce to ourselves. In the ‘old’ days we even signified Business Intelligence by this, because it was easy for the people running these things.
An example can be seen in
TABLES / v1 v2 v3 v4 v5 BY (LABELS) / count.
And we would see the accompanying table with, on one side, completely agree, agree, neutral, disagree and completely disagree, if that was the 5-point labelling setting we embraced, and as such we saw a ‘decently’ complete picture and we all agreed that this was how it had to be.
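To make that tangible, here is a minimal sketch in Python (not the syntax above; the item names v1 to v5, the 1-5 codes and the sample answers are purely illustrative) of the kind of count table such a statement hands back:

import pandas as pd

# Illustrative 5-point Likert answers for five survey items (v1..v5);
# the codes 1..5 are ordinal labels, not measurements.
labels = {1: "completely agree", 2: "agree", 3: "neutral",
          4: "disagree", 5: "completely disagree"}
data = pd.DataFrame({
    "v1": [1, 2, 2, 5, 3],
    "v2": [2, 2, 3, 4, 1],
    "v3": [1, 1, 2, 5, 5],
    "v4": [3, 4, 4, 2, 1],
    "v5": [5, 4, 3, 2, 1],
})

# Count per label for every item, the table the old BI reports would show.
counts = (data.apply(lambda col: col.map(labels).value_counts())
              .reindex(list(labels.values()))
              .fillna(0)
              .astype(int))
print(counts)

It looks complete, which is exactly the trap the next part is about.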
But the not so hidden snag is that, in the first place, these labels are ordinal (at best) and Likert scales (their official name) are not set in a scientific way; there is no equal, calibrated distance between the numbers 1, 2, 3, 4, 5. That is just the way it is. And in the old days this was OK (as the feeling went). But today, in what they call the AI setting and I call NIP at best, the setting is too dangerous. Now, set this by today’s standards.
The simple question “Is America bankrupt?” gets all kinds of answers and some will quite correctly give us “In contrast, the financial health of the United States is relatively healthy within the context of the total value of U.S. assets. A much different picture appears once one looks at the underlying asset base of the private and public economy.” I tend to disagree, but that is me without my economics degrees. But in the AI world it is a simple setting of numbers and America needs Greenland and Canada to keep up the contention that “the United States is relatively healthy within the context of the total value of U.S. assets”; yes, that would be the setting, but without those two places America is likely close to bankrupt and the AI bubble will push them over the edge. At least that is how I see it, and yesterday I gave one case (of the dozen or so cases that will follow in 2026); in that stage this startup is basically agreeing to a larger than 2 billion settlement. So in what universe does a startup have this money? That is the constriction of AI, and in that setting of unverified and unscaled data the situation gets to be worse. And I remember an answer given to me at a presentation, the answer was “It is what it is” and I kinda accepted it, but an AI will go bonkers and wrong in several ways when that is handed to it. And that is where the setting of AI versus NIP (Near Intelligent Parsing) becomes clear. NIP is merely a 90’s chess game that has been taught (trained) every chess game possible and it takes from that setting, but the creative intellect makes an illogical move and the chess game loses whatever coherency it has; that move was never programmed, and that is where you see the difference between AI and NIP. The AI will creatively adjust its setting, the NIP cannot, and that is what will set the stage for all these class actions.
The second setting is ‘human’ error. You see, I placed the Likert scale intentionally, because in between the multitude of 1-5 scales there is likely one variable that was set to 5-1 and the programmers overlooked it, and now when you look at these AI training grounds at least one variable is set in the wrong direction, tainting the others and messing with the order of the adjusted personal scales. And that is before we get to the results of CLUSTER and QUICKCLUSTER, where a few more issues are introduced to the algorithm of the entire setting, and that is where the verification of data becomes imperative, and at present it is not happening.
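A small sketch of that trap (Python again; the item names q1 to q3, the sample answers and the 6 - x reverse-coding on a 1-5 scale are assumptions for illustration, not anyone’s production pipeline):

import pandas as pd

# Three items meant to run 1-5 in the same direction; q3 was coded 5-1 by mistake.
df = pd.DataFrame({
    "q1": [1, 2, 2, 4, 5, 5],
    "q2": [1, 1, 3, 4, 4, 5],
    "q3": [5, 4, 4, 2, 1, 1],   # the reverse-coded item
})

print(df.corr())   # q3 correlates strongly negatively with q1 and q2: the red flag

# Repair before any CLUSTER-style step: on a 1-5 scale, 6 - x flips the direction back.
df["q3"] = 6 - df["q3"]
print(df.corr())   # all items now point the same way

Miss that one check and every distance the clustering computes afterwards is quietly wrong.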
So here is a sort of random image, but the question it needs to raise is: what makes these different sources in any way qualified to be a source? In this case, if the data in Ask Reddit is skewed, 93% of the data is basically useless and that is missed on a few levels. There are high quality data sources, but these are few and far between; in the meantime these sources get to warp any other data we have. And if you are merely looking at legacy data, there is still the Likert scale data you had in your own company, and that data is debatable at best.
Labels are dangerous and they are inherently based on the designer of that data source (possibly even long dead), and the work tends to be done in his or her early stages of employment, making the setting even more debatable as it was ‘influenced’ by greedy CEOs and CFOs who had their bonus in mind. A setting mostly ignored by all involved.
As such, are you surprised that I see the AI bubble for what it is? A dangerous reality coming our way in sudden, likely unforeseen ways, and it is the ‘unforeseen way’ that is the danger, because when these disgruntled employees talk to those who want to win a class action, all kinds of data will come to the surface and that is how these class actions are won.
It was a simple setting I saw coming a mile away and whilst you wandered by I added the Dr. Strange part; you merely thought you had the labels thought through, but the setting was a lot more dangerous and it is heading straight for your AI dataset. All wrongly thought through, because training data needs to have something verifiable as ‘absolutely true’, and that is the true setting, and to illustrate this we can merely make a stop at Elon Musk Inc. Its ‘AI’ Grok has the almost perfect setting. We are given from one source “The bot has generated various controversial responses, including conspiracy theories, antisemitism, and praise of Adolf Hitler, as well as referring to Musk’s views when asked about controversial topics or difficult decisions.” Which is almost a dangerous setting towards the people fueling Grok in a multitude of ways, and ‘Hundreds of thousands of Grok chats exposed in Google results’ (at https://www.bbc.com/news/articles/cdrkmk00jy0o) where we see “The appearance of Grok chats in search engine results was first reported by tech industry publication Forbes, which counted more than 370,000 user conversations on Google. Among chat transcripts seen by the BBC were examples of Musk’s chatbot being asked to create a secure password, provide meal plans for weight loss and answer detailed questions about medical conditions.” Is there anybody willing to do the honors of classifying that data (I absolutely refuse to do so)? And I already gave you the headwind in the above story. In the first, how many of these 370,000 users are medical professionals? I think you know where this is going. And I think Grok is pretty neat as a result, but it is not academically useful. At best it is a new form of Wikipedia, at worst it is a round data system (trashcan), and even though it sounds nice, it is as nice as labels can be, and that is exactly why these class cases will be decided out of court, and as I personally see it, when these hit, Microsoft and OpenAI will shell out trillions to settle out of court, because the court damage will be infinitely worse. And that is why I see 2026 as the year the greed driven get to start filing to fill their pockets, because the mental hurt that is brought to court is as academic as a Likert scale, not a scientific setting among them, and the pre-AI setting of mental harm reads as “‘Mental damage’ in court refers to psychological injury, such as emotional trauma or psychiatric conditions, that can be the basis for legal claims, either as a plaintiff seeking compensation or as a criminal defendant. In civil cases, plaintiffs may seek damages for mental harm like PTSD, depression, or anxiety if they can prove it was caused by another party’s negligent or wrongful actions, provided it results in a recognizable psychiatric illness.” So as you see it, is this enough or do you want more? Oh, screw that, I need coffee now and I have a busy day ahead, so this is all you get for now.
Have a great day, I am trying to enjoy Thursday; Vancouver is well behind me on this effort. So there is a time scale we all have to adhere to (hidden nudge); as such, enjoy the day.