Tag Archives: OpenAI

Orchestration

That was on my mind when I was considering a few settings. Orchestration by the media, no less. To get the full view of this, I need to explain a few items. The media has NO responsibility to print (or broadcast) on any given subject. And there is something called defamation by omission.

So it does exist, but the setting is extremely difficult to prove. There are more provisions, but they will not be applicable to this setting. As such I leave them by themselves. So two weeks ago we got all those Code Red reports regarding OpenAI; they were not telling us that they would have to WOW the audience, or was that me saying that? So a few days ago OpenAI released GPT-5.2 and as far as I can tell there are several dozen articles, but only Wired gives us some of the goods.

With: “OpenAI has introduced GPT-5.2, its smartest artificial intelligence model yet, with performance gains across writing, coding, and reasoning benchmarks. The launch comes just days after CEO Sam Altman internally declared a “code red,” a company-wide push to improve ChatGPT amid intense competition from rivals. “We announced this code red to really signal to the company that we want to marshal resources in one particular area, and that’s a way to really define priorities,” said OpenAI’s CEO of applications, Fidji Simo, in a briefing with reporters on Thursday. “We have had an increase in resources focused on ChatGPT in general.”” Publication and presentation talk, Sam Altman is great at that. But the media? Where are they? Who actually looked at them for the last few days? Where are those articles? 

I am not out for blood, or out to get Sam Altman; I am out to get the media. They are all about the danger setting, but this is getting out of balance. The media loves its digital dollar raking, but enough is enough. They need to fess up to the settings and do something about it all. If ChatGPT 5.2 is great, fine. I don’t mind, but I want to get the goods and the media is falling short in several ways. Venezuela, OpenAI, Israel, Saudi Arabia, and that list goes on: they are (as I personally see it) catering to their need for digital dollars as long as it agrees with the stakeholders they are reporting to.

The Wall Street Journal (at https://www.wsj.com/articles/openai-updates-chatgpt-amid-battle-for-knowledge-workers-995376f9) gives us “The release comes about a week after Chief Executive Sam Altman declared a “code red” effort to improve the quality of ChatGPT and to delay development of some other initiatives, including advertising. The company has been on high alert from the rising threat of Google’s latest Gemini AI model, which outperformed ChatGPT on certain benchmarks including expert-level knowledge, logic puzzles, math problems and image recognition. The new OpenAI model was described by the company as better at math, science and coding benchmarks.” And as I see it, nearly all the media gives exactly the same lines and no one is actually looking into how good ChatGPT is now, or even whether it is or is not. There are investors with trillions on the line and the media is playing the “distancing game”; only when things go bad will they trip over each other giving us the lines, and at that point the stakeholders can like it or lump it.

Is no one noticing that part of the equation? 

So, is GPT-5.2 the WOW result everyone is banking on? Did it defeat Gemini 3? I don’t know, but the media should have been all over this and they aren’t. As I see it, this is a form of orchestration, but towards what I don’t know. Is it about the trillions invested (I see that as a liability towards investors)? Is it about the absence of excellence (I see that as a liability towards both Google and OpenAI)? And then there is the liability towards the readers or listeners of whoever they serve. So this isn’t defamation, because in all, the media did nothing really wrong. But they sold us short whilst claiming they are there for us, and they are not.

So is it me? Or is there a larger setting that is ignored by too many?

I know that some will not agree with me, but after the days of the Code Red, where are the media results of what OpenAI/Sam Altman produced? Not the same hundred words they all seemingly give us, but the real results, the real tests and the real impressions. I haven’t seen one result from them. Even with my limited knowledge (I never used ChatGPT) I could drum up a few tests in seconds and I would put both Gemini 3 and ChatGPT 5.2 on the road. I could let them loose on a few of my articles and see what they both come up with and how long it takes them. Something EVERY baboon working in media (sorry, not sorry) could have come up with in mere seconds; isn’t it lovely that they never came up with that? Think about that for a moment when they give you another runaround on Oracle, like Quartz with ‘Oracle’s big AI dreams are freaking out Wall Street’ and Forbes with ‘Oracle Stock Down 14%. Why Higher Risk Makes $ORCL A Sell’, all whilst no one is looking at the true and real value of Oracle. No, the investors must be spooked (for whatever reason). So you all have a great day, we are nearly all in Saturday now and I am a mere 170 minutes away from Sunday.
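
As an aside, the side-by-side test I have in mind really is only a handful of lines. A minimal sketch, assuming the openai and google-generativeai Python clients, API keys in the environment, and placeholder model names (the real identifiers will differ):

```python
# Feed the same article to both models and time them. Model names below are
# placeholders, not confirmed identifiers; keys are read from the environment.
import os
import time

from openai import OpenAI
import google.generativeai as genai

article = open("my_article.txt", encoding="utf-8").read()
prompt = f"Summarise this article and list its three weakest claims:\n\n{article}"

# OpenAI side
t0 = time.time()
gpt = OpenAI().chat.completions.create(
    model="gpt-5.2",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
gpt_seconds = time.time() - t0

# Google side
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
t0 = time.time()
gem = genai.GenerativeModel("gemini-3").generate_content(prompt)  # placeholder model name
gem_seconds = time.time() - t0

print(f"GPT ({gpt_seconds:.1f}s):\n{gpt.choices[0].message.content}\n")
print(f"Gemini ({gem_seconds:.1f}s):\n{gem.text}")
```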

Leave a comment

Filed under Finance, IT, Media, Science

A Peter Sellers world

That is what hit me when I saw ‘How I Learned to Stop Worrying and Love the Bubble’ (source: Bloomberg), which plays on Dr. Strangelove, where we get “How I Learned to Stop Worrying and Love the Bomb”. It started a larger set of thoughts.

I didn’t use that article as Bloomberg uses a paywall. And it starts with yesterday’s article in FXLeaders (at https://www.fxleaders.com/news/2025/12/07/oracles-ai-bubble-bursts-peak-glory-at-345-now-a-217-hangover/), where under ‘Oracle’s AI Bubble Bursts: Peak Glory at $345, Now a $217 Hangover’ we are given “ORCL ended the week at $217.58, up 1.52 percent, but it still had a 37 percent hangover from its 52-week high of $345.72. This is a microcosm of growing concerns about debt loads, AI infrastructure spending, and whether the “infinite demand” narrative for AI compute can withstand real-world economics.” As well as “Oracle’s recent decline in stock value reflects broader market concerns regarding the high valuations of AI-related companies, as its forward price-to-earnings (P/E) ratio exceeds 33. The company projects revenues of $166 billion from cloud infrastructure and $20 billion. Investors adopted a “sell the news” mentality, raising questions about the sustainability of these forecasts. Oracle’s fundamentals remain solid. The company experienced 52% growth in cloud infrastructure and has $455 billion in remaining performance obligations (RPO), largely due to its partnership with OpenAI. Currently, the stock is trading at 13.9 times projected earnings for the end of this decade, leading some investors to view the decline as a potential buying opportunity.”

As I see it, Oracle passed their burst-bubble setting. And whilst we see ups and downs, I would unreservedly trust Oracle stock to be a beacon of steadiness. It might not be sexy, but it is a trustworthy sign for those who need a decent return on investment.

Or as Peter Sellers would say:
“As long as the roots are not severed, all is well. And all will be well in the garden. Yes! There will be growth in the spring!” (Source: Being There). It was a better time, and weirdly enough the age of Peter Sellers applies to the days that 2025 brings. And from that setting we get to MyNews (at https://sc.mp/ihj4g) where we see ‘Why 2026 will be the year AI hype collides with reality’, an opinion piece that gives me “The reckoning ahead for the AI bubble promises to reprice expectations, force economic trade-offs and call out circular deals” but the stronger setting is given with “Speculative assumptions guiding trillions of US dollars in AI investments are colliding with real-world obstacles. Escalating costs, stratospheric stock valuations, tenuous collaborations and energy bottlenecks are compounding the inevitable challenges when new technologies struggle for profitability. Many are worried the bubble may be bursting. Morgan Stanley projects that the cumulative amount spent worldwide on data centers could exceed US$3 trillion by year-end 2028. China’s AI investment could hit 700 billion yuan (US$99 billion) this year, 48 per cent more than last year, according to Bank of America, with the government supplying US$56 billion.” There is a setting for both ‘AI investments are colliding with real-world obstacles’ and ‘worldwide on data centers could exceed US$3 trillion by year-end 2028’. The weird feeling I have is that it will not get this far; this entire setting will implode before the end of 2027. Investors will stop feeling loving towards the boom that is not coming and will start feeling pressured by the terms required, which will grow into an erratic setting driven by the need for greed, and that setting comes along long before 2027 is reached.

Then we get to AOL, which gives us (at https://www.aol.com/finance/goldman-sachs-issues-warning-ai-103249744.html) ‘Goldman Sachs issues a warning to AI stock investors’, where we are given ““Our discussions with investors and recent equity performance reveal limited appetite for companies with potential AI-enabled revenues as investors grapple with whether AI is a threat or opportunity for many companies. While we expect the AI trade will eventually transition to Phase 3, investors will likely require evidence of a tangible impact on near-term earnings to embrace these stocks. Unlike Phase 2, there will likely be winners and losers within Phase 3,” Goldman Sachs US equity strategist Ryan Hammond wrote in a new note on Friday. Hammond thinks AI investment as a percentage of capital expenditures could be nearing a climax. In turn, that sets the stage for overly upbeat AI investors to be let down if earnings don’t come in strongly in future quarters.” As I see it, when we are given these settings everyone seems to get concerned, so then we also get “Salesforce (CRM) and Figma (FIG) got drilled on Thursday after their earnings reports didn’t wow. It’s clear that the hype on their earnings calls wasn’t enough to paper over soft areas of the earnings reports. Growing concern on the Street centers around the pace of AI demand by corporations, given what looks to be a slowing US economy.” As I stated before, the need for greed overwhelmed everything. When the setting of NIP (Near Intelligent Parsing) is not clearly laid out and it is caught in the waves of boards of directors and investors believing that they have the AI solution everyone is looking for, you get a larger setting. Consider that, and consider what happens when OpenAI “fails to wow” the investors, or even merely delays, and it all comes to a large shutdown. And that is even before we see 9 News giving us “A Sydney data centre that will host ChatGPT is being hailed as a win for Australia, but an expert warns the country lacks the energy supply needed to power it reliably”. I stated a few months ago that there would be an energy problem on numerous levels, and now we are seeing that whilst we are dealing with the fallout of other settings. And less than an hour ago Deutsche Welle gives us ‘Google raises AI stakes as OpenAI struggles to stay on top’ with “Given those strengths, Adrian Cox sees “a very high probability” Google will have the leading model at least into next year — not OpenAI. OpenAI’s priority, he says, is identifying a business model capable of funding a user base that could soon approach a billion people per week.” This is not about OpenAI, I did that already; the larger frame is set in the perception of whatever the bubble is, and I believe that there are two factors that the media doesn’t want to include, or is avoiding. First there are the doomsayers trying to burst confidence early in favour of short-term gains, and then there are people trying to short whatever they can so that they can get another jolt of profit, and they are all out trying to get social media on their side.

So if this is the prologue of what is about to unfold we are in for a jolly good time, and as I see it, there is a chance that Christmas for some will be a disaster.

I wanted to include more of Peter Sellers, like The Party or The Pink Panther, but I am running out of juice. But there was one more thing and I got it from the Independent about an hour ago. It states ‘OpenAI rushes out new AI model in ‘code red’ response to fears about Google’ (at https://ca.news.yahoo.com/openai-rushes-ai-model-code-105822611.html) and that was the snippet I was hoping for. With “The ChatGPT creator will unveil GPT-5.2 this week, The Verge reported, after OpenAI CEO Sam Altman declared a “code red” situation following the launch of Google Gemini 3 last month. Google’s latest AI model surpassed ChatGPT in several benchmark tests, including abstract and visual reasoning, as well as advanced knowledge across scientific disciplines.” But that comes in a setting. You see, I stated in ‘TBD CEO OpenAI’ two days ago (at https://lawlordtobe.com/2025/12/06/tbd-ceo-openai/) “in a software release any of a hundred things can go wrong and they all need to go right at present.” And when things are rushed out, things will go wrong. But there is a snag: for this to happen The Independent article had to be correct, and as they are the only one giving us this, there is no real verification available. But when you are in a stage when bubbles go boom (or plop) all the available facts become important. And I massively wish that a Peter Sellers setting would help me out. And perhaps in view of this, his classic phrase “It’s no matter. When you’ve seen one Stradivarius, you’ve seen them all.” Especially when looking at NIP software. But that is also the snag. I have seen excellent applications and I have seen lesser ones. I reckon that it amounts to who plays the violin: if it is a creative person, that person will find new life in whatever they apply NIP to; if it is a salesperson, it will be about maximizing greed, and that setting tends to have limitations on several degrees. In addition we are given “The new model was originally scheduled to launch in late December, but will now be released as early as 9 December.” I understand the pressures that come with this, but they had better understand that an early launch brings dangers and investors don’t really like to be spooked (they don’t like dangers either). What we see is open to interpretation and it is a valid thought that my views are also open to interpretation.

So in this I leave you all with a presenting view, not unlike Peter Sellers would say: “To see me as a person on screen would be one of the dullest experiences you could ever wish to experience”, and as I have never been in a movie (at least I don’t remember being in one) you are spared that dull experience. So have a great day and don’t forget to love the bubble (if you haven’t invested your wealth there).

Leave a comment

Filed under Finance, IT, Media, Science

TBD CEO OpenAI 

That is the thought I had yesterday, 5 hours after I wrote my piece. I still saw the news appear all over the media, some of it getting a ridiculous amount of attention, so I decided to take another look at some of this. First there was Business Insider (at https://www.businessinsider.com/openai-code-red-chatgpt-advertising-google-search-gemini-2025-12) giving us ‘OpenAI’s Code Red: Protect the loop, delay the loot’ where we see “Focus on improving ChatGPT, and pause lower-priority initiatives. The most striking pause is advertising. Why delay such a lucrative opportunity at a moment when OpenAI’s finances face intense scrutiny? Because in tech, nothing matters more than users.” This was followed by “Every query and click fed a feedback loop: user behavior informed ranking systems, which improved results, which attracted more users. Over time, that loop became an impenetrable moat. Competing with it has proven nearly impossible.

ChatGPT occupies a similar position for AI assistants. Nearly a billion people now interact with it weekly, giving OpenAI an unmatched new window into human intent, curiosity, and decision-making. Each prompt and reply can be fed back into model training, evaluations, and reinforcement learning to strengthen what is arguably the world’s most powerful AI feedback loop.” All this makes sense, it comes with the nearly mandatory “Google’s Gemini 3 rollout has lured new users. If ChatGPT’s quality slips or feels cluttered, defecting to Google becomes easier. Introducing ads now risks exactly that. Even mildly irritated users could view ads as one annoyance too many.” Whilst in the background we are ‘sensitive’ to “OpenAI has already committed to spending hundreds of billions of dollars on infrastructure to serve ChatGPT at a global scale. At some point, those bills will force the company to monetize more aggressively.

If OpenAI manages to build even half of Google’s Search ads business in an AI-native form, it could generate roughly $50 billion in annual profit. That’s one way to fund its colossal ambitions.” This gives OpenAI a two-sided blade in the back. It was a good ploy, but that ploy is deemed to be counterproductive and I get that, but dropping the ads might sting with the investors, as it was the dimes that they were seeing coming their way, and ChatGPT needs to make a smooth entry all the way to the next update, which will be near impossible in several ways. Google has the inside track now and whilst there are a few settings that are ‘malleable’ for the users, the smooth look is essential for ChatGPT to continue. And that is before others start looking at the low-quality data it verifies against. Google has, as I see it, exactly the same problem, but ChatGPT gets it now, in advance.

Newcomer (at https://www.newcomer.co/p/openais-code-red-shows-the-power) gives us “In truth, as Newcomer’s Tom Dotan wrote back in April, Google, with all of its formidable assets, was never very far behind. Nor is it currently very far ahead. Anthropic too has always been essentially neck-and-neck with OpenAI on the core technology. The capabilities of the big foundation models, and even some lighter ones like DeepSeek, are broadly similar. Marc Benioff, himself a skilled practitioner in the arts of attention, even claimed this week that the big models will be interchangeable commodities, like disk drives. Yet the perception of who’s on top matters quite a lot at a moment when consumers, enterprise technology buyers, and investors are all deciding where to place some highly consequential long-term bets. That brings us back to Altman’s “Code Red.”” That is a truth in itself, but then there is the next part: “while the alarm came in a company-wide memo that wasn’t officially announced publicly, we can stipulate that the “leak” of the memo, if not necessarily orchestrated, was almost certainly part of the plan. A media maestro like Altman surely knew that a memo going out to thousands of employees with charged language like “Code Red” was all but guaranteed to make its way to the press. Publicizing a panicked internal reaction to a competitor’s new product might seem like a counter-intuitive way to maintain your reputation as the industry leader.” As I see it, someone in Microsoft marketing earned his dollars that day, but this is a personal feeling, I have no data to back it up. It is now up to Sam Altman to deliver his ‘new’ version in the coming week and it had better be a great new release, or, as I see it, there will be heads rolling all over the floor, and Sam Altman knows that the pressure is up. I don’t think he is scared, as some media say, but he is definitely worried, because this setting will set the record of $13 billion straight, into or away from Microsoft, and Sam Altman knows this. As such he is probably a little worried, and in a software release any of a hundred things can go wrong and they all need to go right at present.

Then we get “Altman and OpenAI are so good at making news that it’s sometimes hard to tell what’s real.” So, isn’t that the setting all the time? I have always seen Sam Altman as a bad second-hand car salesman. That is my take, but I have had a healthy disgust for salespeople for over 30 years. I am a service person: technical support, customer support. That was always my field. I am not against sales, merely against cleaning up their messes. At times this comes with the territory, shit happens, but those salespeople overselling something just so that they can fill their pipeline and make their numbers are not acceptable to me. To illustrate this, a little setting (devoid of names and brands): “A salesperson came to me with what he needed. We could not do that and I told him, so off he goes calling every technical support person on the planet until he found one that agreed with him, and then he sold the solution to the customer and hung that person’s name on it. I had to clean up the mess and set up a credit invoice, but only after I went through the whole nine yards, making it take over 30 days, ensuring that he kept his commission.” That is the type I am disgusted with, because the brand as a whole suffers, all for the need of greed. It is short-sighted thinking. It goes nowhere, but his monthly revenue was guaranteed. And I feel that Sam Altman is not completely like that, but it is the ‘offset’ of salespeople that I carry within me. For me, protecting the product and the customer are first and foremost on my mind.

Then we get Futurism (at https://futurism.com/artificial-intelligence/openai-is-suddenly-in-major-trouble) where we see ‘OpenAI Is Suddenly in Major Trouble’. OK, is this true? We are given “The financial stakes are almost comical in their magnitude: The company is lighting billions of dollars on fire, with no end in sight; it’s committed to spending well over $1 trillion over the next several years while simultaneously losing a staggering sum each quarter. And revenues are lagging far behind, with the vast majority of ChatGPT users balking at the idea of paying for a subscription.” I don’t agree with this setting. You either pay, or you see advertisements; that is the setting. There are no free rides and the sooner you realise this, the easier this gets. Then we are given “Meanwhile, Google has made major strides, quickly catching up with OpenAI’s claimed 800 million or so weekly active ChatGPT users as of September. Worse yet, Google is far better positioned to turn generative AI into a viable business — all while minting a comfortable $30 billion in profit each quarter, as the Washington Post points out.” I agree with the setting the Washington Post sets out and Google does have an advantage, but that still relies on Sam Altman not getting his new version seen as stellar in the coming week. He still has a much larger issue, but that is for later. All this comes at the price of being in the frontrunner team. Easy does it, there is no other way and the stakes are set rather high. So then we are given “In a Thursday note, Deutsche Bank analyst Jim Reid estimated staggering losses for OpenAI amounting to $140 billion between 2024 and 2029.” This is probably true, but where are the numbers? $140 billion over 5 years is one thing, but what revenue is set against it? Because if this is still set against a revenue number that OpenAI keeps making, they are doing decently sweet. The numbers were never in debate, the return on investment was, and these stakes are high and there is no debating that; these numbers are either given or they are not.

Then we are given something that makes sense: ““OpenAI may continue to attract significant funding and could ultimately develop products that generate substantial profits and revolutionize the world,” he wrote, as quoted by WaPo. “But at present, no start-up in history has operated with expected losses on anything approaching this scale.” “We are firmly in uncharted territory,” Reid added.” I agree, in several ways, but the revenue is not given, and as such the real deal is absent. Consider YouTube: did anyone see the upside of a $1.65 billion acquisition 20 years ago? It now generates $36.1 billion in annual revenue (2024). Microsoft and OpenAI are banking on that same setting, and Microsoft needs it to get a quality replacement for Clippy; they are banking on ChatGPT, and this will only happen if they win over Google, and I have my doubts on this. There is no real evidence because the new version isn’t ready yet, but it really needs only one hitch to make it all burn down, and Altman knows this. The numbers, or better, the statistics, are not on his side. And as I haven’t seen a decent software prize fight for a while, I am keeping my thumbs up for Altman (I am however a through and through Google guy). This is a fight worth watching and I am wondering how it might evolve over the next week.

The stakes are high, the challenge is high; let’s see if Sam Altman rises to the occasion. It’s almost Sunday for me, so have a great day you all. I reckon that Ryan Reynolds is about 6 hours from breakfast in Vancouver now.

1 Comment

Filed under Finance, IT, Media, Science

The rockstar wannabe

There is a setting we at times ignore. When so-called ‘important’ people hide behind movie settings, like Sam Altman does when he calls for ‘Code Red’ (at https://www.theguardian.com/technology/2025/dec/02/sam-altman-issues-code-red-at-openai-as-chatgpt-contends-with-rivals), I tend to get frisky and a little stir crazy. But as we see in the Guardian, we are given “According to a report by tech news site the Information, the chief executive of the San Francisco-based startup told staff in an internal memo: “We are at a critical time for ChatGPT.”

OpenAI has been rattled by the success of Google’s latest AI model, Gemini 3, and is devoting more internal resources to improving ChatGPT. Last month, Altman told employees that the launch of Gemini 3, which has outperformed rivals on various benchmarks, could create “temporary economic headwinds” for the company. He added: “I expect the vibes out there to be rough for a bit.”” So after all the presentations and the posturing by OpenAI’s CEO Sam Altman, we are now confronted with the CEO of Google, Sundar Pichai, smirking and devouring a beef vindaloo with naan bread, casually passing Sam Altman by and overtaking his setting of ChatGPT with Gemini 3.

We are given “Marc Benioff, the chief executive of the $220bn (£166bn) software group Salesforce, wrote last month that he had switched allegiance to Gemini 3 and was “not going back” after trying Google’s latest AI release. “I’ve used ChatGPT every day for 3 years. Just spent 2 hours on Gemini 3. I’m not going back. The leap is insane – reasoning, speed, images, video … everything is sharper and faster. It feels like the world just changed, again,” he wrote on X.” And if a BI guy like Marc Benioff makes that jump, a lot of others will too, and that is what is truly frightening to Microsoft, which owns a little below 30% of all this. It is nice to have a DML solution that has a population of zero. OK, not zero, but ridiculously small, because as ever (and not surprisingly) Google is showing its brilliance and overtook the wannabe.

So whilst Sam Altman decided that he was the next Elon Musk, we see (at https://gizmodo.com/sam-altman-wants-his-own-rocket-company-2000695680) that ‘Sam Altman Wants His Own Rocket Company’ and we see here “Altman was reportedly considering investing billions into Stoke Space, a Seattle-based startup that’s developing a reusable rocket, to gain a controlling stake in the company, according to The Wall Street Journal. The talks between Altman and Stoke took place over the summer and picked up in the fall. Although no deal has been made yet, Altman intended on either buying or partnering with a rocket company so that he would be able to deploy AI data centers to space.” So whilst Sammy the Oldman, sorry, Sam Altman, was turning his focus towards space, Sundar Pichai surpassed him in the DML field, because Sundar, besides his need for beef vindaloo, was seemingly focussed on the data matters of Google, allegedly not with his head in space.

And now we see (at https://futurism.com/artificial-intelligence/sam-altman-code-red) that ‘Sam Altman Is Suddenly Terrified’, where we are given “The all-out brawl that followed in the subsequent years, with AI companies trying to outdo each other with their own offerings as investors threw tens of billions of dollars at the tech, has shifted the dynamics considerably.

And now, the tables have officially turned: OpenAI CEO Sam Altman has declared his own “code red” in a memo to employees this week, as the Wall Street Journal reports, urging staffers to improve the quality of the company’s blockbuster chatbot, even at the cost of delaying other projects.” So as I see it, Sam Altman was ready to be the next rockstar of Microsoft, surpassing all others, but Google (say Sundar Pichai) had been sitting on a throne for the better part of two decades. They had conceded the console war (their Google Stadia) to Amazon with the Amazon Luna, and that might have been a sore loss. So when another ‘upstart’ comes with a great idea, Google responds, and Gemini was the result, or that is at least how I see it. And by the time version three was ready, Gemini was back in the lead, or so they say.

So now Sam Altman is in a bind; he needs to evolve ChatGPT, and that puts him in what some call a pickle. So whilst Sam Altman was looking at the sky, Google took the time to overtake him with Gemini 3. And now the storm has reached the shores of the financial industry. Now Microsoft is in a pickle too, because its investment in OpenAI marked the start of a partnership between the cloud computing firm and the AI research company that has since grown to more than US$13bn in total commitments. Microsoft and OpenAI are bound to ChatGPT up to the nihilistic setting of these firms losing $13 billion in value, so when that happens, what more will unfold? I am not stating that this will burst the AI bubble, but as I see it Sam Altman will see his halo decrease, looking a lot like a zero, and Microsoft sees its tally of failures increase to two: first builder.ai, and now we see that Microsoft is surpassed again by Google, which is not a great surprise to me.

And as Futurism gives us “Google, though, has a major financial advantage by already being profitable. It can afford to spend aggressively on data centers, at least for the time being. That’s besides Google Search having been the de facto search engine on the internet for decades, giving it access to a vast number of existing users who could be swayed by its AI offerings.

Altman claimed in the memo that the company has an ace up its sleeve in the form of an even more powerful reasoning model that’s set to be released as early as next week, according to the WSJ, likely a direct response to Google’s Gemini 3.” So is this a simple setting of a little time gap, or is OpenAI now in more trouble than anyone thinks it is? I actually do not know, but there is a setting that I personally like. I was always Google minded. I was struck in my soul when they dropped the Google Stadia, as I had a plan to give it 50,000,000 subscriptions in stage one and really add to that beyond that, knocking Microsoft off its illusionary perch. But alas, it was not to be and Amazon had the inside track from that point onwards. And I personally feel that the stage of “to be released as early as next week” is likely a want-to-be-real presentation. Sam Altman is trying to get any moment he can get and that is fine, but as I see it, it might be timing and people like Sam Altman will try any way to keep their cushy setting. I am not judging, but the stage where Gemini 3 is surpassed is what needs to be likely; will it be? I doubt it, using the words from Marc Benioff stating “not going back”, and that is a powerful setting, one that creeps fear into the hearts of Sam Altman and Satya Nadella, as I personally see it.

Have a great day, my weekend has begun and Vancouver will join us in 15 hours.

Leave a comment

Filed under Finance, IT, Science

When politicians become delusional

That is what I saw two days ago when the BBC gave us (at https://www.bbc.com/news/articles/cq8dq47j5y8o) ‘South Africa hits back after Trump says US won’t invite it for G20 next year’. The article gives us the setting “South Africa’s President Cyril Ramaphosa has described as “regrettable” the announcement by US President Donald Trump that South Africa would not be invited to take part in next year’s G20 summit in Florida. In a social media post, Trump said South Africa had refused to hand over the G20 presidency to a US embassy representative at last week’s summit in Johannesburg.” As well as “Ramaphosa said in a statement that the US had been expected to participate in the G20 meetings, “but unfortunately, it elected not to attend the G20 Leaders Summit in Johannesburg out of its own volition”. He however noted that some US businesses and civil society entities were present. He said that since the US delegation was not there, “instruments of the G20 Presidency were duly handed over to a US Embassy official at the Headquarters of South Africa’s Department of International Relations and Cooperation”.” There is, as I personally see it, a second reason. Is the reason perhaps that America is in such a disastrous financial situation that he felt compelled to evade the G20? He can approach the entire setting to the press with ‘Quiet piggy’ settings, but the 15 strongest economies cannot be answered in that same manner. There he has to answer, and his Department of War and the house of missing coins can’t shield him from that. This year Canada took home the beef, the champagne and the bacon. Next year? That is something he is unwilling to face at present. He needs to be reassured that all the trillions that are changing hands between 7 companies will do him good, and at present the setting of Stargate is currently set at an economic windfall of minus 500 billion, and that was not what he advertised a year ago, and it is merely one of several failures. And at present these 7 big bloated companies are at best bringing in 3% of what is required (an inaccurate presumption), but that setting is what he is looking at, and at present there is no upside to the numbers of 2027 and 2028.

The image above was shown on LinkedIn, and I never thought of it this way. It shows “The entire U.S. economy right now is seven companies sending one trillion back and forth to each other”. That is how it could be seen (credit for the image unknown), but is that GDP revenue? I reckon that some might validly disagree, and that is before you consider what OpenAI is costing America and Microsoft (at 3% revenue it isn’t really an asset, is it?)

And beyond that tourism is falling flat, and America is presenting itself as nothing more than a third world country; the president of the United States is likely to be marginally better off than South Africa or Argentina, making it 17th place at best. The GDP setting in December 2024 (which was $29,185 billion) will be seen as a jolly time; by next year America is likely (a clear speculation) to be less than $13,913 billion, making it a little more fortunate than India, which manages this at 5 times the population. Would you gather in that crowd after you proclaimed year after year that America was doing so well? The defense industry is losing revenue, tourism is down massively, and there is that Oxford Economics report stating that it is costing America $50 billion, which is 400% worse than the numbers we see thrown in the media. Then jobs are down and as I see it retail is massively down. In addition we see that aluminium smelters are down, only 4 in 24 are operating. They cannot deal with the unsustainable operating cost, and that list goes on. So what happens when soda cans become an issue? American dream states are set to operating a soda can, opening it and drinking it (in the Miami sun), so I reckon that 2026 will bring its own entertainment to behold, and at present I reckon that President Trump is merely showing up to do some photo moments, so who will be ‘advocating’ how well America is doing?

I reckon it sucks to be the man in charge at the Federal Reserve. And only 8 hours ago we were given “Federal Reserve has managed to push up bank reserves for 4 weeks now, but they’re running out of tools in the toolbox and will soon have to resume asset purchases, euphemistically called “QE” for quantitative easing, i.e., money printing:” (source: E.J. Antoni, Ph.D.). So we accept that Jerome Powell is (for now) the Chair of the Federal Reserve of the United States. I cannot recall that America has given any voice to the effects (or benefits) of quantitative easing. So is it real? What is Jerome Powell up to? It is a fair question, as President Trump doesn’t really understand economics, optionally even less than me. As I see it, he filed for bankruptcy 6 times, the last time due to the 2008 mess, so if people argue 5 times I would accept that. As I see it, he needed to make Jerome Powell his best friend and seek his assistance in avoiding the setting America is facing these days. And my smirking sense of humor (an evil one) is wondering if America can even afford hosting the 2026 G20 summit. As I see it (and I might definitely be wrong), America is using South Africa to get the 2026 setting taken away from them. As I see it, Canada or the EU is a much better place in 2026. There might be a reason to hope for Canada, as he will see it as a reason to make the speculative statement that he is leaving the G20 to his 51st state (making Canadians angry, to say the least).

But as I see it, I actually don’t know. And I reckon that most DML systems cannot know either, as this setting has never taken place before; the American economy is in a mess and not a good one.

This is what you call the perfect setting to be hosting the G20 in 2026, apparently in Miami, so order your sodas in advance. 

‘Is there more bad news?’ is countered by me with ‘Does there need to be?’, a setting that is voiced by many. As I see it, the gross domestic product (GDP) for the Los Angeles metro area was approximately $1.30 trillion in 2023. Now we know that Los Angeles had dreadful fires, but the current situation isn’t helping, and what will California report in revenue for 2024 and 2025? We will know some of these numbers in December, giving a lot more visibility to the hardship America is facing, and there is no hiding from those numbers (playing with them will be worse). America is ceasing to be a great place to be, and as I see it, there aren’t too many countries lining up to be their friend at present. Trump squashed that route of healing too.

Have a great day, I am almost late for breakfast.

Leave a comment

Filed under Finance, IT, Media, Politics

And Grok ploughed on

That happens, but after yesterday’s blog ‘The sound of war hammers’ (at https://lawlordtobe.com/2025/11/27/the-sound-of-war-hammers/) I got a little surprise. I could not have planned it better if I wanted to.

You see, the article is about the AI bubble and a few other settings. So at times, I want Grok to take a look. No matter what you think, it tends to be a decent solution in DML and I reckon that Elon Musk, with his 500,000 million (sounds more impressive than $500B), has sunk a pretty penny into this solution. I have seen a few shortcomings, but overall a decent solution. As I personally see it (for as far as I have seen it), that solution has a problem looking into and through multidimensional viewpoints. That is how I usually take my writing, as I am overwhelmed at times with the amount of documentation I go through on a daily basis. As such I got a nice surprise yesterday.

So the story goes off with war hammers (a hidden stage there), then I go into the NPR article and I end up with the stage of tourism (the cost as the Oxford Economics report gives us), and I am still digging into that. But what does Grok give me?

The expert mode gives us:

Now, in the article I never mentioned FIFA, the 2026 World Cup or Saudi Arabia, so how did this program come to this? Check out the blog, none of those elements were mentioned there. As some tell us, Grok is a generative artificial intelligence (generative AI) chatbot developed by xAI. So where is that AI program now? This is why I made mention in previous blogs that 2026 will be the year that the class actions will start. In my case, I do not care and my blog is not that important; even if it was, it was meant for actual readers (the flesh and blood kind) and that does not apply to Grok. I have seen a few other issues, but this one yesterday, in light of the AI bubble story (17 hours ago), pushed this to the forefront. I could take ‘offense’ to the “self-styled “Law Lord to be”” but whatever, and I have been accused of a lot worse by actual people too. And the quote “this speculation to an unusual metaphor of “war hammers”” shows that Grok didn’t see through my ruse either (making me somewhat proud), which is ego caressing at best, but I have an ego, I merely don’t let it out too often (it tends to get a little too frisky with details). And at present I see an idea that both the UAE and Saudi Arabia could use in their entertainment. There is an upgrade for Trojena (as I see it), and there are a few settings for the Abu Dhabi Marina as well. All in a day’s work, but I need to contend with data to see how that goes. And I tend to take my ideas into a sifter to get the best materials as fine as possible, but that was today, so there will be more coming soon enough.

But what do you do when an AI system bleeds information from other sources? Especially when that data is not validated or verified, and both seem to be the case here. As I see it, there is every chance that some will direct these AI systems to give the wrong data so that these people can start class actions. I reckon that not too many people are considering this setting, especially those in harm’s way. And that is the setting that 2026 is likely to bring. And as I see it, there will be too many law firms of the ambulance-chaser kind to ignore this setting. That is the effect that 8-figure class actions tend to bring, and with the 8-figure number I am being optimistic. When I see what is possible, there is every chance that any player in this field is looking at 9 or even 10-figure settlements, especially when it concerns medical data. And no matter what steps these firms take, there will be an ambulance chaser who sees a hidden opportunity. Even if there is a second-tier option where a cyber attack can launch the data into turmoil, those legal minds will make a new setting where those AI firms never considered the implications that it could happen.

I am not being dramatic or overly doom-saying. I have seen enough greed all around me to see that this will happen. A mere three months ago we saw “The “Commonwealth Bank AI lawsuit” refers to a dispute where the Finance Sector Union (FSU) challenged CBA for misleading staff about job cuts related to an AI chatbot implementation. The bank initially made 45 call centre workers redundant but later reversed the decision, calling it a mistake after the union raised concerns at the Fair Work Commission. The case highlighted issues of transparency, worker support, and the handling of job displacement due to AI.” So at that point, how dangerous is the setting where any AI is trusted to any degree? And that is before some board of directors sets the term that these AI investments had better pay off, and that will cause people to do silly (read: stupid) things. A setting that is likely to happen as soon as next year.

And at this time, Grok is merely ploughing on, setting the stage where someone will trust it to make life-changing changes to their firm or data. And even if it is not Grok, there is every chance that OpenAI will do that, and that puts Microsoft in a peculiarly vulnerable stage.

Have a great day, time for some ice cream, it was 33 degrees today, so my living room is hot as hell, as such ice cream is my next stage of cooling myself.

1 Comment

Filed under Finance, IT, Media, Science

The sound of war hammers

It is a specific sound, nothing compares to that, and it isn’t entirely fictional. Some might remember the Walter Hill movie Streets of Fire (1984) where two men slug it out with hammers, but that is not it. When a warhammer slams into metal armor, the armor becomes a drum and that sound is heard all over the battlefield (the wearer of that armour hears a lot more than that sound), but it is distinct, and I reckon that some of those hammer wielders would have created some kind of crescendo on these knights. So that was ‘ringing’ in my ears when NPR gave us ‘Here’s why concerns about an AI bubble are bigger than ever’ a few days ago (at https://www.npr.org/2025/11/23/nx-s1-5615410/ai-bubble-nvidia-openai-revenue-bust-data-centers) and what do you know, they made the same mistake, but we’ll get to that.

The article reads quite nicely and Bobby Allyn did a good job (beside the one miss), but let’s get to the starting blocks. It starts with “A frothy time for Huang, to be sure, which makes it all the more understandable why his first statement to investors on a recent earnings call was an attempt to deflate bubble fears. “There’s been a lot of talk about an AI bubble,” he told shareholders. “From our vantage point, we see something very different.”” So then we get three different names all giving ‘their’ point of view with ““The idea that we’re going to have a demand problem five years from now, to me, seems quite absurd,” said prominent Silicon Valley investor Ben Horowitz, adding: “if you look at demand and supply and what’s going on and multiples against growth, it doesn’t look like a bubble at all to me.” Appearing on CNBC, JPMorgan Chase executive Mary Callahan Erdoes said calling the amount of money rushing into AI right now a bubble is “a crazy concept,” declaring that “we are on the precipice of a major, major revolution in a way that companies operate.” Yet a look under the hood of what’s really going on right now in the AI industry is enough to deliver serious doubt, said Paul Kedrosky, a venture capitalist who is now a research fellow at MIT’s Institute for the Digital Economy.” All three names give a nice ‘presentation’ to appease the rumblings within an investor setting. Ben Horowitz, Mary Callahan Erdoes and Paul Kedrosky are seemingly set on raking in whatever they can, and then the fourth shines a light on this (not in the way he intended). We see “Take OpenAI, the ChatGPT maker that set off the AI race in late 2022. Its CEO Sam Altman has said the company is making $20 billion in revenue a year, and it plans to spend $1.4 trillion on data centers over the next eight years. That growth, of course, would rely on ever-ballooning sales from more and more people and businesses purchasing its AI services.” Did you see the setting? He is making 20 billion and investing $1.4 trillion. Now that represents a larger slice, and the 20 billion is likely to become more (perhaps even 100 billion a year). And now the sides of hammers are slamming into armour. That will still take 14 years to break even, and does anyone have any idea how long 14 years is? And I reckon that $1.4 trillion (at 4.5%) implies that the interest is $63,000,000,000 a year. That is almost a year of revenue, and that is the hopeful glare if he is making 100 billion a year. So what gives with this? Because at some point investors make the setting that the formula is off. There is no tax deductibility. That is money that is due, the banks will get their dividend, and whoever thinks that all this goes at zero percent is ludicrously asleep, and that is before the missing element comes out.
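
To make that arithmetic explicit, here is the back-of-envelope version, using only the figures quoted above; the 4.5% is my own assumption for the cost of money, not a reported borrowing rate:

```python
# Rough check of the figures above; inputs are the article's numbers plus my
# assumed 4.5% cost of money, not OpenAI's actual books.
capex = 1.4e12         # planned data-center spend over eight years
revenue_now = 20e9     # claimed annual revenue
revenue_hoped = 100e9  # the optimistic annual revenue allowed for above
rate = 0.045           # assumed cost of money

interest = capex * rate
print(f"Interest at 4.5%: ${interest / 1e9:.0f}bn per year")               # ~$63bn
print(f"Years to recoup $1.4tn at $100bn/yr: {capex / revenue_hoped:.0f}")  # 14
print(f"Years of current revenue eaten by one year of interest: "
      f"{interest / revenue_now:.1f}")                                      # ~3.2
```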

So then in comes Daron Acemoglu with “A growing body of research indicates most firms are not seeing chatbots affect their bottom lines, and just 3% of people pay for AI, according to one analysis. “These models are being hyped up, and we’re investing more than we should,” said Daron Acemoglu, an economist at MIT, who was awarded the 2024 Nobel Memorial Prize in Economic Sciences.” He comes at this from another angle and gives us that we are investing more than we should. All these firms are seeing the pot at the end of the rainbow, but there is the hidden snag: we learned early in life that the rainbow is the result of sunlight on rainwater, it always curves to be ‘just’ beyond the horizon, it never hits the ground, and there will be no pot of gold at the end of it according to Lucky the Leprechaun (I have his fax number). But that was not the side I am aiming for; it merely shows the idiocy we see at present. They are all investing too much into something that does not yet exist, but that is beside the point. There are massive options for DML and LLM solutions, but do you think that this is worth trillions? It follows when we get to “Nonetheless, Amazon, Google, Meta and Microsoft are set to collectively sink around $400 billion on AI this year, mostly for funding data centers. Some of the companies are set to devote about 50% of their current cash flow to data center construction.

Or to put it another way: every iPhone user on earth would have to pay more than $250 to pay for that amount of spending. “That’s not going to happen,” Kedrosky said.” This comes from Paul Kedrosky, a venture capitalist who is now a research fellow at MIT’s Institute for the Digital Economy, and he is right. But that too is not the angle I am going for. There are two voices, both in their field of vision, something they know, and they are seeing the edges of what cannot be contained; one even got a Nobel Memorial Prize for his efforts (past accomplishment). And I reckon all these howling bitches want their government to ‘save’ them when the bough breaks on these waves. So Andy Jassy, Sundar Pichai, Mark Zuckerberg and Satya Nadella (Amazon, Google, Meta and Microsoft) will expect the tax system to bail them out, and there is no real danger to them; they might get fired but they’ll survive this. Andy Jassy is, as far as I know, the poorest of the lot and he has 500 million, so he will survive in whatever place he has. But that is the danger. The investors and the taxpayers (you and me) get to suffer from this greed-filled frenzy.
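
For what it is worth, Kedrosky’s per-user line is easy to reproduce; the installed-base figure below is my own assumption (roughly 1.5 billion active iPhones is a commonly cited ballpark), and the $400 billion is the NPR article’s number:

```python
# Reverse-engineering the "$250 per iPhone user" remark. The installed base is
# an assumption for illustration; the spend figure comes from the NPR article.
big_four_ai_spend = 400e9        # Amazon, Google, Meta, Microsoft this year
iphone_installed_base = 1.5e9    # assumed active iPhones worldwide

print(f"Per-user bill: ${big_four_ai_spend / iphone_installed_base:.0f}")  # ~$267
```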

But then we get “Analyst Gil Luria of the D.A. Davidson investment firm, who has been tracking Big Tech’s data center boom, said some of the financial maneuvers Silicon Valley is making are structured to keep the appearance of debt off of balance sheets, using what’s known as “special purpose vehicles.””, as well as “The tech firm makes an investment in the data center, outside investors put up most of the cash, then the special purpose vehicle borrows money to buy the chips that are inside the data centers. The tech company gets the benefit of the increased computing capacity but it doesn’t weigh down the company’s balance sheet with debt.” And here we get another failure. It is the failure of the current administration that does not adapt the tax laws to shore up whatever they have for whatever no one has, and that is the larger stakeholder in this. We get an example in the article stating “Blue Owl Capital and Meta for a data center in Louisiana”, and this is only part of the equation. You see, they are ‘spreading the love’ around because that is the ‘safe’ setting and they know what comes next. You see, the Verge gave us ‘Nvidia says some AI GPUs are ‘sold out,’ grows data center business by $10B in just three months’ (at https://www.theverge.com/tech/824111/nvidia-q3-2026-earnings-data-center-revenue) and that is the first part of the equation. What do you think will power all this? That is the angle I am holding onto. All these data centers will need energy and they will take it away from people like you and me. And only 4 hours ago we see ‘Nvidia plays down Google chip threat concerns’, and it is all about the AI race, which is, as I said, non-existent, but the energy required to field these hundreds of thousands of GPUs is, and no one is making a table of what is required to fuel these data centers because it is not on ‘their plate’. But the need for energy becomes real, and really soon too. We do not have the surplus to take care of this, and places like Texas give us “Electricity demand is also going up, with much of it concentrated in Texas due to “data centers and cryptocurrency mining facilities,”” with the added “Driving the rise in wholesale prices next year is primarily a projected 45% increase at the Electric Reliability Council of Texas-North pricing hub. “Natural gas prices tend to be the biggest determinant of power prices,” the EIA said. “But in 2026, the increase in power prices in ERCOT tends to reflect large hourly spikes in the summer months due to high demand combined with relatively low supply in this region.”” Now this is not true for the whole world, but we see here a “projected 45% increase” and that is for 2026. So where are these data centers, what are their energy surpluses and what is to come? No one is looking at that, but consider what happens when any data centre is hit with a brownout, a partial and temporary drop in voltage in an electrical power supply. When that happens, a data centre shuts down; energy is paramount for all its GPUs and there had better not be any issue with it. I saw this a year ago, so why isn’t the media looking into this? I saw one article where that question was not answered and the media just shoved it aside, but as I see it, it should be at the forefront of any media setting.
It will happen and the people will suffer, but as I see it (and have mentioned), the media is whoring for digital dollars; they need their advertisement money from these 4 places and a few more, all ready for advertisement attention, and the media plays ball because they want their digital dollars (as I personally see it).

So whilst the NPR article is quite nice, the one element missing is what makes this bubble rear its ugly head, because too many want their coins for their effort and that is what is required. But what does the audience require? And the audience is you and me, dear reader. I have set a lot of my requirements to energy falling short, but there is only so much I can do, and it is going to be 32 degrees (Celsius) today, so what happens when the energy slows down for 5.56 million people in Sydney? Because the data centers will make the first demand from their energy providers, or they will slap a lawsuit worth billions on that energy provider. And we the people (wherever we are) are facing what comes next: keeping data centers cool and powered whilst we the people boil in our own homes. As such that is the future I am predicting, and people think I am wrong, but did they make the calculation of what these data centers require? Are they seeing the energy shortfalls that are impeding these data centers? And the energy providers will take the money and the contracts because they believe it won’t come to this, but that is exactly what we are facing in the short run. And the investors? Well, I don’t really care about them; they invested, and if you aren’t willing to lose it all with a mere card to help you through (card below), you aren’t a real investor, you are merely playing it safe, and in that world there are no bubbles.
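
Since no one publishes that table, here is the crude version of the calculation I keep running in my head. Every input is an assumption for illustration (GPU count, per-GPU draw, cooling overhead, household usage), not a figure from any of the data centres named above:

```python
# Rough sizing of what "hundreds of thousands of GPUs" means in energy terms.
gpus = 200_000           # assumed GPU count for one large build-out
watts_per_gpu = 700      # assumed draw of an H100-class accelerator
pue = 1.3                # assumed power usage effectiveness (cooling, overhead)
household_mwh = 7        # assumed annual household consumption in MWh

facility_mw = gpus * watts_per_gpu * pue / 1e6
annual_gwh = facility_mw * 24 * 365 / 1e3
households = annual_gwh * 1e3 / household_mwh

print(f"Continuous draw: {facility_mw:.0f} MW")                  # ~182 MW
print(f"Annual energy:   {annual_gwh:.0f} GWh")                  # ~1,594 GWh
print(f"Roughly {households:,.0f} households' worth of power")   # ~228,000
```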

Remind me, how did that end in 2008? The speculated cost was set at $16 trillion in U.S. household wealth, and this bubble is significantly larger than the 2008 one, and this time they are going all in with money most of them do not have. So that is what is coming and my fears do not matter, but the setting that NPR gives us all with ‘Here’s why concerns about an AI bubble are bigger than ever’ matters, and that is what I see coming.

So have a great day and never trust one source, always verify what you read through other sources. That part was shown to be true when we all see (from various sources) that “The United States is on track to lose $12.5 billion in international travel spending this year” whilst my calculations made it between 80 and 130 billion, and some laughed at my predictions a few months earlier and I get that. I would laugh too when those ‘economists’ state one amount and I come with a number over 700% larger. I get that, but now (apparently) there is an Oxford Economics report that gives us “Damning report says U.S. tourism faces $64 billion blow as Trump administration’s trade wars drive away foreign visitors and cut spending”, so I have that to chase down now, but it shows that my numbers were mostly spot on, at least a lot better than whatever those economists are giving you. So never trust merely one source, even if they believe themselves to be on the right track. But that is enough about that; consider why some bubble settings are underexposed, and when you see that NPR gave you three additional angles and missed mine (likely not intentional), consider what those investment firms are overlooking (likely intentional), because the setting that they are willing to lose 100% is ludicrous. They have settings for that, and as the government bailed them out the last time, they think it will save them this time too.
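
To put those competing tourism numbers side by side (all three figures are the ones quoted in the text, nothing new):

```python
# Ratio check of the tourism-loss estimates mentioned above.
official_estimate = 12.5e9       # the widely reported "$12.5 billion"
oxford_estimate = 64e9           # the Oxford Economics figure quoted above
my_range = (80e9, 130e9)         # my own earlier estimate

print(f"Oxford vs official: {oxford_estimate / official_estimate:.1f}x")   # ~5.1x
print(f"My range vs official: {my_range[0] / official_estimate:.1f}x "
      f"to {my_range[1] / official_estimate:.1f}x")                        # 6.4x to 10.4x
```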

Have a great day today, I need an ice cream at 4:30 in the morning. I still have some, so yay me.


Filed under Finance, IT, Media, Politics, Science, Tourism

I lost my marbles

Like Poodles, I seem to have misplaced my marbles. AKA I lost them completely. Now only 9 hours ago I shouted that I am sick of the AI bubble, but a few minutes ago I got called back into that fray. You see, I was woken up by an image.

This is the image and it gives us ‘Oracle’s $300bn OpenAI deal is now valued at minus $74bn’, and there is no way this is happening. You see, I have clearly stated that the bubble is coming. But in this, Oracle has a set state of technologies it is contributing. As such, where is the bubble blowing up in the face of OpenAI and Microsoft? In this, the Financial Times (at https://www.ft.com/content/064bbca0-1cb2-45ab-85f4-25fdfc318d89) is giving us ‘Oracle is already underwater on its ‘astonishing’ $300bn OpenAI deal’. So where is the damage to the other two? We are given “OK, yes, it’s a gross simplification to just look at market cap. But equivalents to Oracle shares are little changed over the same period (Nasdaq Composite, Microsoft, Dow Jones US Software Index), so the $60bn loss figure is not entirely wrong. Oracle’s “astonishing quarter” really has cost it nearly as much as one General Motors, or two Kraft Heinz. Investor unease stems from Big Red betting a debt-financed data farm on OpenAI, as MainFT reported last week. We’ve nothing much to add to that report other than the below charts showing how much Oracle has, in effect, become OpenAI’s US public market proxy:” There might be some loss for Oracle (if that happens) and later on we were given (after a stack of graphics, see the story for that) “But Oracle is not the only laggard. Broadcom and Amazon are both down following OpenAI deal news, while Nvidia’s barely changed since its investment agreement in September. Without a share price lift, what’s the point? A combined trillion dollars of AI capex might look like commitment, but investment fashions are fickle.”

And in this, I still have doubts on the reporting side of things. From my own feelings (not hard-core numbers), Oracle and Amazon are the best players to survive this as their technology is solid. When AI does come, they are likely the only two to set it right, and the entire article goes out of its way to avoid mentioning Microsoft. But in all this Microsoft has made significant investments in OpenAI and has rights to OpenAI’s Intellectual Property (IP). This comes down to Microsoft holding a stake in OpenAI’s for-profit arm, OpenAI Group PBC, valued at approximately $135 billion, which represents about 27% of the company. So how is Microsoft not mentioned?

As such, how come Oracle is underwater? Is it testing scuba gear? And if the article is indeed true, what is the value of OpenAI now? Because that will also drown the 27% of it (holding the name Microsoft) and that image is missing from that equation. If this is the bubble bursting, which might be true (a year before I predicted it), then it stands to reason that this is also impacting Amazon, Google, IBM, Microsoft and OpenAI. As such this article seems a little far-fetched, a little immature and largely premature by not naming all the players in this game. I personally thought that Oracle would be one of the winners in all of this, or better stated, the smallest loser in this multi-trillion bubble.
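
A quick sketch of the arithmetic the article leaves out, using only the two figures quoted above (the roughly $135 billion stake and the 27% holding); the 20% drop at the end is purely an illustrative assumption of mine, not a forecast.

# If a 27% stake is carried at ~$135bn, the implied total valuation of OpenAI's
# for-profit arm follows directly, and any write-down flows through to that stake.
stake_value_bn = 135.0    # Microsoft's stake, as quoted above
stake_fraction = 0.27     # 27% of OpenAI Group PBC

implied_openai_value_bn = stake_value_bn / stake_fraction
print(f"Implied OpenAI valuation: ~${implied_openai_value_bn:,.0f}bn")

drop = 0.20               # illustrative assumption only
print(f"A {drop:.0%} drop in OpenAI's value would cost that stake ~${stake_value_bn * drop:,.0f}bn")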

So what gives?
And in this I might be incorrect and largely missing the point, but a write-off to the amount of nearly half a trillion dollars has more underwriters, and mentioning merely Oracle is a little far-fetched, no matter how fashionable they all seem to be. For that matter, as Microsoft has been ‘advocating’ their Copilot program, how deep are they in? Because the Oracle write-off will land squarely in the face of that Nadella dude. As he seemingly already missed the Builder.ai setting, this might be the one ending his career, and whoever comes next might want to commit suicide rather than accept whatever promotion is coming their way. (I know it is a dark setting) but the image is a little disconcerting at present. And the images that the Financial Times gives us, like the hyperscaler capex, show Microsoft to be three times deeper in the water than Oracle is, so why aren’t they mentioned in the text? In those same images Amazon is in way over its head, and that is merely the beginning of a bubble going sideways on everyone. As such, is this a storm in a teacup? If that is so, why is Oracle underwater? And there is ample reason to see me as a non-economist, I never was one nor wanted to be one. But what the media gives us raises questions. And I agree, Oracle has a long way to go to break even, but if they do not, neither do Amazon, Microsoft and OpenAI, and that part is seemingly missing too. If anything, Larry Ellison could cover the shortfall from his petty cash (he allegedly has $250,000 million of his own) and the others won’t even come near that amount.

So whilst we wait for someone to make sense of this all, we need to walk carefully and not panic, because these settings tend to be the stage where panicky people sell what they can for dimes on the dollar, and that is not how I want to see players like Microsoft jump that shark. This is not any kind of anti-Microsoft deal; it is them calling the others not innovative whilst there isn’t an innovative bone in that cadaver. So whilst we all want to call the cards, the only thing I do is call the cards of the Financial Times and similarly reporting media, calling out the missing settings of loss towards Microsoft and OpenAI. It is the best I can do. I know an economics major who could easily do that, but he is busy running Canada at the moment.

Have a great day and I apologize for causing any potential panic, which was not my intention.


Filed under Finance, IT, Science

Is it one or the other?

That is the question I had today/this morning. You see, I saw a few things happen/unfold and it made me think about several other settings. To get there, let me take you through the settings I already knew. The first cog in this machine is American tourism. The ‘setting’ is that THEY (whoever they are) expect a $12.5 billion loss. The data from a few sources already gives a multitude of that: the airports, the BnB industry and several other retail settings. Some sources give us the losses of 12 airports, which go far beyond the $12.5 billion, and as I saw it that part alone is a mere $30-$45 billion; it is hard to be more precise when you do not have access to the raw numbers. But in a chain of airfares, visas, BnB/hotels, snacks/diversions and staff incomes I got to $80-$135 billion, and I think that I was being kind to the situation as I took merely the most conservative numbers; as such the damage could be decently more.
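
To make that chain reasoning visible, here is a minimal sketch. The airport range comes from the paragraph above, but the share of the spending chain that airports represent is a hypothetical assumption of mine, chosen only to show how an $80-$135 billion range can follow from an airport-level figure.

# Chain estimate sketch: scale an airport-level loss up to the whole spending chain.
# The 0.35 share is an illustrative assumption, not a sourced statistic.
airport_losses_bn = (30, 45)        # the $30-$45bn airport range mentioned above
airport_share_of_chain = 0.35       # assumed share of airfares/visas/hotels/food/staff spend

low = airport_losses_bn[0] / airport_share_of_chain
high = airport_losses_bn[1] / airport_share_of_chain
print(f"Implied chain-wide loss: ${low:.0f}bn to ${high:.0f}bn")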

This is merely the first cog. Second is the Canadian setting of fighters. They have set their minds on the Saab Gripen, as such I thought they came for

Silly me, Gripen means Griffin and a Hogwarts professor was eager to assist me in this matter, it was apparently 

Although I have no idea how it can hide that proud flag in the clouds. What does matter is that it comes with “SAAB President and CEO Micael Johansson told CTV News that the offer is on the table and Ottawa might see a boost in economic development with the added positions. The deal could be more than just parts and components; Canada may even get the go-ahead to assemble the entire Gripen on its soil.” (Initial source: CTV News). This brings close to 10,000 jobs (which was given by another source), but what non-Canadian people ‘ignore’ is that this will cost the American defense industry billions, and when these puppies (that is what they call little Griffins) are built in Canada, more orders will follow, costing the American defense industry a lot more.

So whilst some sources say that “American tourism is predicted to start a full recovery in 2029”, I think that they are overly confident that the mess this administration is making is solved by then. I think that with Vision 2030 and a few others, recovery is unlikely before 2032. And when you consider the news (at https://www.thetravel.com/fifa-world-cup-2026-usa-tourist-visa-integrity-fee-100-day-wait-time-warning-us-consul-general/) by TheTravel, giving us ‘FIFA World Cup 2026 Travelers Warned Of $435 Fee And 100-Day Delay By U.S. Consul General’, there is every chance that FIFA will pull the 2026 setting from America. It is my speculation that Yalla Vamos 2030 might end up hosting the 2026 edition and leave 2030 to whoever comes next, which is Saudi Arabia; the initial thought is that they might not be ready by that time, but that is mere speculation from me, and there is a chance (a small one) that Canada could step in and do the hosting in Vancouver, Toronto and Ottawa, but that would be called ‘smirking speculation’.

But the setting behind these settings is that tourism will likely collapse in America, and at that point the banks of Wall Street will cancel the credit cards of America for a really long time, and that will set in motion a lot of cascading events all at the same time. Now, if you would voice that this would never happen, consider that Tom’s Hardware gave us last week ‘Sam Altman backs away from OpenAI’s statements about possible U.S. gov’t AI industry bailouts — company continues to lobby for financial support from the industry’. If his AI is so spectastic (a combination of fantastic and spectacular), why does he need a bailout? And then consider this: Microsoft once gave Builder.ai a value of a billion dollars and they blew that in under a year on over 600 engineers. So why didn’t Microsoft see that? 600 engineers leave a digital footprint and they have licensed software; Microsoft didn’t catch on? And as we see, Microsoft and OpenAI have a connection too: Microsoft has an investment in the OpenAI Group PBC valued at approximately $135 billion, representing a 27% stake. So there is a need to ask questions, and when that bubble goes, America gets to bail that Windows 3.1 vendor out.

As I see it, don’t ever put all your eggs in one basket, and at this point America has all the eggs of its ‘kingdom’ in one plastic bag. I reckon that bag is showing rips and soon enough the eggs fall away into an abyss where Microsoft can’t get to them. The resources will flee to Google, IBM, Amazon and a few other places, and it is the other places that will wreak havoc on the American economy. So when the tally is made, America has a real problem, this administration called the storm over its own head, and I am not alone in feeling this way. When you consider the validation and verification of data, pretty much the first step in any data-related system, you can see that things do not add up, and it will not take long for others to see that too. And in part the others will want to prove that THEIR data is sweet, and the way they do that is to ask questions of the data of others. That is a telltale sign that the bubble is about to implode, and at present it is given as ‘Global AI spend to total US$1.5 trillion’ (source: ARNnet), but that puppy has been blown up to a lot more, as the speculators believe they have a Great Dane, so when that bubble implodes it will cost a whole lot of people a lot of money. I reckon that it will take until 2026/2027 to hit the walls. Even as Forbes gave us less than 24 hours ago ‘OpenAI Just Issued An AI Risk Warning. Your Job Could Be Impacted’, where they talk about ASI (too many now know that AI doesn’t exist) and we see “Superintelligence is also referred to as ASI (artificial superintelligence) which varies slightly from AGI (artificial general intelligence) in that it’s all about machines being able to exceed even the most advanced and highly gifted cognitive abilities, according to IBM.” And we also get “OpenAI acknowledges the potential dangers associated with advancing AI to this level, and they continue by making it clear what can be anticipated and what will be needed for this experiment to be a safe success”. So consider these statements, and then consider the simple facts of data verification and data validation: when these parts are missing, any ‘super intelligence’ merely comes across as the village idiot. I can already see the Microsoft Copilot advertisement “We now offer the Copilot with everyone’s favourite son, the village idiot Clippy II” (OK, I am being mean, I loved my Clippy in the Office 97 days), but I reckon you are now getting clued in to the disaster that is coming?
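
For those wondering what that ‘first step’ looks like in practice, here is a minimal validation sketch; the field names and rules are hypothetical examples of mine, not anyone’s actual pipeline, and a real system would check far more than this.

# Minimal data validation sketch: basic checks before a record goes near a training set.
# Field names ("satisfaction", "source") and rules are hypothetical examples.
def validate_record(record: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    score = record.get("satisfaction")          # assumed 1-5 Likert field
    if score is None:
        problems.append("missing satisfaction score")
    elif not (1 <= score <= 5):
        problems.append(f"satisfaction score {score} outside the 1-5 scale")
    if not record.get("source"):
        problems.append("no source recorded, cannot verify provenance")
    return problems

sample = {"satisfaction": 7, "source": ""}
print(validate_record(sample))
# ['satisfaction score 7 outside the 1-5 scale', 'no source recorded, cannot verify provenance']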

It isn’t merely the AI bubble, or the American economy, or any of these related settings. It is that they are happening almost at the same time, so a Nasdaq screen where all the firms are shown in deep red, showing a $10 trillion write-off, is not out of the blue. That setting better be clear to anyone out there. This is merely my point of view and I might be wrong to read the data as I do, but I am not alone, and more people are seeing the fringe of the speculative gold stream showing its pyrite origins. Have a great day, it is another 2 hours before Vancouver joins us on this Monday. Time for me to consider a nice cup of coffee (my personal drug of choice).


Filed under Finance, IT, Law, Media, Politics, Science

Labels

That is the setting, and I introduced the readers to this setting yesterday, but there was more, and there always is. Labels are how we tend to communicate: there is the label of ‘Orange baboon’, there is the label of ‘village idiot’, and there are many more labels. They tend to make life ‘easy’ for us. They are also the hidden trap we introduce to ourselves. In the ‘old’ days we even signified Business Intelligence by this, because it was easy for the people running these things.

An example can be seen in

And we would see the accompanying table with, on one side, completely agree, agree, neutral, disagree and completely disagree, if that was the 5-point labeling setting we embraced, and as such we saw a ‘decently’ complete picture and we all agreed that this was the way it had to be.

But the not-so-hidden snag is that, first of all, these labels are ordinal (at best), and Likert scales (their official name) are not set in a scientific way; there is no equally adjusted difference between the numbers 1, 2, 3, 4 and 5. That is just the way it is. And in the old days this was OK (as the feeling went). But today, in what they call the AI setting and I call NIP at best, that setting is too dangerous. Now, set this against today’s standards.

The simple question “Is America bankrupt?” gets all kinds of answers, and some will quite correctly give us “In contrast, the financial health of the United States is relatively healthy within the context of the total value of U.S. assets. A much different picture appears once one looks at the underlying asset base of the private and public economy.” I tend to disagree, but that is me without my economics degrees. But in the AI world it is a simple setting of numbers, and America needs Greenland and Canada to keep up the contention that “the United States is relatively healthy within the context of the total value of U.S. assets”; yes, that would be the setting, but without those two places America is likely close to bankrupt and the AI bubble will push them over the edge. At least that is how I see it, and yesterday I gave one case (of the dozen or so cases that will follow in 2026) in which a startup is basically agreeing to a larger than $2 billion settlement. So in what universe does a startup have this money? That is the constraint of AI, and in that setting of unverified and unscaled data the picture gets worse. And I remember an answer given to me at a presentation, the answer was “It is what it is” and I kind of accepted it, but an AI will go bonkers and wrong in several ways when that is handed to it. And that is where the difference between AI and NIP (Near Intelligent Parsing) becomes clear. NIP is merely a ’90s chess engine that has been taught (trained on) every chess game possible and draws from that; but then a creative intellect makes an illogical move, the chess engine loses whatever coherency it has, because that move was never programmed, and that is where you see the difference between AI and NIP. A real AI will creatively adjust its setting, NIP cannot, and that is what will set the stage for all these class actions.
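
To illustrate that NIP point, here is a deliberately tiny sketch; the positions and replies are made-up placeholders, and the point is only that a lookup-based system has nothing to offer the moment the input falls outside what it was trained on.

# 'NIP' as a pure lookup table: perfect on seen positions, helpless on anything new.
# The positions and moves below are hypothetical placeholders.
KNOWN_POSITIONS = {
    "e4 e5": "Nf3",        # replies memorised from training games
    "d4 d5": "c4",
}

def nip_move(position: str) -> str:
    # No generalisation: either the position was seen before, or coherency is lost.
    return KNOWN_POSITIONS.get(position, "no move, coherency lost (never trained on this)")

print(nip_move("e4 e5"))   # -> Nf3
print(nip_move("a3 h6"))   # the 'illogical move' case: the lookup has nothing to offer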

The second setting is ‘human’ error. You see, I placed the Likert scale intentionally, because in between the multitude of 1-5 scales there is likely one variable that was set to 5-1, the programmers overlooked it, and now in these AI training grounds at least one variable is set in the wrong direction, tainting the others and messing with the order of the adjusted personal scales. And that is before we get to CLUSTER and QUICKCLUSTER results, where a few more issues are introduced to the algorithm of the entire setting, and that is where the verification of data becomes imperative; at present it is largely absent.
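
Here is a minimal sketch of how such a reversed 1-5 item can be caught and flipped back; the response data is made up, and the check (a negative correlation with the scale total) is a common-sense heuristic, not a claim about any particular statistics package.

# Detect and fix a reverse-coded Likert item. Reverse-scoring a 1-5 item is (1 + 5) - x.
# The responses below are invented purely to show the mechanics.
responses = {
    "q1": [5, 4, 5, 4, 2, 1, 2],
    "q2": [4, 5, 5, 4, 1, 2, 1],
    "q3": [1, 2, 1, 2, 5, 4, 5],   # same attitude, but coded in the opposite (5-1) direction
}

def mean(values):
    return sum(values) / len(values)

def correlation(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

# Sum each respondent's answers across all items to get a rough scale total.
scale_total = [sum(vals) for vals in zip(*responses.values())]

for item, vals in responses.items():
    r = correlation(vals, scale_total)
    if r < 0:  # an item running against the rest of the scale is a reverse-coding suspect
        responses[item] = [6 - v for v in vals]   # flip it back: (1 + 5) - x
        print(f"{item} correlated {r:.2f} with the scale total and was re-coded")

Feed one such uncorrected item into a clustering run and the segments shift; catch it first and the rest of the analysis at least starts from data that points the same way.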

So here is a sort of random image, but the question it needs to raise is: what makes these different sources in any way qualified to be a source? In this case, if the data in Ask Reddit is skewed, 93% of the data is basically useless and that is missed on a few levels. There are high-quality data sources, but they are few and far between; in the meantime these sources get to warp any other data we have. And if you are merely looking at legacy data, there is still the Likert-scale data your own company had, and that data is debatable at best.

Labels are dangerous and they are inherently based on the designer of that data source (possibly even long dead), and the design tends to be done in his or her early stages of employment, making the setting even more debatable, as it was ‘influenced’ by greedy CEOs and CFOs who had their bonus in mind. A setting mostly ignored by all involved.

As such, are you surprised that I see the AI bubble for what it is? A dangerous reality coming our way in sudden, likely unforeseen ways, and it is the ‘unforeseen way’ that is the danger, because when these disgruntled employees talk to those who want to win a class action, all kinds of data will come to the surface, and that is how these class actions are won.

It was a simple setting I saw coming a mile away, and whilst you wandered by I added the Dr. Strange part; you merely thought you had the labels thought through, but the setting was a lot more dangerous and it is heading straight for your AI dataset. All wrongly thought through, because training data needs to have something verifiable as ‘absolutely true’, and that is the true setting. To illustrate this we can merely make a stop at Elon Musk Inc., whose ‘AI’ Grok is the almost perfect example. We are given from one source “The bot has generated various controversial responses, including conspiracy theories, antisemitism, and praise of Adolf Hitler, as well as referring to Musk’s views when asked about controversial topics or difficult decisions.” Which is almost a dangerous setting towards the people fueling Grok in a multitude of ways, and ‘Hundreds of thousands of Grok chats exposed in Google results’ (at https://www.bbc.com/news/articles/cdrkmk00jy0o) where we see “The appearance of Grok chats in search engine results was first reported by tech industry publication Forbes, which counted more than 370,000 user conversations on Google. Among chat transcripts seen by the BBC were examples of Musk’s chatbot being asked to create a secure password, provide meal plans for weight loss and answer detailed questions about medical conditions.”

Is there anybody willing to do the honors of classifying that data (I absolutely refuse to do so)? I already gave you the headwind in the above story. In the first place, how many of these 370,000 users are medical professionals? I think you know where this is going. And I think Grok is pretty neat as a result, but it is not academically useful. At best it is a new form of Wikipedia, at worst it is a round data system (trashcan), and even though it sounds nice, it is as nice as labels can be, and that is exactly why these class cases will be decided out of court. As I personally see it, when these hit, Microsoft and OpenAI will shell out trillions to settle out of court, because the court damage will be infinitely worse. And that is why I see 2026 as the year the greed-driven get to start filling their pockets, because the mental hurt that is brought to court is as academic as a Likert scale, not a scientific setting among them, and the pre-AI setting of mental harm is given as ““Mental damage” in court refers to psychological injury, such as emotional trauma or psychiatric conditions, that can be the basis for legal claims, either as a plaintiff seeking compensation or as a criminal defendant. In civil cases, plaintiffs may seek damages for mental harm like PTSD, depression, or anxiety if they can prove it was caused by another party’s negligent or wrongful actions, provided it results in a recognizable psychiatric illness.” So as you see it, is this enough or do you want more? Oh, screw that, I need coffee now and I have a busy day ahead, so this is all you get for now.

Have a great day, I am trying to enjoy Thursday; Vancouver is quite a few hours behind me on this effort. So there is a time scale we all have to adhere to (hidden nudge), as such enjoy the day.


Filed under Finance, IT, Media, Politics, Science