Tag Archives: Artificial Intelligence

Is it one or the other?

That is the question I had this morning. You see, I saw a few things unfold and it made me think on several other settings. To get there, let me take you through the settings I already knew. The first cog in this machine is American tourism. The ‘setting’ is that THEY (whoever they are) expect a $12.5 billion loss. The data from a few sources already gives a multitude of that: the airports, the BNB industry and several other retail settings. Some sources give the losses of 12 airports alone, which goes far beyond the $12.5 billion, and as I saw it that part is a mere $30-$45 billion; it is hard to be more precise when you do not have access to the raw numbers. But in a chain trend of airfares, visas, BNB/hotels, snacks/diversions and staff incomes I got to $80-$135 billion, and I think that I was being kind to the situation as I took merely the most conservative numbers, as such the damage could be decently more.

This is merely the first cog. Second is the Canadian setting of fighters. They have set their minds on the Saab Gripen, as such I thought they came for

Silly me, Gripen means Griffin, and a Hogwarts professor was eager to assist me in this matter; it was apparently

Although I have no idea how it can hide that proud flag in the clouds. What does matter is that it comes with “SAAB President and CEO Micael Johansson told CTV News that the offer is on the table and Ottawa might see a boost in economic development with the added positions. The deal could be more than just parts and components; Canada may even get the go-ahead to assemble the entire Gripen on its soil.” (Initial source: CTV News) This brings close to 10,000 jobs (a number given by another source), but what non-Canadian people ‘ignore’ is that this will cost the American defense industry billions, and when these puppies (that is what they call little Griffins) are built in Canada, more orders will follow, costing the American defense industry a lot more. So whilst some sources say that “American tourism is predicted to start a full recovery in 2029”, I think that they are overly confident that the mess this administration is making will be solved by then. I think that with Vision 2030 and a few others, recovery is unlikely before 2032. And when you consider the news (at https://www.thetravel.com/fifa-world-cup-2026-usa-tourist-visa-integrity-fee-100-day-wait-time-warning-us-consul-general/) by TheTravel, giving us ‘FIFA World Cup 2026 Travelers Warned Of $435 Fee And 100-Day Delay By U.S. Consul General’, there is every chance that FIFA will pull the 2026 setting from America, and it is my speculation that Yalla Vamos 2030 might host the 2026 edition and leave 2030 to whomever comes next, which is Saudi Arabia. The initial thought is that they might not be ready at that time, but that is mere speculation from me, and there is a chance (a small one) that Canada could step in and do the hosting in Vancouver, Toronto and Ottawa, but that would be called ‘smirking speculation’.

But the setting behind these settings is that tourism will likely collapse in America, and at that point the banks of Wall Street will cancel the credit cards of America for a really long time, and that will set in motion a lot of cascading events all at the same time. Now, if you would voice that this would never happen, Tom’s Hardware gave us last week ‘Sam Altman backs away from OpenAI’s statements about possible U.S. gov’t AI industry bailouts — company continues to lobby for financial support from the industry’. If his AI is so spectastic (a combination of fantastic and spectacular), why does he need a bailout? And then consider this: Microsoft once gave the AI builder a value of a billion dollars and they blew that in under a year on over 600 engineers. So why didn’t Microsoft see that? 600 engineers leave a digital footprint and they have licensed software. Microsoft didn’t catch on? And as we see, the ‘unification’ of Microsoft and OpenAI has a connection: Microsoft has an investment in the OpenAI Group PBC valued at approximately $135 billion, representing a 27% stake. So there is a need to ask questions, and when that bubble goes, America gets to bail that Windows 3.1 vendor out.

As I see it, don’t ever put all your eggs in one basket, and at this point America has all the eggs of its ‘kingdom’ in one plastic bag, and I reckon that bag is showing rips; soon enough the eggs fall away into an abyss where Microsoft can’t get to them. The resources will flee to Google, IBM, Amazon and a few other places, and it is the other places that will wreak havoc on the American economy. So when the tally is made, America has a real problem, and this administration called the storm over its own head, and I am not alone in feeling this way. When you consider the validation and verification of data, pretty much the first step in data-related systems, you can see that things do not add up, and it will not take long for others to see that too. And in part the others will want to prove that THEIR data is sweet, and the way they do that is to ask questions of the data of others. A telltale sign that the bubble is about to implode, and at present it is given at ‘Global AI spend to total US$1.5 trillion’ (source: ARNnet), but that puppy has been blown up to a lot more, as the speculators believe they have a Great Dane, so when that bubble implodes it will cost a whole lot of people a lot of money. I reckon that it will take until 2026/2027 to hit the walls. Even as Forbes gave us less than 24 hours ago ‘OpenAI Just Issued An AI Risk Warning. Your Job Could Be Impacted’, and they talk about ASI (too many now know that AI doesn’t exist), where we see “Superintelligence is also referred to as ASI (artificial superintelligence) which varies slightly from AGI (artificial general intelligence) in that it’s all about machines being able to exceed even the most advanced and highly gifted cognitive abilities, according to IBM.” And we also get “OpenAI acknowledges the potential dangers associated with advancing AI to this level, and they continue by making it clear what can be anticipated and what will be needed for this experiment to be a safe success”. So take these statements and consider the simple facts of data verification and data validation; when these parts are missing, any ‘super intelligence’ merely comes across as the village idiot. I can already see the Microsoft Copilot advertisement: “We now offer the Copilot with everyone’s favourite son, the village idiot Clippy II” (OK, I am being mean, I loved my Clippy in the Office 97 days), but I reckon you are now getting clued in to the disaster that is coming?
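The ‘validation and verification’ step I keep pointing at can be made concrete with a tiny sketch. Everything here (the field names, the 1-5 range, the sample records) is my own invented illustration of the idea, not anyone’s actual pipeline:

```python
# Minimal sketch of data validation before training. All field names
# and ranges are illustrative assumptions, not any vendor's schema.

def validate_record(record):
    """Return a list of problems found in a single training record."""
    problems = []
    # Verification: the fields we expect must actually be present.
    for field in ("source", "text", "rating"):
        if field not in record:
            problems.append(f"missing field: {field}")
    # Validation: values must fall inside their allowed domain.
    rating = record.get("rating")
    if rating is not None and rating not in (1, 2, 3, 4, 5):
        problems.append(f"rating out of range: {rating}")
    if not record.get("text", "").strip():
        problems.append("empty text")
    return problems

records = [
    {"source": "survey", "text": "Great product", "rating": 5},
    {"source": "survey", "text": "", "rating": 9},   # out of range, empty
    {"source": "scrape", "rating": 3},               # missing text field
]
clean = [r for r in records if not validate_record(r)]
print(len(clean))  # only the first record survives
```

Skip that gate and the bad records flow straight into whatever gets ‘trained’ on them, which is the whole point of the argument above.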

It isn’t merely the AI bubble, or the American economy, or any of these related settings. It is that they are happening almost at the same time, so a Nasdaq screen where all the firms are shown in deep red, showing a $10 trillion write-off, is not out of the blue. That setting had better be clear to anyone out there. This is merely my point of view and I might be wrong to read the data as I do, but I am not alone, and more people are seeing the fringe of the speculative gold stream showing its Pyrite origins. Have a great day, it is another 2 hours before Vancouver joins us on this Monday. Time for me to consider a nice cup of coffee (my personal drug of choice).

Leave a comment

Filed under Finance, IT, Law, Media, Politics, Science

Labels

That is the setting I introduced the readers to yesterday, but there was more, and there always is. Labels are how we tend to communicate; there is the label of ‘Orange baboon’, there is the label of ‘village idiot’ and there are many more labels. They tend to make life ‘easy’ for us. They are also the hidden trap we introduce to ourselves. In the ‘old’ days we even signified Business Intelligence by this, because it was easy for the people running these things.

An example can be seen in

And we would see the accommodating table with on one side completely agree, agree, neutral, disagree and completely disagree, if that was the 5-point labelling setting we embraced, and as such we saw a ‘decently’ complete picture and we all agreed that this was how it had to be.

But the not so hidden snag is that, in the first, these labels are ordinal (at best) and the setting of Likert scales (their official name) is not set in a scientific way; there is no equally adjusted difference between the numbers 1, 2, 3, 4, 5. That is just the way it is. And in the old days this was OK (as the feeling went). But today, in what some call the AI setting and I call NIP at best, the setting is too dangerous. Now, set this by today’s standards.
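To make the ordinal point concrete, here is a small illustrative sketch (the responses and the alternative recoding are invented): the mean of Likert codes moves when you change the arbitrary spacing of the labels, while an order-based statistic like the median does not care how the five labels are spaced.

```python
# Illustrative only: Likert codes are ordinal, so the distances between
# 1..5 are a coding choice, not a measured quantity.
from statistics import mean, median

responses = [1, 2, 2, 4, 5, 5, 5]   # coded "completely disagree".."completely agree"

# An equally legitimate monotone recoding of the same five labels:
recode = {1: 1, 2: 2, 3: 5, 4: 8, 5: 9}
recoded = [recode[r] for r in responses]

# The mean shifts with the arbitrary spacing of the codes...
print(mean(responses), mean(recoded))

# ...but order-based statistics survive any monotone recoding.
assert median(recoded) == recode[median(responses)]
```

So the moment a pipeline averages Likert codes as if they were measurements, it has already baked an unscientific assumption into the data.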

The simple question “Is America bankrupt?” gets all kinds of answers, and some will quite correctly give us “In contrast, the financial health of the United States is relatively healthy within the context of the total value of U.S. assets. A much different picture appears once one looks at the underlying asset base of the private and public economy.” I tend to disagree, but that is me, without my economic degrees. But in the AI world it is a simple setting of numbers, and America needs Greenland and Canada to continue the pretense that “the United States is relatively healthy within the context of the total value of U.S. assets”; yes, that would be the setting, but without those two places America is likely around bankrupt, and the AI bubble will push them over the edge. At least that is how I see it, and yesterday I gave one case (one of the dozen or so cases that will follow in 2026) in which that startup is basically agreeing to a larger than $2 billion settlement. So in what universe does a startup have this money? That is the constriction of AI, and in that setting of unverified and unscaled data the situation gets worse. And I remember an answer given to me at a presentation; the answer was “It is what it is” and I kinda accepted it, but an AI will go bonkers and wrong in several ways when that is handed to it. And that is where the setting of AI and NIP (Near Intelligent Parsing) becomes clear. NIP is merely a 90’s chess game that has been taught (trained) every chess game possible and it takes from that setting, but the creative intellect does an illogical move and the chess game loses whatever coherency it has; that move was never programmed, and that is where you see the difference between AI and NIP. The AI will creatively adjust its setting, the NIP cannot, and that is what will set the stage for all these class actions.
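The chess analogy can be sketched in a few lines. This is deliberately a toy (real systems interpolate rather than fail outright), but it shows the memorisation failure mode described above: a purely ‘trained’ lookup has no answer for a move it never saw.

```python
# Toy illustration of the chess analogy: a "trained" lookup responder
# knows only the positions it has seen; an unscripted move leaves it
# with no coherent answer. The opening moves are arbitrary examples.

trained_replies = {
    "e4": "e5",   # replies memorised during "training"
    "d4": "d5",
    "c4": "e5",
}

def nip_move(opponent_move):
    """Replay a memorised answer; return None for anything unseen."""
    return trained_replies.get(opponent_move)

print(nip_move("e4"))   # a memorised reply, looks intelligent
print(nip_move("a3"))   # the 'illogical' move was never trained: None
```

The dictionary lookup is the whole ‘intelligence’; nothing in it can adjust to input outside the training set, which is the gap between NIP and actual creative adaptation.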

The second setting is ‘human’ error. You see, I placed the Likert scale intentionally, because in between the multitude of 1-5 scales there is one likely variable that was set to 5-1, and the programmers overlooked it, and now in these AI training grounds at least one variable is set in the wrong direction, tainting the others and messing with the order of the adjusted personal scales. And that is before we get to the results of CLUSTER and QUICKCLUSTER, where a few more issues are introduced to the algorithm of the entire setting, and that is where the verification of data becomes imperative.
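A reverse-coded variable of the kind described can often be caught mechanically: an item scored 5-1 among 1-5 items tends to show a negative correlation with the total of the other items. A minimal sketch, where the 4-item survey data is invented for illustration:

```python
# Sketch: flag a Likert item coded 5-1 instead of 1-5 by its negative
# item-total correlation. Survey data below is invented for illustration.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Rows = respondents, columns = items q1..q4; q4 was accidentally coded 5-1.
data = [
    [5, 4, 5, 1],
    [4, 5, 4, 2],
    [3, 3, 3, 3],
    [2, 1, 2, 4],
    [1, 2, 1, 5],
]

flagged = []
for item in range(4):
    scores = [row[item] for row in data]
    rest = [sum(row) - row[item] for row in data]
    if pearson(scores, rest) < 0:
        flagged.append(f"q{item + 1}")

print(flagged)  # only the reverse-coded item correlates negatively
```

This is exactly the kind of cheap check that should run before any CLUSTER or QUICKCLUSTER pass, because a reversed item silently distorts every distance the clustering computes.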

So here is a sort of random image, but the question it needs to raise is what makes these different sources in any way qualified to be a source? In this case, if the data is skewed in Ask Reddit, 93% of the data is basically useless, and that is missed on a few levels. There are high-quality data sources, but these are few and far between; in the meantime these sources get to warp any other data we have. And if you are merely looking at legacy data, there is still the Likert-scale data your own company had, and that data is debatable at best.
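One way to act on that source-quality point is to gate training records on a per-source quality score before anything else happens. The scores and source names below are invented placeholders, not a real rating scheme:

```python
# Sketch: gate training data on an (invented) per-source quality score
# instead of ingesting every scraped record equally.

SOURCE_QUALITY = {          # illustrative scores, not real measurements
    "peer_reviewed": 0.9,
    "gov_statistics": 0.8,
    "ask_reddit": 0.1,
}

def usable(record, threshold=0.5):
    """Keep a record only if its source clears the quality bar."""
    return SOURCE_QUALITY.get(record["source"], 0.0) >= threshold

corpus = [
    {"source": "peer_reviewed", "text": "..."},
    {"source": "ask_reddit", "text": "..."},
    {"source": "unknown_blog", "text": "..."},
]
kept = [r for r in corpus if usable(r)]
print(len(kept))  # unknown sources default to 0.0 and are dropped
```

Crude as it is, even this would stop the low-quality bulk from warping the rest of the dataset, which is the failure the paragraph describes.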

Labels are dangerous and they are inherently based on the designer of that data source (possibly even long dead), and the design tends to be done in his or her early stages of employment, making the setting even more debatable as it was ‘influenced’ by greedy CEOs and CFOs who had their bonus in mind. A setting mostly ignored by all involved.

As such, are you surprised that I see the AI bubble for what it is? A dangerous reality coming our way in sudden, likely unforeseen ways, and it is the ‘unforeseen way’ that is the danger, because when these disgruntled employees talk to those who want to win a class action, all kinds of data will come to the surface, and that is how these class actions are won.

It was a simple setting I saw coming a mile away, and whilst you wandered by I added the Dr. Strange part; you merely thought you had the labels thought through, but the setting was a lot more dangerous and it is heading straight to your AI dataset. All wrongly thought through, because training data needs to have something verifiable as ‘absolutely true’, and that is the true setting, and to illustrate this we can merely make a stop at Elon Musk Inc., its ‘AI’ Grok having the almost perfect setting. We are given from one source “The bot has generated various controversial responses, including conspiracy theories, antisemitism, and praise of Adolf Hitler, as well as referring to Musk’s views when asked about controversial topics or difficult decisions.” Which is almost a dangerous setting towards people fueling Grok in a multitude of ways, and ‘Hundreds of thousands of Grok chats exposed in Google results’ (at https://www.bbc.com/news/articles/cdrkmk00jy0o) where we see “The appearance of Grok chats in search engine results was first reported by tech industry publication Forbes, which counted more than 370,000 user conversations on Google. Among chat transcripts seen by the BBC were examples of Musk’s chatbot being asked to create a secure password, provide meal plans for weight loss and answer detailed questions about medical conditions.” Is there anybody willing to do the honours of classifying that data (I absolutely refuse to do so)? And I already gave you the headwind in the above story. In the first, how many of these 370,000 users are medical professionals? I think you know where this is going. And I think Grok is pretty neat as a result, but it is not academically useful.

At best it is a new form of Wikipedia, at worst it is a round data system (trashcan), and even though it sounds nice, it is as nice as labels can be, and that is exactly why these class cases will be decided out of court. As I personally see it, when these hit, Microsoft and OpenAI will shell out trillions to settle out of court, because the court damage would be infinitely worse. And that is why I see 2026 as the year the greed-driven get to start filling their pockets, because the mental hurt that is brought to court is as academic as a Likert scale, not a scientific setting among them, and the pre-AI setting of mental harm reads as: “‘Mental damage’ in court refers to psychological injury, such as emotional trauma or psychiatric conditions, that can be the basis for legal claims, either as a plaintiff seeking compensation or as a criminal defendant. In civil cases, plaintiffs may seek damages for mental harm like PTSD, depression, or anxiety if they can prove it was caused by another party’s negligent or wrongful actions, provided it results in a recognizable psychiatric illness.” So as you see it, is this enough or do you want more? Oh, screw that, I need coffee now and I have a busy day ahead, so this is all you get for now.

Have a great day, I am trying to enjoy Thursday; Vancouver is a lot behind me on this effort. So there is a time scale we all have to adhere to (hidden nudge), as such enjoy the day.

Leave a comment

Filed under Finance, IT, Media, Politics, Science

Lost thoughts

This is where I am, lost in thoughts. Torn between my personal conviction that the AI bubble is real and the set of fake thoughts on LinkedIn and YouTube making ‘their’ case on the AI bubble. One is set on thoughts of doubt considering the technology we are currently at, the other thoughts are all fake perceptions by influencers trying to gain a following. So how can anyone get any thought straight? Yet in all this there are several people in doubt on their own (justified) fringes. One of them is the ABC, which gives us ‘US risks AI debt bubble as China faces its ‘arithmetic problem’, leading analysts warn’ (at https://www.abc.net.au/news/2025-11-11/marc-sumerlin-federal-reserve-michael-pettis-china/105992570). So in the first setting, what is the US doing with the AI debt? Didn’t they learn their lesson in 2008? In the first setting we get “Mr Sumerlin says he is increasingly worried about a slowing economy and a debt bubble in the artificial intelligence sector.” That is fair (to a certain degree); a US Federal Reserve chair contender has the economic settings, but as I look back to 2008, that game put hundreds of thousands on the brink of desperation, and now it isn’t a boom of CDOs and stocks. Now it is a dozen firms who will demand an umbrella from that same Federal Reserve to stay in business. And Mr Sumerlin gives us “He is increasingly concerned about a slowdown in the US economy, which is why he thinks the Fed needs to cut interest rates again in December and perhaps a couple more times next year.” I cannot comment on that, but it sounds fair (I lack economic degrees), and outside of this AI bubble setting we are given “US President Donald Trump has recently posted on his social media account about giving all Americans not on high incomes, a $US2,000 tariff “dividend” — an idea which Mr Sumerlin, a one-time economic adviser to former US president George W Bush, said could stoke inflation.” I get it, but it sounds unfair; the idea that an AI bubble is forming is real, the setting that people get a dividend that could stoke inflation might be real (they didn’t get the money yet), but they are unrelated inflation settings, and whilst they could give a much larger rise to the dangers of the AI bubble, that doesn’t make it so. The bubble is already real because the technology is warped, and the class cases we will see coming in 2026 are based on ‘allegedly fraudulent’ sales towards the AI setting, and if you wonder what happens: these firms buying into that AI solution will cry havoc (no return on AI investment) when that happens, and it will happen, of that I have very little doubt.

So then we get to the second setting, and that is the claim that ‘China has an arithmetic problem’. I am at a loss as to what they mean, and the ABC explanation is “But if you have a GDP growth target, and you can’t get consumption to grow more quickly, you can’t allow investment to grow more slowly because together they add up to growth. They’re over-invested almost across the board, so policy consists of trying to find out which sectors are least likely to be harmed by additional over-investment.”

Professor Pettis said that, to curry favour with the central government, local governments had skewed over-investment into areas such as solar panels, batteries, electric vehicles and other industries deemed a priority by Beijing. This kinda makes sense to me, but as I see it, that is an economic setting, not an AI setting. What I think is happening is that both the USA and China have their own bubble settings, and these bubbles will collide in the most unfortunate ways possible.

But there is also a flip side. As I see it, Huawei is chasing its own AI dream in a novel way that relies on a mere fraction of the power the west needs, and as I see it, the west will be coming up short soon, a setting that Huawei is not facing at present; as I see it, they will be rolling out their centers in multiple ways while the western settings will be running out of juice (as the expression goes).

Is this going to happen? I think so, but it depends on a number of settings that have not played out yet, so the fear is partially too soon and based on too little information. But on the side I have been powering my brain through another setting. As time goes, I have been thinking through the third Dr. Strange movie, and here I had the novel idea which could give us a nice setting where the strain is between too rigid and too flexible; it is a (sort of) stage between Dr. Strange (Benedict Cumberbatch) and Baron Mordo (Chiwetel Ejiofor). The idea was to set the given stage of being too rigid (Mordo) against overly flexible (Strange), and in between are the settings of Mordo’s African village, and as Mordo is protecting them we see the optional setting that Kraven (Aaron Taylor-Johnson) gets involved, and that gets Dr. Strange in the mix. The nice setting is that neither is evil; they tend to fight evil and it is the label that gets seen. Anyway, that was a setting I went through this morning.

You might wonder why I mentioned this. You see, bubbles are just as much labels as anything, and it becomes a bubble when asset prices surge rapidly, far exceeding their intrinsic value, often fueled by speculation and investor euphoria. This is followed by a sharp and sudden market crash, or “burst”, when prices collapse, leading to significant, rather weighty losses for investors. And they will then cry like little girls over the losses in their wallets. But that too is a label. Just like an IT bubble, the players tend to be rigid and wholly focused on their profits, and they tend to go with the ‘roll with it’ philosophy, and that is where AI is at present; they don’t care that the technology isn’t ready yet, they do not care about DML and LLM, and they want to program around the AI negativity, but that negativity could be averted in larger streams when proper DML information is given to the customers. And they dug their own graves here, as the customer demands AI; they might not know what it is (but they want it), and they learned in comic books what AI was, and they embrace that. Not the reality given by Alan Turing, but what the comics fed them through Brainiac. And there is an overlap between what is perceived and what is real, and that is what will fuel the AI bubble towards implosion (a massive one), and I personally reckon that 2026 will fuel it through the class actions, and the beginning is already here. As The Conversation hands us “Anthropic, an AI startup founded in 2021, has reached a groundbreaking US$1.5 billion settlement (AU$2.28 billion) in a class-action copyright lawsuit. The case was initiated in 2024 by novelist Andrea Bartz and non-fiction writers Charles Graeber and Kirk Wallace Johnson.” Which we get from ‘An AI startup has agreed to a $2.2 billion copyright settlement. But will Australian writers benefit?’ (at https://theconversation.com/an-ai-startup-has-agreed-to-a-2-2-billion-copyright-settlement-but-will-australian-writers-benefit-264771) less than 6 weeks ago. And the entire AI setting has a few more class actions coming their way. So before you judge me on being crazy (which might be fair too), the news is already out there; the question is what lobbyists are quieting down the noise, because that is noise according to their elected voters. You might wonder how one affects the other. Well, that is a fair question, but it holds water, as these so-called AIs (I call them Near Intelligent Parsing, or NIP) require training materials, and when the materials are thrown out of the stage, there is no learning, and no half-baked AI will hold its own water, and that is what is coming.

A simple setting that could be seen by anyone who looked at the technology to the degree it had to be looked at. Have a great day this midweek day.

Leave a comment

Filed under Finance, IT, movies, Politics, Science

Where the BBC falls short

That is the setting I was confronted with this morning. It revolves around a story (at https://www.bbc.com/news/articles/ce3xgwyywe4o) where we see ‘A predator in your home’: Mothers say chatbots encouraged their sons to kill themselves’, a mere 10 hours ago. Now I get the caution, because even suicide requires investigation and the BBC is not the proper setting for that. But we are given “Ms Garcia tells me in her first UK interview. “And it is much more dangerous because a lot of the times children hide it – so parents don’t know.”

Within ten months, Sewell, 14, was dead. He had taken his own life”, with the added “Ms Garcia and her family discovered a huge cache of messages between Sewell and a chatbot based on Game of Thrones character Daenerys Targaryen. She says the messages were romantic and explicit, and, in her view, caused Sewell’s death by encouraging suicidal thoughts and asking him to “come home to me”.” There is a setting that is of a conflicting nature. Even as we are given “the first parent to sue Character.ai for what she believes is the wrongful death of her son. As well as justice for him, she is desperate for other families to understand the risks of chatbots.” What is missing is that there is no AI; at most it is deep machine learning, and that implies a programmer, what some call an AI engineer. And when we are given “A Character.ai spokesperson told the BBC it “denies the allegations made in that case but otherwise cannot comment on pending litigation””, we are confronted with two streams. The first is that some twisted person took his programming options a little too Eager-Beaverly and created a self-harm algorithm, and that leads to two sides: either they accept that, or they pushed him along to create other options and they are covering for him. CNN on September 17th gave us ‘More families sue Character.AI developer, alleging app played a role in teens’ suicide and suicide attempt’, and it comes with spokesperson “blah blah blah” in the shape of “We invest tremendous resources in our safety program, and have released and continue to evolve safety features, including self-harm resources and features focused on the safety of our minor users. We have launched an entirely distinct under-18 experience with increased protections for teen users as well as a Parental Insights feature,” and it is rubbish, as this required a programmer to release specific algorithms into the mix and no one is mentioning that specific programmer. So is it a much larger premise, or are they all afraid that releasing the algorithms will lay bare a failing which could directly implode the AI bubble? When we consider the CNN setting shown with “screenshots of the conversations, the chatbot “engaged in hypersexual conversations that, in any other circumstance and given Juliana’s age, would have resulted in criminal investigation””, it implies that the AI bubble is about to burst and several players are dead set against that (it would end their careers), and that is merely one of the settings where the BBC fails. The Guardian gave us on October 30th “The chatbot company Character.AI will ban users 18 and under from conversing with its virtual companions beginning in late November after months of legal scrutiny.” It is seen in ‘Character.AI bans users under 18 after being sued over child’s suicide’ (at https://www.theguardian.com/technology/2025/oct/29/character-ai-suicide-children-ban) where we see “His family laid blame for his death at the feet of Character.AI and argued the technology was “dangerous and untested”. Since then, more families have sued Character.AI and made similar allegations. Earlier this month, the Social Media Law Center filed three new lawsuits against the company on behalf of children who have either died by suicide or otherwise allegedly formed dependent relationships with its chatbots” and this gets the simple setting of both “dangerous and untested” and “months of legal scrutiny”, so why did it take months, and why is the programmer responsible for this ‘protected’ by half a dozen media?
I reckon that the media is unsure what to make of the ‘lie’ they are perpetrating; you see, there is no AI, it is Deeper Machine Learning, optionally with an LLM on the side. And those two are programmed. That is the setting they are all veering away from. The fact that these virtual companions are set on a premise of harmful conversations with a hypersexual topic on the side implies that someone is logging these conversations for later (moneymaking) use. And that setting is not one that requires months of legal scrutiny. There is a massive set of harm going towards people, and some are skating the ice to avoid sinking through whilst they are already knee deep in water, hoping the ice will support them a little longer. And there is a lot more at the Social Media Victims Law Center, with a setting going back to January 2025 (at https://socialmediavictims.org/character-ai-lawsuits/) where a Character.AI chatbot was set to be one “who encouraged both self-harm and violence against his family”, and now we learn that this firm is still operating? What kind of idiocy is this? As I personally see it, the founders of Character Technologies should be in jail, or at least arrested on a few charges. I cannot vouch for Google, so that is up in the air, but as I see it, this is a direct result of the AI bubble being fed amiable abilities, even when it results in the harm of people, and particularly children. This is where the BBC is falling short, and they could have done a lot better. At the very least they could have spent a paragraph or two having a conversation with Matthew P. Bergman, founding attorney of the Social Media Victims Law Center. As I see it, the media skating around that organisation is beyond ridiculous.

So when you are all done crying, make sure that you tell the BBC that you are appalled by their actions and that you require the BBC to put attorney Matthew P. Bergman and the Social Media Victims Law Center in the spotlight (tout de suite, please).

That is the setting I am aggravated by this morning. I need coffee, have a great day.

Leave a comment

Filed under IT, Law, Science

The cookie crumbles

I was having a ball this morning. I was alerted to an article that was published 11 hours ago, and that makes all the difference, in particular in the setting of me telling all others “Told you so”. So as we start seeing the crumbling reality of a bubble coming to pass, I get to laugh at the people calling me stupid. You see, Tom’s Hardware is giving us (at https://www.tomshardware.com/tech-industry/artificial-intelligence/microsoft-ceo-says-the-company-doesnt-have-enough-electricity-to-install-all-the-ai-gpus-in-its-inventory-you-may-actually-have-a-bunch-of-chips-sitting-in-inventory-that-i-cant-plug-in) ‘Microsoft CEO says the company doesn’t have enough electricity to install all the AI GPUs in its inventory’, so there I was (with a few critical minds) telling you all that there isn’t enough energy to fuel the setting of these data centers (like Stargate), and now Microsoft (as I personally see it, king of the losers) is confirming this setting. So do you think this (for now) multi-trillion dollar company cannot pay its energy bill, or are they scraping the bottom of the energy well? And when we come to think of that, when the globally placed 200,000 people (not just Microsoft) are laid off and there is no energy to fuel their (alleged) AI drive, how far behind is the recession that ends all recessions in America? It might not be the Great Depression, as that saw nearly 15 million Americans, or 25% of the workforce, unemployed. But the trickle effects are a lot bigger now, and when that much goes overboard, the American social security will take a massive beating.

So as I have been stating this lack of energy for months (at least months), we are given “Microsoft CEO Satya Nadella said during an interview alongside OpenAI CEO Sam Altman that the problem in the AI industry is not an excess supply of compute, but rather a lack of power to accommodate all those GPUs. In fact, Nadella said that the company currently has a problem of not having enough power to plug in some of the AI GPUs the firm has in inventory. He said this on YouTube in response to Brad Gerstner, the host of Bg2 Pod, when asked whether Nadella and Altman agreed with Nvidia CEO Jensen Huang, who said there is no chance of a compute glut in the next two to three years.” Oh, didn’t I say so a few times? Oh, yes. On January 31st 2024 I wrote “When the UAE engages with that solution, America will come up short in funds and energy. So the ‘suddenly’ setting wasn’t there. This has been out in the open for up to 4 years. And that picture goes from bad to worse soon enough.” I did so in ‘Forbes Foreboding Forecast’ (at https://lawlordtobe.com/2024/01/31/forbes-foreboding-forecast/), so there is a record, and the setting of energy shortage was visible over a year ago. I even published a few articles on how Elon Musk (he has the IP) could get into that field in a few ways. You see, either you contribute directly, or you remove the overhead of energy, and Elon Musk was in a perfect stage to do the latter.

So, when your chickens come home to roost and such agrarian settings, it becomes a party and a half. 

And then we get the BS (that stuff that makes grass grow in Texas) setting that follows with ""I think the cycles of demand and supply in this particular case, you can't really predict, right? The point is: what's the secular trend? The secular trend is what Sam (OpenAI CEO) said, which is, at the end of the day, because quite frankly, the biggest issue we are now having is not a compute glut, but it's power — it's sort of the ability to get the builds done fast enough close to power," Satya said in the podcast. "So, if you can't do that, you may actually have a bunch of chips sitting in inventory that I can't plug in. In fact, that is my problem today. It's not a supply issue of chips; it's actually the fact that I don't have warm shells to plug into."" It is utter BS (in my personal view), as I predicted this setting over 639 days ago and I am certain that I am not that much more intelligent than the man who runs Microsoft (aka Satya Nadella), and that is the short and sweet of it. I might be elevated on dopamine at present, but to see Satya admit to the setting I proclaimed for some time shines a rather large light on the upcoming StarGate settings and the rather large need to supply energy to them. It is about to become a whole new ballgame.

And as the cookie crumbles, the tech firms and the media will all point at each other, but as I see it, neither was doing their job. I am willing to throw this on the pile of shortcomings that courtesans have as they cater to digital dollars, but that song has been played a few times over. And I am slightly too tired (and too energised) to entertain it again. I want to play something new and perhaps a new gaming IP might solve that for me today (more likely tomorrow).

A setting we are given, and as we see the admission on Tom's Hardware, some might actually investigate how much energy they are about to come up short on. But don't fret, these tech companies will happily take the energy due to consumers, as they can afford the new prices, which are likely to be over 10% higher than the previous ones. It is the simple setting of supply and demand. They have already fired over 40,000 people (a global expected number), so do you think that they will stop to consider your domestic needs over the bubble they call AI, just to show that they can actually fuel that setting? Gimme a break.

So YouTube has a few videos on surviving life in a setting where there is no energy; if that fails, ask the people in Ukraine. They have been battling that setting for some time.

Time to enjoy my dopamine rush and have a nice walk in the 83-degree Fahrenheit shade. Makes me think about the hidden meaning behind Fahrenheit 451 by Ray Bradbury. Wasn't the hidden setting to stop questioning the reality of things and rely on populism? Isn't that what we see at present? I admit that no books are being burned, but removing them from view is as bad as burning them. Because when the media ignores energy needs, what does that spell in the mind of some? So have a great day and see what you can get that does not require electricity.

Leave a comment

Filed under Finance, IT, Media, Politics, Science

What do bubbles do?

There was a game in the late 80's; I played it on the CBM64. It was called Bubble Bobble. There was a cute little dragon (the player) and the goal of the game was to pop as many bubbles as you can. So, fast forward to today. There were a few news messages. The first one is 'OpenAI's $1 Trillion IPO' (at https://247wallst.com/investing/2025/10/30/openais-1-trillion-ipo/), which I actually saw last of the three. We see ridiculous amounts of money pass by. We are given 'OpenAI valuation hits $762b after new deal with Microsoft' with "The deal refashions the $US500 billion ($758 billion) company as a public benefit corporation that is controlled by a nonprofit with a stake in OpenAI's financial success." We see all kinds of 'news' articles giving these players more and more money. It's like watching a bad hand of Texas Hold'em where everyone is in it with all they have. As the information goes, it sits alongside the sacking of 14,000 employees by Amazon. And they will not see the dangers they are putting the population in. This is not merely speculation, or presumption. It is the deadly serious danger of bubbles bursting, and we are unwittingly the dragon popping them.

So the article gives us "If anyone needs proof that the AI-driven stock market is frothy, it is this $1 trillion figure. In the first half of the year, OpenAI lost $13.5 billion, on revenue of $4.3 billion. It is on track to lose $27 billion for the year. One estimate shows OpenAI will burn $115 billion by 2029. It may not make money until that year." So as I see it, that is a valuation set 4 years into the future, with a market as liquid as it is? No one is looking at what Huawei is doing or whether it can bolster its innovative streak, because when that happens we will get an immediate write-off of no less than $6,000,000,000,000 and it will impact Microsoft (which now owns 27% of OpenAI), and OpenAI will bank on the western world to 'bail' them out, not realising that the actions of President Trump made that impossible and that both the EU and the Commonwealth are ready and willing to listen to Huawei and China. That is the dreaded undertow in this water.
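The quoted figures can be sanity-checked with trivial arithmetic. This is purely illustrative maths on the numbers the article itself reports, with no inside knowledge of OpenAI's books:

```python
# Illustrative arithmetic on the article's reported figures only.
h1_loss_bn = 13.5                    # reported first-half loss, in $ billions
annual_run_rate = h1_loss_bn * 2     # annualised -> matches the "$27 billion" track

# If that burn rate simply stayed flat from 2025 through 2029 inclusive:
years = 5
flat_burn = annual_run_rate * years  # $135 bn, above the quoted $115 bn estimate,
# so the $115 bn estimate implicitly assumes losses shrink toward 2029.
print(annual_run_rate, flat_burn)
```

In other words, even the article's own $115 billion burn estimate is the optimistic case; a flat run rate lands higher.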

All whilst the BBC reports "Under the terms, Microsoft can now pursue artificial general intelligence – sometimes defined as AI that surpasses human intelligence – on its own or with other parties, the companies said. OpenAI also said it was convening an expert panel that will verify any declaration by the company that it has achieved artificial general intelligence. The company did not share who would serve on the panel when approached by the BBC." And there are two issues already hiding under the shallows. The first is data value. You see, data that cannot be verified or validated is useless and has no value, and these AI chasers have been so involved in the settings of the so-called hyped technology that everyone forgets that it requires data. I think that this is a big 'oopsy' in that equation. And the setting we are given is that it is pushed into the background, all whilst it needs a front and centre setting. You see, when the first few class cases are thrown into the brink, lawyers will demand the algorithm and data settings, and that will scuttle these bubbles like ships in the ocean; the turmoil of those waters will burst the bubbles and drown whomever is caught in the wake. And be certain that you realise that lawyers on a global setting are at this moment gearing up for that first case, because it will give them billions in class actions, and leave it to greed to cut this issue down to size. Microsoft and OpenAI will banter, cry and serve them scapegoats for lunch, but they will be out in front and they will be cut down to size. As will Google, and optionally Amazon and IBM too.
I already found a few issues in Google's setting (actors staged into a movie before they were born is my favourite one) and that is merely the tip of the iceberg; it will be bigger than the one that sank the Titanic and it is heading straight for the Good Ship Lollipop (AI). The spectacle will be quite a sight, and all the media will hurry to get their pound of flesh, and Microsoft will be massively exposed at that point (due to previous actions).

A setting that is going to hit everyone. And the second setting is blatantly ignored by the media. You see, these data centers, how are they powered? As I see it, the Stargate program will require (by my admittedly inaccurate multi-gigawatt estimate) a massive amount of power. The people in West Virginia are already complaining about what is there now, and a multiplying factor will be added all over the USA; the UAE and a few other places will see them coming, and these power settings are blatantly short. The UAE is likely close to par, and that sets the dangers of shortcomings. And what happens to any data center that doesn't get enough power? Yup, you guessed it, it goes down in a hurry. So how is that fictive setting of AI dealing with this?

Then we get a new instance (at https://cyberpress.org/new-agent-aware-cloaking-technique-exploits-openai-chatgpt-atlas-browser-to-serve-fake-content/) where we are given 'New Agent-Aware Cloaking Technique Exploits OpenAI ChatGPT Atlas Browser to Serve Fake Content'. As I personally see it, I never considered that part, but in this day and age the need to serve fake content is as important as anything; it serves the millions of trolls and the influencers in many ways, and it degrades the data that is shown to the DML and LLM's (aka NIP) in a hurry, reducing data credibility and other settings pretty much off the bat.

So what is being done about that? As we are given "The vulnerability, termed "agent-aware cloaking," allows attackers to serve different webpage versions to AI crawlers like OpenAI's Atlas, ChatGPT, and Perplexity while displaying legitimate content to regular users. This technique represents a significant evolution of traditional cloaking attacks, weaponizing the trust that AI systems place in web-retrieved data." So where does the internet go after that? So far I have been able to get the goods with the Google browser and it does a fine job, but even that setting comes under scrutiny; until they set a parameter in their browser to only look at Google data, they are in danger of floating rubbish around any given corner.
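The mechanism behind agent-aware cloaking is disturbingly simple: the server looks at who is asking and answers differently. A minimal sketch, assuming nothing beyond standard user-agent inspection (the marker strings are real published crawler names, but the page bodies and the exact detection logic of any actual attack are hypothetical):

```python
# Minimal sketch of user-agent based cloaking (illustrative only).
# A malicious site branches on the requester's User-Agent string:
# AI crawlers get planted content, humans get the legitimate page.

AI_CRAWLER_MARKERS = ("GPTBot", "ChatGPT-User", "OAI-SearchBot", "PerplexityBot")

def serve(user_agent: str) -> str:
    """Return a different page depending on who appears to be asking."""
    if any(marker in user_agent for marker in AI_CRAWLER_MARKERS):
        return "<html>Planted content meant only for AI crawlers</html>"
    return "<html>Legitimate content shown to human visitors</html>"

print(serve("Mozilla/5.0 GPTBot/1.0"))     # the crawler sees the planted version
print(serve("Mozilla/5.0 Firefox/133.0"))  # a human sees the real page
```

Because the AI system ingests the crawler's view and the human never sees it, the poisoned version goes straight into the model's "web-retrieved data" unchallenged, which is exactly the trust the article says is being weaponised.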

A setting that is now out in the open. And as we are 'supposed' to trust Microsoft and OpenAI until 2029, we are handed an empty eggshell, and I am in doubt of it all, as too many players have 'dissed' Huawei and they are out there, ready to show the world how it could be done. If they succeed, that 1 trillion IPO is left in the dirt and we get another two years of Microsoft spin on how they can counter it. I put that in the same collection box as the claim that Microsoft had its own, more powerful item that could counter Unreal Engine 5. That collection box is in the kitchen and it is referred to as the trashcan.

Yes, this bubble is going 'bang' without any noise, because the partners with vested interests need to get their money out before it is too late. And the rest? As I personally see it, the rest is screwed. Have a great day; the weekend has started for me and it will start in 8 hours in Vancouver (but they can start happy hour in about one hour), so they can begin the weekend early. Have a great one and watch out for the bubbles out there.

1 Comment

Filed under Finance, IT, Law, Media, Politics, Science

AI at whose footstep?

That is the setting I saw mere hours ago. Should you think it is all a 'fab', you might be right. I haven't been able to verify this, but the setting is too large to explain in mere thoughts. You see, the story starts with 'The US Should Reconsider Its AI Chips Deal With The UAE' (at https://www.eurasiareview.com/22102025-the-us-should-reconsider-its-ai-chips-deal-with-the-uae-oped/) where we are given "In October 2025, the U.S. government granted Nvidia the export license to ship tens of billions of dollars of cutting-edge AI GPUs to the UAE, the deal was finally agreed upon after long debate about its impact on the U.S national security, because of the fear that these chips could be leaked to China, and was also surrounded by a controversy of the UAE using its financial networks to influence Trump to move on with it." I personally think it is a silly setting, but who am I? But that wasn't the whole story; it is 'enhanced' with "Given the UAE's poor human rights record and its destabilizing role in the Middle East, it poses serious risks when providing it with this powerful technology. It's morally imperative for the U.S. to reconsider this deal and place limits on it to ensure it will not be utilized to harm innocent people." Huh? Poor human rights? On what evidence? The UAE is one of the safest countries in the world. Tourism is at an all-time high and crime is at an all-time low. We are given these settings because there are accusations relating to Sudan as per 2023, and at present no evidence has been given; the media seems to love the HR records, but it is nearly always devoid of factual evidence.

Yet the overwhelming abuse (by America) is shown with "While the deal makes it clear that these chips will not be handed to the UAE but will be operated by U.S. companies that have data center in the country, the U.S. should still ensure that this deal—aimed at helping the UAE establish the largest AI campus outside the United States—does not contribute to further human rights violations or war crimes. To prevent misuse, the agreement should include binding conditions prohibiting the use of U.S.-supplied chips in developing AI systems or military technologies for unlawful or unethical purposes, and in particular, blocking the reach of this technology to the UAE's allied militias. Furthermore, an independent oversight mechanism is urgently needed to monitor compliance and hold the UAE accountable to these standards." I have a problem with "to further human rights violations or war crimes", so what EXACTLY does America think it is doing? As I see it, America is setting up data centers in the UAE, letting the UAE pay for them whilst they are American 'Data Forts', so at what point will people consider that America is selling the UAE an Edsel? And what about that (so-called) "independent oversight mechanism is urgently needed to monitor compliance and hold the UAE accountable to these standards"? There is something amiss in this equation and I am not sure if I can stomach such activities (especially as America is currently trying to annex Canada). Then there is the deployment of national guards all over America, as well as deploying ICE like bank robbers going at their own population. So where is the Human Rights Watch in this setting?

So as I see it, the following passage should be read 'differently': "AI chips are considered essential hardware for training AI models and conducting research in the field of AI. Previously, the U.S. adopted the AI diffusion rule, balancing national security and human rights, and placed strong restrictions on exporting chips to countries with poor human rights records. This rule, which was previously rescinded, is not included in the recently issued America First AI action plan." As I personally see it, the setting of "AI chips are considered essential hardware for training AI models" is a truth, but the larger setting is that this so-called training data requires verification, and at what point is this data 'accidentally' transported to American grounds? As I see it, this UAE data is the property of the UAE, optionally set in UAE population or economic data. So what assurances does the UAE have that this data remains in the UAE? So whilst the UAE pays for it all, American corporations grow and handle more and more foreign data? No wonder Microsoft wants in (a speculative jab), and at present I see no handles on keeping the UAE data safe in the UAE; the setting of "the Abu Dhabi-based sovereign wealth fund with over $280 billion, and G42, the AI hub founded in 2018, owned and chaired by the National Security Advisor of the UAE, Tahnoun bin Zayed Al Nahyan, who is also its controlling shareholder" does not inspire confidence. This is not in any way a reflection on Tahnoun bin Zayed Al Nahyan, but does he realise that the UAE data is the real treasure that America is speculatively after?

As I personally see it, the Human Rights part was part of the deception to put people on the defensive, and it has no bearing on the deal. There is even a 'reference' to a story in the Africa Report, and whilst we might take it seriously (you shouldn't), the reference that "a private security firm based in the United Arab Emirates" rests on a simple setting pointing towards a passport stamp. Is that the foundation of this Mohamed Suliman? He might have an engineering degree from the University of Khartoum, but the setting of evidence is, as I (personally) see it, rather alien to him. I blew that part apart in under 10 minutes, and what does matter is that there are questions on what the UAE is allowing for, and the fear of the stage of 'leaked to China' is merely limited to the way America is conducting business. It should have China howling with laughter, as it basically shows how desperate America has become. Just a small setting that is overlooked here.

As I personally see it, if it was about the UAE then the story would have reflected on how this IT dealer by the name of Larry Ellison (Oracle) had come to the UAE, taking Tahnoun bin Zayed Al Nahyan on a personal tour of his AI Rolls Royce at 100 Milverton Drive, Mississauga (an assumed location where it could be held). Did this happen? The story does not show this, nor does it show what AI settings were shown (a prerequisite any AI engineer would cherish), none of that. A mere dubious human rights setting, a setting that might have been left to a non-engineer.

So whilst we like to mull over the stage of "could readily be transferred to support its regional allies and militias to wage more wars and massacres", all whilst China is already decades ahead of others, it could not be served with evidence, merely assumptions. So did I give you enough food for thought? So what does this story serve? As I see it, a lot of references without evidence of the level it might require. The only thing I see is "operated by U.S. companies that have data center in the country", so at what point are the needs of the government of the UAE being served? Especially as it is handed to us with the $280 billion price tag, but how much of this setting is actually charged to the UAE? Even that is missing, so what are we supposed to think?

Have a great day and consider that American coffee is optionally served in the UAE with a massive markup.

Leave a comment

Filed under Finance, IT, Politics, Science

The start of something bad

That is how I saw the news (at https://www.khaleejtimes.com/business/tech/dubais-10000-ai-firms-goal-to-redefine-competitiveness-power-uaes-startup-vision) with the headline 'Dubai's 10,000 AI-firms goal to redefine competitiveness, power UAE's startup vision'. There is always a risk when you start a new startup, but the drive towards something that doesn't even exist is downright folly (as I see it), and now it is driven to a 10,000-times setting of folly. That is what I perceive. But let's go through the setting to explain what I am seeing.

First there is the novel setting and it is one that needs explaining. You see, AI doesn't yet exist; even what we have now is merely DML (Deeper Machine Learning), at times accompanied by LLMs (Large Language Models), and these solutions can actually be great, but the foundations of AI are not yet met, and take it from me, it matters. Actually, never take my word for it, so let's throw some settings at you. First there is 'Deloitte to pay money back to Albanese government after using AI in $440,000 report' and then we get to 'Lawyer caught using AI-generated false citations in court case penalised in Australian first' (the source for both is the Guardian). There is something behind this. The setting of verification is adamant in both. You see, whatever we now call AI isn't it, and whatever data is thrown at it is taken almost literally at face value. Data verification is overlooked at nearly every corner, and then we get to Microsoft with its 'support' of builder.ai, with the mention that it was good. It lasted less than a month and the 'backing' of a billion dollars went away like snow in a heatwave. They used 700 engineers to do what could not be done (as I personally see it). So we have these settings that are already out there.

Then (two weeks ago) the Guardian gives us (at https://www.theguardian.com/business/2025/oct/08/bank-of-england-warns-of-growing-risk-that-ai-bubble-could-burst) 'Bank of England warns of growing risk that AI bubble could burst' with the byline "Possibility of 'sharp market correction has increased', says Bank's financial policy committee". Now consider this setting with the valuation of 10,000 firms getting a rather large 'market correction', and I think that this happens when it is least opportune for the UAE. This takes me to the old expression we had in the 80's: "You can lose your money in three ways; first there are women, which is the prettiest way to lose your money, then through gambling, which is the quickest way to lose your money, and the third way is through IT, which is the surest way to lose your money", and now I would like to add "the fourth way is AI, which is both quick and sure to lose your money". That is the prefix to the equation. And the setting we aren't given is set out in several pieces all over the place. One of them was given to us in ABC News (at https://www.abc.net.au/news/2025-10-20/ai-crypto-bubbles-speculative-mania/105884508) with 'If AI and crypto aren't bubbles, we could be in big trouble' where we see "What if the trillions of dollars placed on those bets turn out to be good investments? The disruption will be epic, and terrible. A lot of speculative manias are just fun for a while and then the last in lose their shirts, not much harm done, like the tulips of 1635, and the comic book and silver bubbles of the late 1980s. Sometimes the losses are so great that banks go broke as well, which leads to a frozen financial system, recession and unemployment, as in 1929 and 2008." As I personally see it, America is going all in as they are already beyond broke, so they have nothing to lose, but the UAE and Saudi Arabia have plenty to lose, and the America-firsters are happy to squander whatever these two have.
I reckon that Oracle has its fallback position, so it is largely off the hook, but OpenAI is willing to chance it all. And that is the American portfolio: Microsoft and a few others. They are playing bluff with, as I see it, the wrong players, and when others ignore the warnings of the Bank of England they merely get what is coming to them; it is a game I do not approve of, because it is based on the bluff that gets us 'we are too big to fail', and I do not agree, but they will say that it is all based on retirement numbers and other 'needly' things. This is why America needs Canada to become the 51st state so desperately; they are (as I personally see it) ready to use whatever troll army they have to smear Canada. But I am not having it, and as I see "Dubai's bold target to attract 10,000 artificial intelligence firms by 2030 is evolving from vision to execution, signaling a new phase in the emirate's transformation into a global technology powerhouse. As a follow-up to earlier announcements positioning the UAE as the "Startup Capital of the World," recent developments in AI infrastructure, capital inflows, and global partnerships show how this goal is being operationalised — potentially reshaping Dubai's economic structure and reinforcing its competitive edge in the global digital economy." I believe that those behind this have the best interests of the Emirati at heart, but I do not trust the people behind this drive (outside of the UAE). I believe that this bubble will burst after the funds are met with smiles, only for these people to go out of business with a bulky severance check. It is almost like the role Ryan Gosling played in The Big Short, where Jared Vennett receives a bonus of $47 million for profits made on his CDSs. It feels almost too alike. And I feel I have to speak up. Now, if someone can sink my logic, I am fine with that, but let those running towards this future verify whatever they have and not merely accept what is said.
I am happy to be wrong, but the setting feels off (by a lot) and I would rather be wrong than be silent on this, because as I see it, when there is a 'market correction' of $2,000,000,000,000 you can consider yourself sold down the river, because there is a cost to such a correction and it should land 100% on the American shores and 0% on the Arabic, Commonwealth or European shores. But that is merely my short-sighted view on the matter.

So when we get to "Omar Sultan Al Olama, Minister of State for Artificial Intelligence, Digital Economy, and Remote Work Applications, said the goal reflects the UAE's determination to lead globally in frontier technology. "Dubai's target to attract 10,000 AI companies over the next five years is not a dream — it is a commitment to building the world's most dynamic and future-ready digital economy," he said. "We already host more than 1,500 pure AI companies — the highest number in the region — but this is just the beginning. Our strategy is to bring in creators and producers of technology, not just users. That's how we sustain competitiveness and shape the industries of tomorrow."" I am slightly worried, because there is an impact on these 1,500 companies. Now, be warned, there are plenty of great applications of DML and LLM and these firms should be protected. But the setting of 10,000 AI companies worries me, as AI doesn't yet exist and the stage for Agentic programming is clear and certain. I would like to rephrase this into "We should keep a clear line of achievements in what is referred to as AI and what AI companies are supposed to see as clear achievements". This requires explanation, as I see whatever is called AI as NIP (Near Intelligent Parsing), and that is currently the impact of DML and LLM; I have seen several good projects, but that is set onto a stage that has a definite pipeline of achievements and interested parties. And for the most part, the threshold is a curve of verifiable data. That data is scrutinised to a larger degree and tends to be (at times) based on the first legacy data. It still requires cleaning, but to a smaller degree than data that comes from wherever.

So do not be dissuaded from your plans to enter the AI field, but be clear about what it is based on, and particularly the data that is being used. So have a great day; as we get to my lunch time, there is ample space for that now. Enjoy your day.

1 Comment

Filed under Finance, IT, Politics, Science

Just like Soap

Perhaps you remember the 80's series Soap. Someone made a sitcom of the most hilarious settings and took it up a notch; people loved it, it did nearly everything right, but over time this bubble went, just like all the other soap bubbles tend to go, and that is OK, they made their mark and we felt fine. There is another bubble. It is not as good. There is the mortgage bubble, the housing bubble (they were not the same), the economy bubble, and all these bubbles come with an aftermath. Now we see the AI bubble, and I predicted this as early as January 29th of this year in 'And the bubble said 'Bang'' (at https://lawlordtobe.com/2025/01/29/and-the-bubble-said-bang/), and my setting is that AI does not yet exist; as I saw it, for the most part it is the construct of lazy salespeople who couldn't be bothered to do their work, created the AI 'fab' and hauled it over to fit their needs. Let's be clear: there is no AI, and when I use the term I know that 'the best' I am doing is avoiding a long discussion about how great DML and LLM are, because they are, and it is amazing. And when these settings are correctly used, they will create millions if not billions in revenue. I got the idea to overhaul the Amazon system and let them optionally create online panels that could bank them billions, which I did in 'Under Conceptual Construction' (at https://lawlordtobe.com/2025/10/10/under-conceptual-construction/) and 'Prolonging the idea' (at https://lawlordtobe.com/2025/10/12/prolonging-the-idea/), which I wrote yesterday (almost 16 hours ago). I also gave light to an amazing lost and found idea which would cater to the needs of airports and bus terminals. I saw that presentation and it was an amazing setting in what I still call NIP (Near Intelligent Parsing), in 'That one idea' (at https://lawlordtobe.com/2025/09/26/that-one-idea/). These are mere settings and they could be market changers. This is the proper use of IT in the next setting of automation.
But the underlying bubble still exists; I merely don't feed that beast. So when the BBC gave us all ''It's going to be really bad': Fears over AI bubble bursting grow in Silicon Valley' almost 2 days ago (at https://www.bbc.com/news/articles/cz69qy760weo), I saw the sparkly setting of soap bubbles erupt and I thought 'That did not take long'. My setting was that AI (the real AI as Alan Turing saw it) was not ready yet; the small setting that at least three parts in IT did not yet exist. There is the true power of quantum computing, and as I see it quantum computers are real, but they are in the early stages of development and not yet as powerful as future versions should be; so as IBM rolls out their second system on the IBM Heron platform, we are getting there. It is called IBM's 156-qubit IBM Quantum Heron; just don't get your hopes up, not too many can afford that platform. IBM keeps it modest and gives us that "The computer, called Starling, is set to launch by 2029. The quantum computer will reside in IBM's new quantum data center in upstate New York and is expected to perform 20,000 more operations than today's quantum computers". I am not holding my credit card to account for that beauty. If at all possible, the only two people on the planet that can afford that setting are Elon Musk and Larry Ellison, and Larry might buy it to see Oracle power at actual quantum speed; he will do it to see quantum speed come to him in his lifetime. The man is 81 after all (so, he is no longer a teenager). If I had that kind of money (250,000 million) I would do it too, just to see what this world has achieved. But the article (the BBC one) gives us ""I know it's tempting to write the bubble story," Mr Altman told me as he sat flanked by his top lieutenants. "In fact, there are many parts of AI that I think are kind of bubbly right now."

In Silicon Valley, the debate over whether AI companies are overvalued has taken on a new urgency. Skeptics are privately – and some now publicly – asking whether the rapid rise in the value of AI tech companies may be, at least in part, the result of what they call "financial engineering"." And the BBC is not wrong; we had a write-off in January of a trillion dollars, and a few days ago another one of 1.5 trillion dollars. I would be willing to call that 'financial engineering'. And that rapid rise? Call it the greedy need of salespeople getting their audience into a frenzy.

I merely gave a few examples of what DML and LLM could achieve, and getting a lost and found department from weeks down to minutes is quite the achievement; I reckon that places like JFK, Heathrow and Dubai Airport would jump at the chance to arrange a better lost and found department, and they are not alone. But one has to wonder how the market can write off trillions in merely two events. So when we get to

He is not wrong. Consider the next one amounting to a speculated two trillion (or $2,000,000,000,000); when it hits, it could wipe out the retirement savings of nearly everyone for years. So how do you feel about your retirement being written off for decades? When you are 80+ and you have millions upon millions you are just fine, and that is merely 2-5 people; the other 8,200,000,000 people? The young will be fine, and over 4 billion will be too young to care about their retirement, but the rest? Good luck, I say.

So what will happen to Stargate ($500B) when that bubble goes? I already see it as a failure, as the required power settings will not be able to fuel this, apart from the need for hundreds of validators, and their systems require power too. Then we see Microsoft thinking (and telling us) it is the next big thing, all whilst the basic settings aren't out yet. Did anyone see the need for shallow circuits? Or the applied versions of Leon Lederman? No one realises that he held the foundational setting of AI in quantum computing. You see (as I personally see it), AI cannot really work in binary technology; it requires a trinary setting, a simple stage of True, False and Both. It would allow for trinary settings, because it isn't always True or False; we learn that the hard way, but in IT we accept it. That setting will come to blows when we get to the real AI part of it, and that is why I (in part) question the AI coffee being served in all places. And I like my sarcasm really hot (with two raw sugar and full cream milk).
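The True/False/Both idea resembles what logicians call three-valued (Kleene-style) logic, where a middle value survives operations unless the other operand settles the question. A minimal sketch of that idea, with names of my own choosing (`Tri`, `BOTH` mirror the post's wording, not any standard library):

```python
# Sketch of the "True, False, Both" trinary setting as three-valued logic.
from enum import Enum

class Tri(Enum):
    FALSE = 0
    BOTH = 1   # the middle value: neither settled true nor settled false
    TRUE = 2

def tri_and(a: Tri, b: Tri) -> Tri:
    return Tri(min(a.value, b.value))  # AND takes the weaker claim

def tri_or(a: Tri, b: Tri) -> Tri:
    return Tri(max(a.value, b.value))  # OR takes the stronger claim

def tri_not(a: Tri) -> Tri:
    return Tri(2 - a.value)            # NOT leaves the middle value fixed

# BOTH propagates unless the other operand decides the outcome:
print(tri_and(Tri.TRUE, Tri.BOTH))   # -> Tri.BOTH (truth not yet settled)
print(tri_or(Tri.FALSE, Tri.BOTH))   # -> Tri.BOTH
print(tri_or(Tri.TRUE, Tri.BOTH))    # -> Tri.TRUE (TRUE settles an OR)
```

Note this only models the logic on ordinary binary hardware; it does not claim anything about quantum implementations.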

That is the setting we face, and whilst some will call the BBC article ‘doom speak’, I see it for what it is: a reminder that the AI frenzy is sales driven. Whilst people are eager to forget the simplest setting, the real deal of Microsoft and Builder.AI is simply that at present we are confronted with IT engineers making the decisions for us. Consider the amount of class actions coming to the world in 2027 and 2028 (optionally as early as 2026); some cases were being drawn out even yesterday (see https://authorsguild.org/news/ai-class-action-lawsuits/ for details). You need to realise that this bubble was orchestrated, and as such I like the term ‘Financial Engineering’. So be good and use the NIP setting properly, and feel free to be creative. I was, and gave Amazon an idea that could bank it billions. But not all ideas are golden and I am willing to concede that I am not the carrier of golden ideas; the fact that someone else saw the Lost and Found setting is proof of that.

Have a great day, I am 30 minutes from breakfast now, so off I go to brekkyville.


Filed under Finance, IT, Media, Science

The dams are cracking

Yes, that is the setting I saw coming, but there is always ‘space’ for interpretation, and at present two stories seem to illustrate this. The first one is given by the BBC (at https://www.bbc.com/news/articles/cly17834524o0), where we see ‘Tech billionaires seem to be doom prepping. Should we all be worried?’ It is a question to have, but what does the article ‘bear’ out? It is not that basic or simple. First we are given “Mark Zuckerberg is said to have started work on Koolau Ranch, his sprawling 1,400-acre compound on the Hawaiian island of Kauai, as far back as 2014.” So, he had 11 years? Seems like overly long ‘doom prepping’ to me (is this sarcasm or satire?). The additional setting is “The underground space spanning some 5,000 square feet is, he explained, “just like a little shelter, it’s like a basement””, which seems like the average floor of a mall to me. I think that when a ‘basement’ extends well beyond 1,000 sq ft, we can ignore the ‘basement’ label, and whatever it is, it is his to do with as he pleases. He might be buying up vats of wine or Cognac; whatever it is, it will be his setting. Then we are given “his decision to buy 11 properties in the Crescent Park neighbourhood of Palo Alto in California, apparently adding a 7,000 square feet underground space beneath.” So here again we get the media ‘speculating’ for the setting of a story. He might have bought the 11 properties, but what happened to them? What evidence is there? He could have bought them for his nearest and dearest. There are many options. Then more ‘famous’ names and locations like New Zealand come up. Yet about halfway through we get a clarion call (as the expression goes). We are given “Neil Lawrence is a professor of machine learning at Cambridge University. To him, this whole debate in itself is nonsense. “The notion of Artificial General Intelligence is as absurd as the notion of an ‘Artificial General Vehicle’,” he argues. “The right vehicle is dependent on the context.
I used an Airbus A350 to fly to Kenya, I use a car to get to the university each day, I walk to the cafeteria… There’s no vehicle that could ever do all of this.” For him, talk about AGI is a distraction.” And as far as I can tell, I feel like Neil Lawrence does, with an addendum. At the very end we are given ““LLMs also do not have meta-cognition, which means they don’t quite know what they know. Humans seem to have an introspective capacity, sometimes referred to as consciousness, that allows them to know what they know.” It is a fundamental part of human intelligence – and one that is yet to be replicated in a lab.” And it is part of what I have been saying all along. We get the larger setting from a second source. It is SBS (at https://www.sbs.com.au/news/article/australians-living-in-america-anxiety/p88o60wos) that gives us ‘Saving money and packing ‘go bags’: How Australians in the US are preparing for the worst’, where we see “But she says the attitude towards foreign nationals under the current administration has made life in the US feel “scary”. Kate says these fears were brought to the surface during her green card interview. “They grilled me in the interview and asked me questions not even related to our marriage but about my previous visa and time in the US,” she says.” As well as “Many Australians living in the US are reporting experiencing high levels of anxiety and feelings of instability due to the possibility of rapid political change under US President Donald Trump.”

These are the settings that matter. In the first, the BBC article is giving the ‘doom lecture’, but that is not the setting. When AI collapses like a near empty shell, people will all be scrambling for their incomes and playing the blame game. As we were given ‘Wall Street crashes after Trump announces 100% tariffs on China; $1.5 trillion wiped out’, consider what happens when all these AI ‘vendors’ fall flat: the damage will be more than 10 times worse, America loses $15 trillion. Can you even fathom that kind of loss? That will be the resounding implosion that leads to civil war when 90% of 340 million people lose whatever they had, retirements wiped out, other savings gone; they will get angry. President Trump will have to run for his life to Air Force One as quickly as his legs can carry him. Fleeing to Russia or anyone that will have him? His billions will be mostly gone, if not already abroad. Those who bought large mansions outside of the US are likely safe for two generations in France, Monaco, the UAE, Bermuda, New Zealand, you name it; some will get away, and this is the setting we see. I reckon that people in California will need high walls to keep others out, optionally armed defenses as well.

Foreigners are now seeing the scary reality they signed on for, and they are getting a ‘go bag’ ready so they can flee to wherever they can as quickly as they can. Is this doom speak?

That is a valid question. You see, the AI setting is merely one part; President Trump soured the waters on tourism, which is down in many ways, and no reflective view is given by anyone in the media. That amount of bad news they likely find ‘irresponsible’, and the media has no business using that excuse, as they have been one of the most irresponsible parties ever. Then there is foreign retail. Canada pulled American alcoholic beverages from its shelves. How much is that costing? One source (Source: Global News) gives us a decline of 85%; how much does that amount to? These three settings make a recession almost a certainty, and there are a lot more declines in the papers, but the media will not give you the proper numbers, just several sources all giving different, partially overlapping figures. As such, the economic dams of America are cracking. They will lose a massive amount of revenue, and while some will give some of the numbers, most of us aren’t given the full view. I have some of it, as I have been keeping an eye on some of the numbers, but even I do not have the full view. So whilst some give us “The sell-off erased more than USD 1.5 trillion in market value from US stocks. Meanwhile, the cryptocurrency market faced record liquidations of USD 19 billion. This is the largest single-day figure ever recorded.”, the part no one talks about is where the billionaires stand. We see the wins of Elon Musk and Larry Ellison, but where are the other billionaires? How are they doing? And then there is that disjointed Microsoft view.

Why the Windows maker?
That is a fair question. You see, they were all ‘heralding’ how good they were doing, but the shimmer in the shadows is different. We are given “Microsoft is currently losing money on AI development, having spent an estimated $19 billion in one quarter on AI infrastructure, with no significant revenue from it yet. The company also experienced a reported loss of $300 million in Call of Duty sales due to the Game Pass subscription model”, all whilst Activision and Bethesda were bought for over $100,000,000,000, and that debt comes with interest. They might be ‘offloading’ staff (over 9,000 according to some numbers), and whilst they and Adecco (firing into the thousands) are all set on AI, there is a hidden snag. When this falls short they will face a setting that is a lot more dangerous: people will not consider them in the future. So when the non-existent AI is held against the need for engineers, it falls flat, and when there is no one around (an exaggeration) to program your LLM, consider where your firm will be. ZDNet gave us “Microsoft’s CEO loves to talk about ’empathy.’ But everything that is coming out of Redmond these days is perilously close to turning the company into the Borg.” Basically a non-existent setting of people who cannot live in a vacuum, and that is an additional side I never saw coming. I was focussed on Microsoft turning into an empty shell, and when the substance is gone, the shell collapses. That is what I saw in Microsoft Games and Microsoft Office. It started in 2012, when their service divisions were no longer up to scratch, and when support goes, so does sales. When we consider the over $100 billion for two companies, whilst they weren’t making enough to afford even the interest on that, the picture of failure starts to evolve into a nightmare setting, and sacking 9,000 people will not save it. They are telling us now that AI is the future, but at present it does not exist, and what does exist requires engineers (remember Builder dot AI?).
It is a fictive setting that is showing up all over America, and the ‘imported’ people are seeing the cracks evolve and want out as fast as they can. That is good news for Aramco and ADNOC, as they now get the pick of the litter, but for America it is bad news. So there is no doom speak; it is the returning story of a country that thinks it is too big to go bankrupt. I heard that story before (SNS Bank for one), then a few more banks, and they are all part of something else. And America? Parts of America could be added to Canada, and Mexico would be relieved to get Texas (the latter part is speculation), and that is the dangerous reality that others are facing. The question is what it takes to turn this around, and whilst Wall Street is in denial, others, those who can afford it, will be making a new household outside of American clutches (like the non-tax countries mentioned earlier). Saudi Arabia also becomes an option, but that is reserved for the chosen few (and American Muslims, of course).

So am I delusional or do I have a point? I reckon that one of the larger issues (still unfolding) is how America deals with Alex Jones. Because if he gets his ‘blockage’, Americans will go insane; they will not accept that this conspiracy theorist is allowed to keep his fortune after he went after murdered children, claiming they were actors. I wonder where that will go, because as I see it, it will be the spark that sets America on fire. At that point all bets are off, and I reckon that most ‘New Americans’ will run to the nearest airport. This might merely be my speculation, and optionally a wrong one, but that is how I see it.

Beyond that, there are the losses that America is taking, and when all the numbers come out, the second stage is reached: whoever thought they had a retirement will try to collect on whatever is possible.

It is a hard setting and I hope I am wrong, because this collapse will fall over Japan and Europe soon thereafter. Connected currencies will take a massive tumble.

Have a great day, if that is presently at all possible. 


Filed under Finance, Gaming, IT, Media, Politics, Science, Tourism