Tag Archives: LLM

The start of something bad

That is how I saw the news (at https://www.khaleejtimes.com/business/tech/dubais-10000-ai-firms-goal-to-redefine-competitiveness-power-uaes-startup-vision) with the headline ‘Dubai’s 10,000 AI-firms goal to redefine competitiveness, power UAE’s startup vision’. There is always a risk when you start a new startup, but driving towards something that doesn’t even exist is downright folly (as I see it), and now it is driven to a 10,000-fold setting of folly. That is what I perceive. But let’s go through the setting to explain what I am seeing.

First there is the novel setting and it is one that needs explaining. You see, AI doesn’t yet exist; even what we have now is merely DML (Deeper Machine Learning), at times accompanied by LLM (Large Language Models), and these solutions can actually be great, but the foundations of AI are not yet met and take it from me, it matters. Actually, never take my word for it, so let’s throw some settings at you. First there is ‘Deloitte to pay money back to Albanese government after using AI in $440,000 report’ and then we get to ‘Lawyer caught using AI-generated false citations in court case penalised in Australian first’ (the source for both is the Guardian). There is something behind this. The setting of verification is adamant in both. You see, whatever we now call AI isn’t it, and whatever data is thrown at it is taken almost literally at face value. Data verification is overlooked at nearly every corner, and then we get to Microsoft with its ‘support’ of Builder.ai, with the mention that it was good. It lasted less than a month and the ‘backing’ of a billion dollars went away like snow in a heatwave. They used 700 engineers to do what could not be done (as I personally see it). So we have these settings that are already out there.

Then (two weeks ago) the Guardian gives us (at https://www.theguardian.com/business/2025/oct/08/bank-of-england-warns-of-growing-risk-that-ai-bubble-could-burst) ‘Bank of England warns of growing risk that AI bubble could burst’ with the byline “Possibility of ‘sharp market correction has increased’, says Bank’s financial policy committee”. Now consider this setting against the valuation of 10,000 firms getting a rather large ‘market correction’, and I think that this happens when it is least opportune for the UAE. This takes me to the old expression we had in the 80’s: “You can lose your money in three ways. First there are women, which is the prettiest way to lose your money; then there is gambling, which is the quickest way to lose your money; and the third way is through IT, which is the surest way to lose your money.” And now I would like to add “the fourth way is AI, which is both quick and sure to lose your money”; that is the prefix to the equation. And the setting we aren’t given is set out in several pieces all over the place. One of them was given to us in ABC News (at https://www.abc.net.au/news/2025-10-20/ai-crypto-bubbles-speculative-mania/105884508) with ‘If AI and crypto aren’t bubbles, we could be in big trouble’ where we see “What if the trillions of dollars placed on those bets turn out to be good investments? The disruption will be epic, and terrible. A lot of speculative manias are just fun for a while and then the last in lose their shirts, not much harm done, like the tulips of 1635, and the comic book and silver bubbles of the late 1980s. Sometimes the losses are so great that banks go broke as well, which leads to a frozen financial system, recession and unemployment, as in 1929 and 2008.” As I personally see it, America is going all in as they are already beyond broke, so they have nothing to lose, but the UAE and Saudi Arabia have plenty to lose and the America-first crowd is happy to squander whatever these two have. I reckon that Oracle has its fallback position, so it is largely off the hook, but OpenAI is willing to chance it all. And that is the American portfolio: Microsoft and a few others. They are playing bluff with, as I see it, the wrong players, and when others ignore the warnings of the Bank of England they merely get what is coming to them. It is a game I do not approve of, because it is based on the bluff that gets us ‘we are too big to fail’, and I do not agree, but they will say that it is all based on retirement numbers and other ‘needy’ things. This is why America needs Canada to become the 51st state so desperately; they are (as I personally see it) ready to use whatever troll army they have to smear Canada. But I am not having it, and as I read “Dubai’s bold target to attract 10,000 artificial intelligence firms by 2030 is evolving from vision to execution, signaling a new phase in the emirate’s transformation into a global technology powerhouse. As a follow-up to earlier announcements positioning the UAE as the “Startup Capital of the World,” recent developments in AI infrastructure, capital inflows, and global partnerships show how this goal is being operationalised — potentially reshaping Dubai’s economic structure and reinforcing its competitive edge in the global digital economy.” I believe that those behind this have the best interests of the Emirati at heart, but I do not trust the people behind this drive (outside of the UAE).
I believe that this bubble will burst after the funds are met with smiles, only for these people to go out of business with a bulky severance check. It is almost like the role Ryan Gosling played in The Big Short, where Jared Vennett receives a bonus of $47 million for profits made on his CDSs. It feels almost too alike. And I feel I have to speak up. Now, if someone can sink my logic, I am fine with that, but let those running towards this future verify whatever they have and not merely accept what is said. I am happy to be wrong, but the setting feels off (by a lot) and I would rather be wrong than be silent on this, because as I see it, when there is a ‘market correction’ of $2,000,000,000,000 you can consider yourself sold down the river. There is a cost to such a correction and it should be 100% on the American shores and 0% on the Arabic, Commonwealth or European shores. But that is merely my short-sighted view on the matter.

So when we get to “Omar Sultan Al Olama, Minister of State for Artificial Intelligence, Digital Economy, and Remote Work Applications, said the goal reflects the UAE’s determination to lead globally in frontier technology. “Dubai’s target to attract 10,000 AI companies over the next five years is not a dream — it is a commitment to building the world’s most dynamic and future-ready digital economy,” he said. “We already host more than 1,500 pure AI companies — the highest number in the region — but this is just the beginning. Our strategy is to bring in creators and producers of technology, not just users. That’s how we sustain competitiveness and shape the industries of tomorrow.”” I am slightly worried, because there is an impact from these 1,500 companies. Now, be warned, there are plenty of great applications of DML and LLM and these firms should be protected. But the setting of 10,000 AI companies worries me, as AI doesn’t yet exist and the stage for agentic programming is clear and certain. I would like to rephrase this into “We should keep a clear line of achievements in what is referred to as AI and what AI companies are supposed to see as clear achievements.” This requires explanation, as I see whatever is called AI as NIP (Near Intelligent Parsing), and that is currently the impact of DML and LLM. I have seen several good projects, but they are set onto a stage that has a definite pipeline of achievements and interested parties. And for the most part the threshold is a curve of verifiable data. That data is scrutinised to a larger degree and tends to be (at times) based on first legacy data. It still requires cleaning, but to a smaller degree than data that comes from wherever.

So do not let this dissuade you from your plans to enter the AI field, but be clear about what it is based on and particularly the data that is being used. So have a great day, and as we get to my lunch time there is ample space for that now. Enjoy your day.


Filed under Finance, IT, Politics, Science

Just like Soap

Perhaps you remember the 80’s series Soap. Someone made a sitcom of the most hilarious settings and took it up a notch; the series was called Soap and people loved it. It did nearly everything right, but over time this bubble went, just like all the other soap bubbles tend to go, and that is OK, they made their mark and we felt fine. There is another bubble. It is not as good. There is the mortgage bubble, the housing bubble (they were not the same), the economy bubble, and all these bubbles come with an aftermath. Now we see the AI bubble, and I predicted this as early as January 29th of this year in ‘And the bubble said ‘Bang’’ (at https://lawlordtobe.com/2025/01/29/and-the-bubble-said-bang/) and my setting is that AI does not yet exist. As I saw it, for the most part, it is the construct of lazy salespeople who couldn’t be bothered to do their work, created the AI ‘fad’ and hauled it over to fit their needs. Let’s be clear. There is no AI, and when I use the term I know that ‘the best’ I am doing is avoiding a long discussion about how great DML and LLM are, because they are and it is amazing. And as these settings are correctly used, they will create millions if not billions in revenue. I got the idea to overhaul the Amazon system and let them optionally create online panels that could bank them billions, which I did in ‘Under Conceptual Construction’ (at https://lawlordtobe.com/2025/10/10/under-conceptual-construction/) and ‘Prolonging the idea’ (at https://lawlordtobe.com/2025/10/12/prolonging-the-idea/) which I wrote yesterday (almost 16 hours ago). I also gave light to an amazing lost and found idea which would cater to the needs of airports and bus terminals. I saw that presentation and it was an amazing setting in what I still call NIP (Near Intelligent Parsing) in ‘That one idea’ (at https://lawlordtobe.com/2025/09/26/that-one-idea/). These are mere settings and they could be market changers. This is the proper use of IT towards the next setting of automation. But the underlying bubble still exists, I merely don’t feed that beast, so when the BBC gave us all ‘‘It’s going to be really bad’: Fears over AI bubble bursting grow in Silicon Valley’ almost 2 days ago (at https://www.bbc.com/news/articles/cz69qy760weo) I saw the sparkly setting of soap bubbles erupt and I thought ‘That did not take long’. My setting was that AI (the real AI as Alan Turing saw it) was not ready yet, the small setting that at least three parts in IT did not yet exist. There is the true power of quantum computing, and as I see it quantum computers are real, but they are in the early stages of development and are not yet as powerful as future versions should be, and for that, as IBM rolls out their second system on the IBM Heron platform, we are getting there. It is called the 156-qubit IBM Quantum Heron; just don’t get your hopes up, not too many can afford that platform. IBM keeps it modest and gives us that “The computer, called Starling, is set to launch by 2029. The quantum computer will reside in IBM’s new quantum data center in upstate New York and is expected to perform 20,000 times more operations than today’s quantum computers.” I am not holding my credit card to account for that beauty. If at all possible, the only two people on the planet that can afford that setting are Elon Musk and Larry Ellison, and Larry might buy it to see Oracle power at actual quantum speed. He will do it to see quantum speed come to him in his lifetime.
The man is 81 after all (so, he is no longer a teenager). If I had that kind of money ($250 billion) I would do it too, just to see what this world has achieved. But the article (the BBC one) gives us ““I know it’s tempting to write the bubble story,” Mr Altman told me as he sat flanked by his top lieutenants. “In fact, there are many parts of AI that I think are kind of bubbly right now.”

In Silicon Valley, the debate over whether AI companies are overvalued has taken on a new urgency. Skeptics are privately – and some now publicly – asking whether the rapid rise in the value of AI tech companies may be, at least in part, the result of what they call “financial engineering”.” And the BBC is not wrong; we had a write-off in January of a trillion dollars and a few days ago another one of 1.5 trillion dollars. I would be willing to call that ‘financial engineering’. And that rapid rise? Call it the greedy need of salespeople getting their audience in a frenzy.

I merely gave a few examples of what DML and LLM could achieve, and getting a lost and found process from weeks down to minutes is quite the achievement. I reckon that places like JFK, Heathrow and Dubai Airport would jump at the chance to arrange a better lost and found department, and they are not alone, but one has to wonder how the market can write off trillions in merely two events. So when we get to

He is not wrong. Consider the next one amounting to a speculated two trillion (or $2,000,000,000,000); when it hits, it could wipe out the retirement savings of nearly everyone for years. So how do you feel about your retirement being written off for decades? When you are 80+ and you have millions upon millions you are just fine, and that is merely 2-5 people. The other 8,200,000,000 people? The young will be fine, and over 4 billion will be too young to care about their retirement, but the rest? Good luck, I say.

So what will happen to Stargate ($500B) when that bubble goes? I already see it as a failure, as the required power settings will not be able to fuel this, apart from the need for hundreds of validators whose systems require power too. Then we see Microsoft thinking (and telling us) it is the next big thing, all whilst basic settings aren’t out yet. Did anyone see the need for shallow circuits? Or the applied versions of Leon Lederman? No one realises that he held the foundational setting of AI in quantum computing. You see (as I personally see it), AI cannot really work in binary technology; it requires a trinary setting, a simple stage of True, False and Both. It would allow for trinary settings, because it isn’t always True or False. We learn that the hard way, but in IT we accept it. That setting will come to blows when we get to the real AI part of it, and that is why I (in part) do not buy the AI coffee being served in all places. And I like my sarcasm really hot (with two raw sugar and full cream milk).

That is the setting we face, and whilst some will call the BBC article ‘doom speak’, I see it for what it is: a reminder that the AI frenzy is sales driven. And whilst people are eager to forget the simplest setting, the real deal of Microsoft and Builder.ai is simply that at present we are confronted with IT engineers making the decisions for us, and with the amount of class actions coming to the world in 2027 and 2028 (optionally as early as 2026). As some cases were being drawn out even yesterday (see https://authorsguild.org/news/ai-class-action-lawsuits/ for details), you need to realise that this bubble was orchestrated, and as such I like the term ‘financial engineering’. So be good and use the NIP setting properly and feel free to be creative; I was, and gave Amazon an idea that could bank it billions. But not all ideas are golden and I am willing to accept that I am not the carrier of golden ideas; the fact that someone saw the Lost and Found setting is proof of that.

Have a great day, I am 30 minutes from breakfast now, so off I go to brekkyville.


Filed under Finance, IT, Media, Science

The dams are cracking

Yes, that is the setting I saw coming, but there is always ‘space’ for interpretation, and at present we see two stories that seem to illustrate this. The first one is given by the BBC (at https://www.bbc.com/news/articles/cly17834524o0) where we see ‘Tech billionaires seem to be doom prepping. Should we all be worried?’ It is a question to have, but what does the article bear out? It is not that basic or simple. First we are given “Mark Zuckerberg is said to have started work on Koolau Ranch, his sprawling 1,400-acre compound on the Hawaiian island of Kauai, as far back as 2014.” So, he had 11 years? Seems like overly long ‘doom prepping’ to me (is this sarcasm or satire?). The additional setting is “The underground space spanning some 5,000 square feet is, he explained, “just like a little shelter, it’s like a basement””, which seems like the average floor of a mall to me. I think that when the ‘basement’ extends well beyond 1,000 sq ft, we can ignore the ‘basement’ label, and whatever it is, it is his to do with as he pleases. He might be buying up vats of wine or Cognac; it will be his setting. Then we are given “his decision to buy 11 properties in the Crescent Park neighbourhood of Palo Alto in California, apparently adding a 7,000 square feet underground space beneath.” So here again we get the media ‘speculating’ for the setting of a story. So he might have bought the 11 properties, but what happened to them? What evidence is there? He could have bought them for his nearest and dearest. There are many options. Then more ‘famous’ names and locations like New Zealand come up. Yet about halfway we get a clarion call (as the expression goes); we are given “Neil Lawrence is a professor of machine learning at Cambridge University. To him, this whole debate in itself is nonsense. “The notion of Artificial General Intelligence is as absurd as the notion of an ‘Artificial General Vehicle’,” he argues. “The right vehicle is dependent on the context. I used an Airbus A350 to fly to Kenya, I use a car to get to the university each day, I walk to the cafeteria… There’s no vehicle that could ever do all of this.” For him, talk about AGI is a distraction.” And as far as I can tell, I feel like Neil Lawrence does, with an addendum. And at the very end we are given ““LLMs also do not have meta-cognition, which means they don’t quite know what they know. Humans seem to have an introspective capacity, sometimes referred to as consciousness, that allows them to know what they know.” It is a fundamental part of human intelligence – and one that is yet to be replicated in a lab.” And it is part of what I have been saying all along. And we get the larger setting from a second source. It is SBS (at https://www.sbs.com.au/news/article/australians-living-in-america-anxiety/p88o60wos) that gives us ‘Saving money and packing ‘go bags’: How Australians in the US are preparing for the worst’, where we see “But she says the attitude towards foreign nationals under the current administration has made life in the US feel “scary”. Kate says these fears were brought to the surface during her green card interview. “They grilled me in the interview and asked me questions not even related to our marriage but about my previous visa and time in the US,” she says.” As well as “Many Australians living in the US are reporting experiencing high levels of anxiety and feelings of instability due to the possibility of rapid political change under US President Donald Trump.”

These are the settings that matter. In the first there is the BBC article that is giving the ‘doom lecture’, but that is not the setting. When AI collapses like a near empty shell, people will all be fearing for their incomes and playing the blame game. But as we are given ‘Wall Street crashes after Trump announces 100% tariffs on China; $1.5 trillion wiped out’, consider what happens when all these AI ‘vendors’ fall flat; the damage will be more than 10 times worse, America loses 15 trillion. Can you even fathom that kind of loss? That will be the resounding implosion that leads to civil war when 90% of 340 million people lose whatever they had, retirements wiped out, other savings gone; they will get angry. President Trump will have to run for his life to Air Force One as quick as his legs can carry him, fleeing to Russia or anyone that will have him. And his billions? Mostly gone, if not already abroad. Those who bought large mansions outside of the US are likely safe for two generations in France, Monaco, the UAE, Bermuda, New Zealand, you name it; some will get out and this is the setting we see. I reckon that people in California will need high walls to keep others out, optionally armed defences as well.

Foreigners are now seeing the scary reality they signed on for, and they are getting a ‘go bag’ ready to get out to wherever they can, as quickly as they can. Is this doom speak?

That is a valid question. You see, the AI setting is merely one; President Trump soured the waters on tourism, which is down in many ways, and no reflective view is given by anyone in the media. That amount of bad news they likely find ‘irresponsible’, and the media has no business using that excuse as they have been one of the most irresponsible parties ever. Then foreign retail: Canada pulled American alcoholic beverages from its shelves. How much is that costing? One source (Source: Global News) gives us that the decline is 85%; that amounts to how much? These three settings make a recession almost a certainty, and there are a lot more declines in the papers, but the media will not give you the proper numbers. Several sources all give different, partially overlapping numbers. As such, the economic dams of America are cracking. They will lose a massive amount of revenue, and while some will give some of the numbers, most of us aren’t given the full view. I have some of the views as I have been keeping an eye on some of the numbers, but even I do not have the full view. So whilst some give us “The sell-off erased more than USD 1.5 trillion in market value from US stocks. Meanwhile, the cryptocurrency market faced record liquidations of USD 19 billion. This is the largest single-day figure ever recorded.” The part no one talks about is where the billionaires are set. We see the wins of Elon Musk and Larry Ellison, but where are the other billionaires? How are they doing? And then there is that disjointed Microsoft view.

Why the Windows maker?
That is a fair question. You see, they were all ‘heralding’ how well they were doing, but the shimmer in the shadows is different. We are given “Microsoft is currently losing money on AI development, having spent an estimated $19 billion in one quarter on AI infrastructure, with no significant revenue from it yet. The company also experienced a reported loss of $300 million in Call of Duty sales due to the Game Pass subscription model”, all whilst Activision and Bethesda were bought for over $100,000,000,000, and that has an interesting setting. They might be ‘offloading’ staff (over 9,000 according to some numbers), and whilst they and Adecco (firing into the thousands) are all set on AI, there is a hidden snag. When this falls short they will face a setting that is a lot more dangerous: people will not consider them in the future. So when the non-existing AI is set against the need for engineers it goes flat, and when there is no one around (an exaggeration) to program your LLM, consider where your firm will be. ZDNet gave us “Microsoft’s CEO loves to talk about ’empathy.’ But everything that is coming out of Redmond these days is perilously close to turning the company into the Borg.” Basically a non-existent setting of people that cannot live in a vacuum, and that is an additional side I never saw coming. I was focussed on Microsoft turning into an empty shell, and when the substance is gone, the shell collapses. That is what I saw in Microsoft Games and Microsoft Office. It started in 2012 when their service divisions were no longer up to scratch, and when support goes, so do sales. And when we consider the over $100 billion for two companies, whilst they weren’t making enough to even cover the interest on that, the picture of failure starts to evolve into a nightmare setting, and sacking 9,000 people will not save it. They are telling us now that AI is the future, but at present it does not exist and what does exist requires engineers (remember Builder.ai?). It is a fictive setting that is showing up all over America, and the ‘imported’ people are seeing the cracks evolve and they want out as fast as they can. Which is good news for Aramco and ADNOC as they now get the pick of the litter, but for America it is bad news. So there is no doom speak. It is the returning story of a country that thinks it is too big to go bankrupt. I heard that story before (SNS Bank for one), then a few more banks, and they are all part of something else. And America? Parts of America could be added to Canada and Mexico would be relieved to get Texas (the latter part is speculation) and that is the dangerous reality that others are facing. The question is what it takes to turn this around, and whilst Wall Street is in denial, others, those who can afford it, will be making a new household outside of American clutches (like the non-tax countries mentioned earlier). Saudi Arabia also becomes an option, but that is reserved for the chosen few (and American Muslims of course).

So am I delusional or do I have a point? I reckon that one of the larger issues (still unfolding) is how America deals with Alex Jones. Because if he gets his ‘blockage’, Americans will go insane; they will not accept that this conspiracy theorist is allowed his fortune after he went after dead children (claiming the victims were actors and not really dead). I wonder where that will go, because as I see it, it will be the tinder spark that sets America on fire. At that point all bets are off and I reckon that most ‘New Americans’ will run to the nearest airport. This might merely be my speculation and optionally a wrong one. But that is how I see it.

Beyond that, there are the losses that America is taking, and when all the numbers come out, the second stage is reached: whoever thought they had a retirement will try to collect on whatever is possible.

It is a hard setting and I hope I am wrong, because this collapse will wash over Japan and Europe pretty much soon thereafter. Connected currencies will take a massive tumble.

Have a great day, if that is presently at all possible. 


Filed under Finance, Gaming, IT, Media, Politics, Science, Tourism

Focal points required

That is the setting I am having at 1 o’clock in the morning. The news (and the internet) is currently overloaded with Jimmy Kimmel stories as well as vindictive settings against Disney, and I get it. When the media that is trumpeting free speech becomes the bitch of President Trump, people will not take kindly to it. Apparently the subscription servers at Disney went down as they were overloaded with cancellations (according to some sources). So I had to look all over the place to find something to write about, and Tom’s Hardware was one source that supplied the goods. The story (at https://www.tomshardware.com/tech-industry/artificial-intelligence/microsoft-announces-worlds-most-powerful-ai-data-center-315-acre-site-to-house-hundreds-of-thousands-of-nvidia-gpus-and-enough-fiber-to-circle-the-earth-4-5-times) gives us ‘Microsoft announces ‘world’s most powerful’ AI data center — 315-acre site to house ‘hundreds of thousands’ of Nvidia GPUs and enough fiber to circle the Earth 4.5 times’. And even as I don’t care too much about what happens in Wisconsin (other than the need to protect cheeses, I really like cheese), the fact is that when I see an article with that much data, I start looking for the missing data; I am wired that way, and the fibre and GPU numbers deserve a closer look.

But we got something; the setting is given with “This is likely a comparison to xAI’s Colossus, which uses over 200,000 GPUs and 300 megawatts of power. Microsoft didn’t specify its exact number of GPUs nor the expected power consumption.” And that is the ball game. You see, the setting of 300MW is not just a lot, it is the entire ballgame. Now, there is evidently enough power in Wisconsin, but is it enough? Consider a simple PC. It has a 600W power supply. Now this is not the same, but I am getting to that. Take 200 PCs and that makes 120,000 watts (120 kW). Now consider that hundreds of PCs are needed to even partially validate the data coming into that place. You need data verification spots to do that. The larger setting could be done by data entry people, people who go over the received data, and they need to work quickly, almost uninterrupted. As such, the quote “Microsoft didn’t specify its exact number of GPUs nor the expected power consumption” is, as I personally see it, massively deceptive. Just like the stage of Builder.ai, where Microsoft set it at over a billion dollars and within months that money was gone; they apparently spent it on under 200 programmers (test engineers), and that is merely the start of it. And when we talk about enough fibre to circle the planet 4.5 times, that comes to roughly 180,000 km of fibre (4.5 times the Earth’s circumference of about 40,075 km); won’t that take any energy? The numbers aren’t adding up, and even as Wisconsin has energy, there is every likelihood that they ‘suddenly’ have a shortage of energy. Oh, what a damn shame. And the setting of any data centre is that in case of a shortage of energy it all ends right quick; the moment the surplus hits zero, the issues start and they will immediately escalate.
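To make the energy point a little more concrete, here is a minimal back-of-the-envelope sketch in Python. The 600 W per PC and the 200 workstations are the illustrative assumptions from the paragraph above, not Microsoft figures (they did not publish any); the only public comparison point is the reported xAI Colossus draw.

```python
# Rough power-budget sketch with illustrative numbers only.
# Microsoft has not published GPU counts or power figures for the Wisconsin
# site; the xAI Colossus figures are the public comparison point cited above.

PSU_WATTS = 600          # assumed power supply of one validation PC
VALIDATION_PCS = 200     # assumed number of data-verification workstations

validation_kw = PSU_WATTS * VALIDATION_PCS / 1000
print(f"Validation workstations alone: {validation_kw:.0f} kW")   # 120 kW

# Public comparison: xAI's Colossus reportedly runs 200,000+ GPUs on ~300 MW.
COLOSSUS_MW = 300
print(f"Colossus draw: {COLOSSUS_MW} MW "
      f"(about {COLOSSUS_MW * 1000 / validation_kw:.0f}x those workstations)")

# Fibre check: 4.5 laps of the Earth (circumference ~40,075 km)
EARTH_CIRCUMFERENCE_KM = 40_075
print(f"4.5 laps of fibre: {4.5 * EARTH_CIRCUMFERENCE_KM:,.0f} km")
```

Even this toy budget ignores cooling, networking and the GPUs themselves, which is exactly why the missing consumption figure matters.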

Further down that page we see the mention of Elon Musk: “Elon Musk confirms xAI is buying an overseas power plant and shipping the whole thing to the U.S. to power its new data center — 1 million AI GPUs and up to 2 Gigawatts of power under one roof, equivalent to powering 1.9 million homes”. Well, good luck with that idea. I am not saying it is impossible, but getting all of that placed in a new location still requires a lot of concrete, not to mention the resources to get the plant going. So what is it: gas, oil, coal, uranium?

So what is fuelling the Microsoft plant? And how much surplus energy will Wisconsin have left at that point? As I see it, there is a reason that Microsoft doesn’t give out the expected power consumption. And there are a few more items on that list, like validators (which could be done remotely); with hundreds of people connecting into that centre, what drives the telecom setting? All issues that would have to be tackled on day one.

As I see it, there is a lack of focal points, but as I see it, those who spin aren’t interested in that concept at all. Merely the flotation of the name in conjunction with “‘world’s most powerful’ AI data center”. Didn’t Microsoft do this once before? Oh yes, the most powerful console in the world. How did that end for the Xbox Series X? As far as I know it is trailing the weakest console (the Nintendo Switch) by a lot and it is also trailing the PlayStation 5 a fair bit. So I am not keeping my hopes up when Microsoft is juggling the setting of “world’s most powerful” anything.

But then I have seen them play these cards for almost 40 years. And they could have taken advice from IBM on certain matters, like “This page is intentionally left blank”.

But that is just me.

The second setting is being pushed forward. I don’t want to write the wrong thing and there are a few missing cogs in that story, like the ‘new’ location of $4,300 billion in retirement funds. And no one is talking, so I have to dig.

Well, have a great day; time for Sunday to get its sun (in 4 hours), and consider looking around for freedom of speech, Disney seemingly can’t find it.


Filed under Finance, IT, Media, Science

The massive problem with AI

Yes, I have said it on several occasions: there is no AI, and whatever there is has verification issues. Today I illustrate this YET again, and in this case Google is as much to blame as many others.

So we have two images; the first one tells us

That there are risks. I was taken aback a little; the UAE is one of the safest places on the planet. So I decided to ask the same question a little differently and added the term “in 2025”, and so we see the second setting

We see the initial feeling I had about the country. And there is an abundance of articles showing the safety of the UAE (and Abu Dhabi). As such, I want to kindly wake Sergey Brin the fuck up and I am wondering whether he needs to address his Gemini settings a little. Perhaps the American tourism decline is altering the verification settings?

As such there is one little situation: the setting that whatever big tech calls AI cannot be trusted (which I already knew). The setting of verification is up and about, and that is the major handle on whatever that (AI) is. We need to realise that there is no AI. There is DML (Deeper Machine Learning) and there is LLM (Large Language Models) and they are awesome, but they depend on the programmers you throw at them and it is not foolproof; there are issues (as you can see).

This is not a large article. I have said it before, and now within 5 minutes I had the setting I needed. I reckon that all of you want to make a separate ‘judgment’ on whatever these people call AI, and whether it might show your local environment in a limelight you could check for yourself. And just for fun (I tend to be a whacky person) I am adding the ‘American tourism decline’ here too.

Just to set the premise, consider that this was given 4 weeks ago: “In June, Canadian residents returned from 2.1 million trips to the United States, representing a 28.7% decrease from the same month in 2024 and accounting for 70.8% of all trips abroad taken by Canadian residents in June 2025.” And the story here becomes verification. You see, who (or what) is feeding the AI models? When the data cannot be verified, how is the data conceived? Because this data is fed, and by whom becomes the story, whilst the media (as a whole) becomes less and less reliable.

Have a great day, almost time for me to take a walk towards my brekky.


Filed under IT, Media, Politics, Tourism

By German standards

That is at times the saying; it isn’t always ‘meant’ in a positive light and it is for you to decide what it is now. Deutsche Welle gave me an article yesterday that made me pause. It was in part what I have been saying all along. This doesn’t mean it is therefore true, but I feel that the tone of the article matches my settings. The article (at https://www.dw.com/en/german-police-expands-use-of-palantir-surveillance-software/a-73497117), giving us ‘German police expands use of Palantir surveillance software’, doesn’t seem too interesting for anyone but the local population in Germany. But that would be erroneous. You see, if this works in Germany, other nations will be eager to step in. I reckon that the Dutch police might be hoping to get involved from the earliest notion. The British and a few others will see the benefit. Yet, what am I referring to?

It sounds like there is more, and there is. The article’s byline gives us the goods. The quote is “Police and spy agencies are keen to combat criminality and terrorism with artificial intelligence. But critics say the CIA-funded Palantir surveillance software enables “predictive policing.”” It is the second part that gives the goods. “Predictive policing” is the term used here and it supports my thoughts from the very beginning (at least 2 years ago). You see, AI doesn’t exist. What there is (DML and LLM) are tools, really good tools, but it isn’t AI. And it is the setting of ‘predictive’ that takes the cake. You see, at present AI cannot make real jumps, cannot think things through. It is ‘hindered’ by the data it has and that is why at present its track record is not that great. And the elements are all out there: there is the famous Australian case where an “Australian lawyer caught using ChatGPT filed court documents referencing ‘non-existent’ cases”, there is the simple setting where an actor was claimed to have been in a movie before he was born, and the list goes on. You see, AI is novel, new, and players can use AI towards the blame game. With DML the blame goes to the programmer. And as I personally see it, “predictive policing” is the simple setting that any reference is only made when it has already happened. In layman’s terms: take a bank robber trained in grand theft auto; the AI will not see him, as he has never done this before. The AI goes looking in the wrong corner of the database and it will not find anything. It is likely he can only get away with this once, and the AI in the meantime will accuse any GTA persona that fits the description.
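As a minimal sketch of that layman’s argument (the data and the deliberately naive matcher are mine, purely for illustration, not anything Palantir does): a system that can only compare a new case against recorded histories will rank known lookalikes above a first-time offender, because the real perpetrator simply is not in the database.

```python
# Naive "predictive policing" lookup over a hypothetical offence database.
# The point: a history-based matcher can only return people who already have
# a record resembling the query; a first-time offender is invisible to it.

known_offenders = {
    "suspect_a": {"vehicle_theft", "joyriding"},       # the "GTA persona"
    "suspect_b": {"vehicle_theft", "street_racing"},
    "suspect_c": {"burglary"},
}

def rank_matches(case_features: set[str]) -> list[tuple[str, float]]:
    """Rank known offenders by overlap with the new case (Jaccard similarity)."""
    scores = []
    for name, history in known_offenders.items():
        overlap = len(case_features & history) / len(case_features | history)
        scores.append((name, overlap))
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

# New case: a bank robbery committed with a stolen car by someone with no
# record at all -- the matcher still points at the car-related records.
new_case = {"vehicle_theft", "armed_robbery"}
print(rank_matches(new_case))
# suspect_a and suspect_b top the ranking (~0.33 each);
# the actual first-time robber appears nowhere in the output.
```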

So why this?
The simple truth is that the Palantir solution will save resources, and that is in play. Police forces all over Europe are stretched thin and they (almost desperately) need this solution. It comes with a hidden setting that all data requires verification. DW also gives us “The hacker association Chaos Computer Club supports the constitutional complaint against Bavaria. Its spokesperson, Constanze Kurz, spoke of a “Palantir dragnet investigation” in which police were linking separately stored data for very different purposes than those originally intended.” I cannot disagree (mainly because I don’t know enough), but it seems correct. This doesn’t mean that it is wrong, but there are issues with verification and with the stage of how the data was acquired. Acquired data doesn’t mean wrong data, but it does leave the user with potentially wrong connections between what the data shows and what that view is based on. This requires a little explanation.

Let’s take two examples.
In example one we have a database of people and phone records. They can be matched so that we have links.

Here we have a customer database; it is a cumulative phonebook. It holds all the numbers from when Herr Gothenburg got his fixed line connection with the first phone provider until today, and as such we have multiple entries for every person. In addition, there is the second setting that their mobiles are also registered. As such, the first person moved at some point and he either has two mobiles, or he changed mobile provider. The second person has two entries (seemingly all the same), and a person moved to another address, and as such he got a new fixed line and he has one mobile. It seems straightforward, but there is a snag (there always is). The snag is that entry errors are made and there is no real verification; this is implied with customer 2. The other option is that this was a woman and she got married, as such she had a name change and that is not shown here. The additional issue is that Müller (miller) is shared by around 700,000 people in Germany. So there is a likelihood that wrongly matched names are found in that database. The larger issue is that these lists are mainly ‘human’ checked, and as such they will have errors. Something as simple as a phonebook will have its issues.

Then we get the second database, which is a list of fixed line connections, the place where they are connected and which provider. So we get additional errors introduced; for example, customer 2 is seemingly assumed to be a woman who got married and had her name changed. When was that? In addition there is a location change, something that the first database does not support, and she also changed her fixed line to another provider. So we have 5 issues in this small list and this is merely from 8 connected records. Now, DML can be programmed to see through most of this and that is fine; DML is awesome. But consider what some call AI being run on unverified (read: error-prone) records. It becomes a mess really fast, it will lead to wrong connections, and optionally innocent people will suddenly get a request to ‘correct’ what was never correctly interpreted.
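A minimal sketch of how such a naive match goes wrong (the names and records here are made up to mirror the two databases above; this is not any real pipeline): linking two registries on surname and city alone happily ties different Müllers together, and never finds the person who married, changed her name and moved.

```python
# Hypothetical registries illustrating the linkage problem described above.
# Matching on (surname, city) looks reasonable but produces false links on
# common names and misses genuine matches after a name change or a move.

phonebook = [
    {"id": 1, "name": "A. Müller",  "city": "München", "fixed_line": "089-111"},
    {"id": 2, "name": "B. Müller",  "city": "München", "fixed_line": "089-222"},
    {"id": 3, "name": "C. Schmidt", "city": "Berlin",  "fixed_line": "030-333"},
]

provider_records = [
    {"line": "089-111", "surname": "Müller",   "city": "München"},
    # the same person as id 3, but she married, changed name and moved:
    {"line": "040-444", "surname": "Hartmann", "city": "Hamburg"},
]

def naive_link(person: dict, record: dict) -> bool:
    """Link purely on surname + city; no verification of the line itself."""
    return person["name"].endswith(record["surname"]) and person["city"] == record["city"]

for record in provider_records:
    links = [p["id"] for p in phonebook if naive_link(p, record)]
    print(record["line"], "->", links)
# 089-111 -> [1, 2]   both Müllers are linked: one is a false positive
# 040-444 -> []       the renamed, relocated person is never found
```

This is the kind of verification gap that DML can be programmed around, but that an unverified “AI” pass will simply absorb and amplify.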

As such we get a darker taint to “predictive policing”, and the term that will come to all is “guilty until proven innocent”, a term we never accepted and one that comes with hidden flaws all over the field. Constanze Kurz makes a few additional points, points which I can understand, but here I am also hindered by my lack of localised knowledge. In addition we are given “One of these was the attack on the Israeli consulate in Munich in September 2024. The deputy chairman of the Police Union, Alexander Poitz, explained that automated data analysis made it possible to identify certain perpetrators’ movements and provide officers with accurate conclusions about their planned actions.” It is possible and likely that this happens, and there are settings where it will aid, optionally a lot quicker than not using Palantir. And Palantir can crunch data 24/7; that is the hidden gem in this. I personally fear that unless an emphasis is placed on verification, the danger becomes that this solution becomes a lot less reliable. On the other hand, data can be crunched whilst the police force is snoring the darkness away and they get a fresh start with results in their inbox. There is no doubt that this is the gain for the local police force and that is good (to some degree), as long as everyone accepts and realises that “predictive policing” comes with soft spots and unverifiable problems; and I am merely looking at the easiest setting. Add car rental data with errors from handwriting and you have a much larger problem. Add the risk of a stolen or forged driver’s license and “predictive policing” becomes the Achilles heel that the police weren’t ready for, and with that this solution will give the wrong connections, or worse, not give any connection at all. Still, Palantir is likely to be a solution, if it is properly aligned with its strengths and weaknesses. As I personally see it, this is one setting where the SWOT approach applies. Strengths, Weaknesses, Opportunities, and Threats are the settings any Palantir solution needs, and as I personally see it, Weaknesses and Threats require their own scenario in the assessment. Politicians are likely to focus on Strengths and Opportunities and diminish the danger that these other two elements bring. Even as DW gives us “an appeal for politicians to stop the use of the software in Germany was signed by more than 264,000 people within a week, as of July 30.” Yet if 225,000 of these signatures are ‘career criminals’, Germany is nowhere at present.

Have a great day. People in Vancouver are starting their Tuesday breakfast and I am now a mere 25 minutes from Wednesday.


Filed under IT, Law, Media, Politics, Science

Two for two

That is the setting that I see overlapping. Now, if someone states that they have nothing to do with each other, I would disagree, but I see their point too. At times causality is as thin as the thread of a spiderweb. I just see that there is more than one thread connecting the two together. And those who disagree are allowed to do so. So it started with Kazinform International News Agency (a news agency in Kazakhstan) informing me of ‘Saudi Arabia retains top spot in MENA venture capital investment for first half of 2025’, in itself not terribly important to my scope of life, but it had a mention of MAGNiTT. I had not heard that term before, and I get a lot of information, so I decided to check it out. It states “your go-to platform for verified Venture Capital & Private Equity data in Middle East, Africa, Southeast Asia, Türkiye and Pakistan”; that I would have remembered, as such a new term came to me from an unknown source. The part that got my attention was “Saudi Arabia maintained its first rank across MENA in terms of Venture Capital (VC) funding in the first half of 2025, witnessing a total VC deployment of $860 Million (SAR3.2 billion), surpassing the total VC funding of 2024 (full year)”. As such, I am getting the impression that Saudi Arabia is stretching its financial influence in the world; when half a year surpasses a full year, with a deployment approaching a billion, that ain’t hay (as the expression goes).

The additional quote goes “The Kingdom’s leading position in the VC scene in the region comes as a result of many governmental initiatives launched to stimulate the VC and startups ecosystem within the Saudi Vision 2030 programs. We at SVC are committed to continuing to lead the development of the ecosystem by stimulating private investors to provide support for startups and SMEs to be capable of fast and high growth, leading to diversifying the national economy and achieving the goals of the Saudi Vision 2030,” CEO and Board Member at Saudi Venture Capital (SVC) Dr. Nabeel Koshak commented. As such there is a lot to be said for being thorough, and Saudi Arabia isn’t tinkering on the corner. Now, considering that I didn’t get that news from the Financial Times or Reuters, I had an issue with this. So, consider that it is missing from the Financial Times, a supposedly thorough news outlet for all matters linked to the channel of a “Ka-Ching” nature.

This is setting the second phase of the issue, being a (what some call) AI setting. You see, I was looking at American tourism (a daily event for me) as I keep my eyes on this. Here we see “Tourism in the United States is experiencing a decline in international visitor spending, with a projected $12.5 billion drop in 2025. This downturn is attributed to a combination of factors, including perceived negative impacts from Trump administration policies related to trade and borders, a strong dollar, and weaker global economic growth. While domestic tourism remains strong, the US is seeing fewer international tourists compared to other countries, and some experts predict it may not return to pre-pandemic levels until 2030.” (Source: claimed AI). What connects this is Forbes giving us ‘U.S. tourism will lose up to $29 billion as visitors plummet amid Trump policies’ a mere week ago (at https://www.forbes.com.au/life/travel/u-s-tourism-will-lose-up-to-29-billion-as-visitors-plummet-amid-trump-policies/). So is this (non) AI off by a factor of more than two ($29 billion versus $12.5 billion)? You see, one part is the “strong dollar”, but sources give me “the United States Dollar has strengthened 0.62%, but it’s down by 5.38% over the last 12 months.” As such the second part came to me: can these sources, which I define as NIP (Near Intelligent Parsing), be given programmed assumptions that are not taken into consideration? And that thought gets strengthened through “While domestic tourism remains strong, the US is seeing fewer international tourists compared to other countries, and some experts predict it may not return to pre-pandemic levels until 2030”; the issue is that this claim directly clashes with the Forbes quote, which is “the U.S. is a notable loser this year as tens of millions of international visitors are choosing to travel elsewhere—costing the economy up to $29 billion—and risking millions of jobs”, and there is data supporting the Forbes view. I am also considering that Forbes might have missed a setting or two: the amount of bed and breakfast places that will lose close to everything as tourists stay away, and Florida, which just expanded, seeing fewer tourists from both Canada and overseas. The Trump administration has made America less interesting in 2025 and likely 2026 as well. That, and the fact that Saudi Arabia, Europe, Canada and the UAE are now cashing in on that negativity, gives a much larger confidence in the losses that Forbes predicts.

So, how are they connected?
There is a larger setting to the folly of NIP (or what some call AI). You see, NIP is based on DML and that only works on data that has already occurred, and the setting America faces, no other has faced before, certainly not in this global economy where preparation is king. Last month, merely one travel agent gave us ‘Flight Centre is facing a $100m hit as a result’; that is merely one travel agent, and some sources give us that there are an expected 571,541 operating in 2025. So how many losses will America face? It is the grounding of questions, because that also gives us the amount of venture capitalists that are turning towards Saudi Arabia and the UAE (to name but two). This matters as it explains why Saudi Arabia itself is leading the charge. Wouldn’t you turn to your own borders to cash in on ventures happening before 2030? So as we saw “some experts predict it may not return to pre-pandemic levels until 2030”, and this is happening around that same time. With the Trump administration giving folly at nearly every corner, I wouldn’t put my money there; I would feel a lot more secure putting it in Canada, to say the least.

Kazinform gave me the setting that is playing now. Through these links there is the thought that the internet and its inhabitants are being spun a story through what some call AI (which it is not), by engineered markers that are ‘managed’ by some forces, and what we get constitutes NIP at best. Deeper Machine Learning (DML), even with LLM (Large Language Models) in place, can only work with what is, with what it has, and the world has never been given these markers of folly before. As such DML is kind of useless here. They can pretend the core remains the same, but everything that this core fuels is off (by a lot) and that sets a fake premise that it can never keep. And the end of the Kazinform story is pretty much the best; it gives us “As reported previously, Saudi Arabia ranked first globally in growth of international tourism receipts in Q1 of 2025 compared to Q1 of 2019, according to the World Tourism Barometer published by UN Tourism in May.” That makes sense, as people are turning away from America in tourism and Saudi Arabia has worked hard to buff itself up as the next tourism spot to be. People tend to forget that 20% of the world is Muslim and they are done with the world treating them as a second-best option. Take into account that Saudi Arabia is growing in the tourism direction as well, with the NEOM projects completing one by one. So when winter sport season comes near, do you really want to go to America in the present setting, or will it become Whistler (BC, Canada) or Trojena (Saudi Arabia)? The choices are tough, I get it, but with the waiting lines at Whistler I wouldn’t be surprised if Trojena has its first year with numerous Canadians there. As some say, Aspen is so passé. And that is merely one reason why Saudi Arabia will grow into a new tourism behemoth. All that before we get to actually see Aquellum, which could be a global first, a community where the architecture is inward set. I cannot give credence to any of that, but if Saudi Arabia pulls it off, it will become the next world wonder and it will show Saudi Arabia to be the next powerhouse in the world, with the bulk of the Muslim world wanting to live and grow there. 20% of the population of the planet seeking growth is not to be underestimated, and that is before others realise that plenty of eager Americans want a piece of that life too. All elements in what the next decade is shaping up to be, and that is the setting that neither AI (nor NIP for that matter) saw coming, because the current settings are all given to us by engineers (remember Builder.ai). It doesn’t adjust for something never done before and that is where the hard parts come around the corner; there is no AI (at present).

So feel free to see me as incorrect, that is fine. But also adjust your views to views currently not given, as there is an overlap of matters: what is there but filtered away for reasons ‘unknown’, and what is not given to us because some cannot see the impact. It is a two for two setting.

Have a great day, I entered the middle of the week, it is still yesterday lunchtime in Vancouver.


Filed under Finance, IT, Media, Politics, Tourism

Speculating on language

That was the setting I found myself in. There is the specific question of an actual AI language, not the ones we have, but the one we need to create. You see, we might be getting close to trinary chips. As I personally see it, there is no AI as the settings aren’t ready for it (I have said that before), but we might be getting close to it, as the Dutch physicist has had a decade to set the premise of the proven Epsilon particle to a more robust setting, and it has been a decade (or close to it). That sets the larger premise that an actual AI might become a reality (we’re still at least a decade away), but in that setting we need to reconsider the programming language.

Binary     Trinary
NULL       NULL
TRUE       TRUE
FALSE      FALSE
           BOTH

We are in a binary digital world at present and it has served our purpose, but for an actual AI it does not suffice. You can believe the wannabes going on about how we can do this, we can do that, and it will come up short. Wannabes who will hide behind tables-within-tables solutions, and for the most part (as far as I saw it) only Oracle ever got that setting to work correctly. The rest merely graze on that premise. You see, to explain this in the simplest of ways: any intelligence doesn’t hide behind black or white. It is a malleable setting of grey, as such both colours are required, and that is where trinary systems with both true and false activated will create the setting an AI needs. When you realise this, you see the bungles the business world needs to hide behind. They will sell these programmers (or engineers) down the drain at a moment’s notice (they will refer to it as corporate restructuring) and that will put thousands out of a job and land the largest data providers in class action suits up the wazoo.
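As a minimal sketch of what that third, grey value could look like when emulated in software today (this is a toy model following my reading of the table above, not a trinary chip, and the table’s NULL is left aside for brevity):

```python
# A software emulation of the trinary truth values sketched above:
# TRUE, FALSE and BOTH (true-and-false at the same time).
# This is a toy model of three-valued logic, not hardware.

from enum import Enum

class Tri(Enum):
    FALSE = 0
    TRUE = 1
    BOTH = 2   # the grey value: holds true and false simultaneously

def tri_and(a: Tri, b: Tri) -> Tri:
    if Tri.FALSE in (a, b):
        return Tri.FALSE            # anything AND false is false
    if Tri.BOTH in (a, b):
        return Tri.BOTH             # the grey value propagates
    return Tri.TRUE

def tri_or(a: Tri, b: Tri) -> Tri:
    if Tri.TRUE in (a, b):
        return Tri.TRUE             # anything OR true is true
    if Tri.BOTH in (a, b):
        return Tri.BOTH
    return Tri.FALSE

def tri_not(a: Tri) -> Tri:
    return {Tri.TRUE: Tri.FALSE, Tri.FALSE: Tri.TRUE, Tri.BOTH: Tri.BOTH}[a]

print(tri_and(Tri.TRUE, Tri.BOTH))   # Tri.BOTH -- the answer is a shade of grey
print(tri_or(Tri.BOTH, Tri.FALSE))   # Tri.BOTH
print(tri_not(Tri.BOTH))             # Tri.BOTH -- negating "both" is still "both"
```

Emulated this way it is just extra bookkeeping on binary hardware; the point of a trinary chip would be to make that grey value native rather than simulated.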

When you see what I figured out a decade ago, the entire “AI” field is driven to nothing short of collapse. 

I kept it in the back of my mind and the mind kept working on the solutions it had figured out. So as I see it, something like C#+ is required: an extended version of C# with LISP libraries (the IBM version), as the only other one I had was a Borland program and I don’t think it will make the grade. As I personally see it (with my lack of knowledge), LISP might be a better fit to connect to C#. You see, this is the next step. As I see it, ‘upgrading’ C# is one setting, but LISP has the connectors required to make it work, and why reinvent the wheel? And when the greedy salespeople figure out what they missed over the last decade (the larger part of it), they will come with statements that it was a work in progress and that they are still addressing certain items. Weird, I got there a decade ago and they didn’t think I was the right material. As such you can file their versions in a folder called ‘What makes the grass grow in Texas?’ (me having a silly grin now). I still haven’t figured it all out, but with the trinary chip we will be on the verge of getting an actual AI working. Alas, the chip comes long after we bade farewell to Alan Turing; he would have been delighted to see that moment happen. The setting of gradual verification, a setting of data getting verified on the fly, will be the next best thing, and when the processor gives us grey scales that matter, we will see contemplated ideas that will drive any actual AI system forward. It will not be pretty at the start. I reckon that IBM, Google and Amazon will drive this. And there is a chance that they all will unite with Adobe to make new strides. You think I am kidding, but I am not. You see, I refer to greyscales on purpose. The setting of true and false is only partially true. The combination of the approach of BOTH will drive solutions, and the idea of both being expressed through channels of grey (both true and false) will at first be a hindrance, but when you translate this to greyscales, the Adobe approach will start making sense. Adobe excels in this field, and when we set the ‘colourful’ approach of both True and False, we get a new dimension; Adobe has worked in that setting for decades, long before the trinary idea became a reality.

So is this a figment of my imagination?
It is a fair question. As I said, there is a lot of speculation throughout this, and as I see it, there is a decent reason to doubt me. I will not deny this, but those deep into DML and LLMs will see that I am speaking true, not false, and that is the start of the next cycle. A setting where LISP is adjusted for trinary chips will be the larger concern. And I got to that point at least half a decade ago. So when Google and Amazon figure out what to do, we get a new dance floor, a boxing square where the lights influence the shadows, and that will lead to the next iteration of this solution. Consider one of two flawed visions. One is that a fourth dimension casts a 3D shadow; by illuminating the concept of these multiple 3D shadows, the computer can work out 4D data constraints. The image of a dot was the shadow of a line, the image of a 2D shape was the shadow of a 3D object, and so on. When the AI gets that consideration (this is a flaky example, but it is the one that is in my mind) and it can see the multitude of 3D images, it can figure out the truth of the 4D datasets and it can actually fill in the blanks. Not the setting that NIP gives us now, like a chess computer that has all the games of history in its mind so it can figure out with some precision what comes next; that concept can be defeated by making what some chess players call ‘a silly move’. Now we are in a setting of more, as BOTH allows for more, and the stage can be illustrated by an actual AI figuring out what is really likely to be there. Not guesswork, but the different images create a setting of nonrepudiation to a larger degree; the image could only have been gotten from what should have been there in the first place. And that is a massive calculation. Do not think it will be deniable; the data that the Nth 3D image gives us sets the larger solution as a given fact. It is the result of 3 seconds of calculations, a result the brain could not work out in months.
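To make that admittedly flaky example a little more tangible, here is a minimal sketch (my own toy geometric construction, not a claim about any existing AI system): a labelled 4D point can be fully recovered from just two of its axis-aligned 3D ‘shadows’, which is the sense in which several lower-dimensional views pin down the higher-dimensional fact.

```python
# Toy illustration of the "shadows" idea: two axis-aligned 3D projections of a
# labelled 4D point are enough to reconstruct every one of its coordinates.
# A geometric toy only; real datasets are unlabelled and far messier.

def shadow(point4d, dropped_axis):
    """Project a 4D point onto the 3D subspace obtained by dropping one axis."""
    return tuple(v for i, v in enumerate(point4d) if i != dropped_axis)

def reconstruct(shadow_without_w, shadow_without_z):
    """Combine the shadow missing w (x, y, z) with the shadow missing z (x, y, w)."""
    x, y, z = shadow_without_w
    _, _, w = shadow_without_z
    return (x, y, z, w)

original = (1.0, 2.0, 3.0, 4.0)            # an unknown 4D "fact"
s1 = shadow(original, dropped_axis=3)      # 3D shadow without the w axis
s2 = shadow(original, dropped_axis=2)      # 3D shadow without the z axis

print(s1, s2)                              # (1.0, 2.0, 3.0) (1.0, 2.0, 4.0)
print(reconstruct(s1, s2) == original)     # True: the blanks are filled in
```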

It is the next step. At that point the computer will not take an educated guess, it will figure out what the singular solution would be. The setting that the added BOTH allows for. 

A proud setting as I might actually still be alive to see this reality come to pass. I doubt I will be alive to see the actual emergence of an Artificial Intelligence, but the start on that track was made in my lifetime. And with the other (unmentioned) fact, I am feeling pretty proud today. And it isn’t even lunchtime yet. Go figure.

Have a great day today.


Filed under Finance, IT, Science

IT said vs IT said

This is a setting we are about to enter. It was never rocket science, it was simplicity itself. And I mentioned it before, but now Forbes is also blowing the trumpet I sounded in a clarion call in the past. The article (at https://www.forbes.com/councils/forbestechcouncil/2025/07/11/hallucination-insurance-why-publishers-must-re-evaluate-fact-checking/) gives us ‘Hallucination Insurance: Why Publishers Must Re-Evaluate Fact-Checking’ with “On May 20, readers of the Chicago Sun-Times discovered an unusual recommendation in their Sunday paper: a summer reading list featuring fifteen books—only five of which existed. The remaining titles were fabricated by an AI model.” We have seen these issues in the past. A law firm citing cases that never existed is still my favourite at present. We get in continuation “Within hours, readers exposed the errors across the internet, sharply criticizing the newspaper’s credibility. This incident wasn’t merely embarrassing—it starkly highlighted the growing risks publishers face when AI-generated content isn’t rigorously verified.” We can focus on the setting of the high cost of AI errors, but as soon as the cost becomes too high, the parties behind the error will play a Trump card and settle out of court, with the larger population being kept in the dark on all other settings. But it goes in a nice direction: “These missteps reinforce the reality that AI hallucinations and fact-checking failures are a growing, industry-wide problem. When editors fail to catch mistakes before publication, they leave readers to uncover the inaccuracies. Internal investigations ensue, editorial resources are diverted and public trust is significantly undermined.” You see, verification is key here and all of them are guilty. There is not one exception to this (as far as I can tell). There was a setting I wrote about in 2023 in ‘Eric Winter is a god’ (at https://lawlordtobe.com/2023/07/05/eric-winter-is-a-god/); there, on July 5th, I noticed a simple setting where Eric Winter (that famous guy from The Rookie) was credited with a role in The Changeling (with the famous actor George C. Scott). The issue is twofold. The first is that Eric was less than 2 years old when the movie was made. The real person was Erick Vinther (playing a Young Man, uncredited). This simple error is still all over Google; as I see it, only IMDB has the true story. This is a simple setting, errors happen, but in the over 2 years since I reported it, no one fixed it. So consider that these errors creep into a massive bulk of data, personal data becomes inaccurate, and these errors will continue to seep into other systems. At some point Eric Winter sees his biography riddled with movies and other works, his memory fading under the guise of “Did I do this?”. And there will be more, as such verification becomes key and these errors will hamper multiple systems. And in this, I have some issues with the setting that Forbes paints. They give us “This exposes a critical editorial vulnerability: Human spot-checking alone is insufficient and not scalable for syndicated content. As the consequences of AI-driven errors become more visible, publishers should take a multi-layered approach” You see, as I see it, there is a larger setting with context checking. A near impossible setting. As people rely on granularity, the setting becomes a lot more oblique.
A simple example: “Standard deviation is a measure of how spread out a set of values is, relative to the average (mean) of those values.” That is merely one version; the second one is “This refers to the error in a compass reading caused by magnetic interference from the vessel’s structure, equipment, or cargo.”

Yet the version I learned in the 70’s is “Standard deviation, the offset between true north and magnetic north. This differs per year and the offset rotates in an eastern direction.” In English it is called the compass deviation, in Dutch the Standard Deviation, and that is the simple setting of how inaccuracies and confusion are entered into data settings (aka metadata) and that is where we go from bad to worse. The Forbes article illuminates one side, but it also gives rise to the utter madness that this Stargate project will to some extent become. Data upon data and a lack of verification.
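To make the first version concrete (the statistics one, not the compass one; the readings are merely numbers I made up), a few lines of Python:

import statistics

readings = [12.0, 11.5, 12.3, 11.8, 12.1]

# The statistical meaning: spread of values around the mean.
print("mean:", round(statistics.mean(readings), 2))
print("standard deviation:", round(statistics.stdev(readings), 2))

# The nautical meaning is an angle in degrees, a completely different quantity
# that merely shares a label in some languages - exactly the collision that
# seeps into metadata when nobody verifies which definition was intended.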

As I see it, all these firms rely on ‘their’ version of AI and in the bowels of their data are clusters of data lacking any verification. The setting of data explodes in many directions, and that lack works for me as I have cleaned data for the better part of two decades. As I see it, dozens of data entry firms are looking at a new golden age. Their assistance will be required on several levels. And if you doubt me, consider builder.ai, backed by none other than Microsoft; they were a billion dollar firm and in no time they had the expected value of zero. And after the fact we learn that 700 engineers were at the heart of builder.ai (no fault of Microsoft), but in this I wonder how Microsoft never saw it coming. And that is merely the start.

We can go on about other firms and how they rely on AI for shipping and customer care, and the larger setting that I speculatively predict is that people will try to stump the Amazon system. As such, what will it cost them in the end? Two days ago we were given ‘Microsoft racks up over $500 million in AI savings while slashing jobs, Bloomberg News reports’, so what will they end up saving when the data mismatches happen? Because it will happen, it will happen to all. Because these systems are not AI, they are deeper machine learning systems, optionally with LLM (Large Language Model) parts, and whereas an AI is supposed to clear new data, these systems can merely work on the data they have, verified data to be more precise, and none of them are properly vetted, which will cost these companies dearly. I am speculating that the people fired on this premise might not be willing to return, making it an expensive sidestep to say the least.

So don’t get me wrong, the Forbes article is excellent and you should read it. The end gives us “Regarding this final point, several effective tools already exist to help publishers implement scalable fact-checking, including Google Fact Check Explorer, Microsoft Recall, Full Fact AI, Logically Facts and Originality.ai Automated Fact Checker, the last of which is offered by my company.” So here we see the ‘Google Fact Check Explorer’; I do not know how far this goes, but as I showed you, the setting with Eric Winter has been there for years and no correction was made, even though IMDB doesn’t carry the error. I stated once before that movies should be checked against the age the actors (actresses too) had at the time the movie was made, and potential issues flagged; in the case of Eric Winter a setting of ‘first film or TV series’ might have helped. And this is merely entertainment, the least of the data settings. So what do you think will happen when Adobe or IBM (mere examples) releases new versions and there is a glitch setting these versions in the data files? How many issues will occur then? I recollect that some programs had interfaces built to work together. Would you like to be the IT manager when that goes wrong? And it will not be one IT manager, it will be thousands of them. As I personally see it, I feel confident that there are massive gaps in these companies’ assumptions of data safety. So in the past I introduced a term, namely NIP (Near Intelligent Parsing), and that is the setting these companies need to focus on. Because there is a setting that even I cannot foresee in this. I know languages, but there is a rather large gap between systems and the systems that still use legacy data; the gaps in there are (for as much data as I have seen) decently massive and that implies inaccuracies to behold.
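The age check I keep mentioning is not rocket science either. A sketch in Python (the records and the threshold are placeholders I typed in myself, not a query against Google, IMDB or anything real):

from datetime import date

# Hypothetical records, merely for illustration; not pulled from any real database.
actor_birth = {"Eric Winter": date(1976, 7, 17)}
credit = {"title": "The Changeling", "release_year": 1980, "actor": "Eric Winter"}

def flag_credit(credit, births, min_age=10):
    """Flag a film credit when the actor would have been implausibly young."""
    born = births.get(credit["actor"])
    if born is None:
        return "no birth date on file - cannot verify"
    age_at_release = credit["release_year"] - born.year
    if age_at_release < min_age:
        return f"FLAG: {credit['actor']} was about {age_at_release} at release"
    return "plausible"

print(flag_credit(credit, actor_birth))   # the credit gets flagged for review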

I like the end of the Forbes article: “Publishers shouldn’t blindly fear using AI to generate content; instead, they should proactively safeguard their credibility by ensuring claim verification. Hallucinations are a known challenge—but in 2025, there’s no justification for letting them reach the public.” It is a fair approach, but there is a rather large setting towards the field of knowledge where it is applied. You see, language is merely one side of that story; the setting of measurements is another. As I see it (using an example): “It represents the amount of work done when a force of one newton moves an object one meter in the direction of the force. One joule is also equivalent to one watt-second.” You see, cars and engineering use the joule in multiple ways, so what happens when the data shifts and values are missed? This is all engineer and corrector based and errors will get into the data. So what happens when lives are at stake? I am certain that this example goes a lot further than mere engineers. I reckon that similar settings exist in medical applications. And who will oversee these verifications?
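To show how quickly that goes sideways, a trivial Python sketch (my numbers, merely an example of the same quantity wearing three hats):

# One joule is one newton over one metre, and also one watt for one second.
force_newton = 50.0
distance_metre = 3.0
work_joule = force_newton * distance_metre        # 150 J

power_watt = 150.0
time_second = 1.0
energy_joule = power_watt * time_second           # also 150 J

energy_kwh = energy_joule / 3_600_000             # the unit the billing side prefers
print(work_joule, energy_joule, energy_kwh)

# Same quantity, three conventions; store them in one column without verifying
# the unit and the numbers silently stop meaning the same thing.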

All good questions and I cannot give you an answer, because as I see it there is no AI, merely NIP, and some tools are fine with Deeper Machine Learning, but certain people seem to believe the spin they created and that is where the corpses will show up, more often than not at the most inconvenient times.

But that might merely be me. Well, time for me to get a few hours of snore time. I have to assassinate someone tomorrow and I want it to look good for the script it serves. I am a stickler for precision in those cases. Have a great day.


Filed under Finance, IT, Media, Science

The size of that

Something no woman has ever said to me, but that is for another day. You see, the story (at https://www.datacenterdynamics.com/en/news/saudi-arabias-ai-co-humain-looking-for-us-data-center-equity-partner-targets-66gw-by-2034-with-subsidized-electricity/) in DCD (Data Center Dynamics) gives us ‘Saudi Arabia’s AI co. Humain looking for US data center equity partner, targets 6.6GW by 2034 with subsidized electricity’ and they throw numbers at us. First there is the money “Plans $10bn venture fund to invest in AI companies”, which seems fair enough. But after that we get “The company said that it would buy 18,000 Nvidia GB300 chips with “several hundred thousand” more on the way, that it was partnering with AWS for a $5bn ‘AI Zone,’ signed a deal with AMD for 500MW of compute, and deployed Groq chips for inference.” I reckon the shares of Nvidia will split and split again. Then we get the $5 billion AI Zone, and then the AMD deal for 500MW of compute and the Groq chips deployed for inference, which is merely ‘a conclusion reached on the basis of evidence and reasoning’. Yes, that is quite the mouthful. After that we get a pause with “How much of Humain’s data center focus will be on Saudi-based facilities is unclear – its AMD deal mentions sites in the US.” As such, we need to see what this is all about and I am hesitant to offer conclusions for a field that I am not aware of. Yet the nagging feeling is in the back of my mind and it is jostling in an annoying way. You see, let’s employ somewhat incorrect math (I know it is not a correct way). Consider 18,000 computers each draining 500 watts from the energy net. That amounts to roughly 9MW for the chips alone (speculatively), and that is just the starting 18,000; the full setting, with storage, cooling and the workflow around it, will be several times that amount. Now, I know my calculations are wildly off and we are given “At first, it plans to build a 50MW data center with 18,000 Nvidia GPUs for next year, increasing to 500MW in phases. It also has 2.3 square miles of land in the Eastern Province, which could host ten 200MW data centers.” I am not attacking this, but when we take into consideration the amount of energy required for processors, storage, cooling and maintaining the workflow, my head comes up short (it usually does) and the immediate thought is: where is this power coming from? As I see it, you will need a decently built nuclear reactor and that reactor needs to be started in about 8 hours for that timeline to be met. Feel free to doubt me, I already am. Yet the energy needed to fuel 6.6GW of data centres of any kind requires massive power support. And the need for Huawei to spice up the data cables somewhat. As I roughly see it, a center like that needs to plough through all the spam the internet gets on a near 10-second setting; that is a year’s worth of spam per minute (totally inaccurate, but you get the point). The setting is that the world isn’t ready for this and it is given to us all in a mere paragraph.
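Here is my napkin math in Python, so you can see for yourself how wildly off it is or isn’t (the wattage and the overhead factor are my guesses, not anyone’s data sheet):

# Back-of-the-envelope only; per-chip draw and overhead factor are assumptions.
gpus = 18_000
watts_per_chip = 500          # assumed draw; modern accelerators likely pull more
overhead = 1.5                # cooling, storage, networking on top of the chips

it_load_mw = gpus * watts_per_chip / 1_000_000    # 9 MW for the chips alone
facility_mw = it_load_mw * overhead               # ~13.5 MW for the facility
target_mw = 6.6 * 1000                            # the 6.6 GW ambition

print(f"chips ~{it_load_mw:.0f} MW, facility ~{facility_mw:.1f} MW")
print(f"that first batch is {facility_mw / target_mw:.1%} of the 6.6 GW target")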

Now, I do not doubt the intent of the setting and the Kingdom of Saudi Arabia is really sincere about getting to the ‘AI field’ as it is set, but at present the western setting is like what builder.ai thought it would be: they overreached (as I see it) and fraudulently set the stations of what they believed AI was, and blew away a billion dollars in no time at all (and dragged Microsoft along with it) as they backed this venture. This gives me doubt (which I already had) on the AI field, as the AI field is more robust as I saw it (leaning on the learnings of Alan Turing), a lot more robust than DML (Deeper Machine Learning) and LLM (Large Language Models), it really is. And for that I fear for the salespeople who tried to sell this concept, because when they say “Alas, it didn’t work. We tried, but we aren’t ready yet”, they will be met with some swift justice in the halls of Saudi Arabia. Heads will roll in that instance and they had it coming, as I foresaw this a while before 2034. (It is 2025 now, and I am already on that page.)

Merely two years ago MIT Management gave us ‘Why neural net pioneer Geoffrey Hinton is sounding the alarm on AI’ and there we get the thing I have warned about for years: “In a widely discussed interview with The New York Times, Hinton said generative intelligence could spread misinformation and, eventually, threaten humanity.” I saw this coming a mile away (in 2020, I think). You see, these salespeople are so driven towards their revenue slot that they forget about data verification, and data centers require an ACTUAL AI to drag through the data verifying it all. This isn’t some ‘futuristic’ setting of what might be, it is a certainty that non-verified data breeds inaccuracies and we will get inaccuracy on inaccuracy, making things go from bad to worse. So what does that look like on a 6.6GW system? Well, for that we merely need to look back to the 80’s and the term GIGO. It is a mere setting of ‘Garbage In, Garbage Out’, no hidden snags, no hidden loopholes. A simple setting that selling garbage as data leaves us with garbage, nothing more. As such, as I saw it, I looked at the article and the throwing of large numbers and people thought “Oh yes, there is a job in there for me too” and I merely thought: what will fuel this? And beyond that, who can manage the oversight of the data and the verification process? Because with those systems in place, a simple act of sabotage, adding a random data set to the chain, will have irreparable consequences for the data result.

So, as DCD set it out, they pretty much end with “By 2030, the company hopes to process seven percent of the globe’s training and inference workloads. For the facilities deployed in the kingdom, Riyadh will subsidize electricity prices.” And in this my thoughts are “Where is that energy coming from?” A simple setting which comes with the (largely speculative) thought that such a reactor needs to be a Generation IV reactor, which doesn’t exist yet. And in this the World Nuclear Association in 2015 suggested that some might enter commercial operation before 2030 (exact date unknown), yet some years ago we were given that the active members were “Australia, Canada, China, the European Atomic Energy Community (Euratom), France, Japan, Russia, South Africa, South Korea, Switzerland, the United Kingdom and the United States”; there is no mention of the Kingdom of Saudi Arabia and I reckon there would be all kinds of voices against the Kingdom of Saudi Arabia (as well as the UAE) being the first to have one of those. It is merely my speculative nature to voice this. I am not saying that the Economic Simplified Boiling Water Reactor (ESBWR), a passively safe Generation III+ reactor, could not do this, but that design comes from GE Hitachi (a mere 4,500MW thermal, roughly 1,600MW electrical) and none has been built yet. The NRC granted design approval in September 2014, on a path that started in 2011. It is 2025 now, so how long until the KSA gets its reactor? And perhaps a reactor is not needed for my thoughts, but we see a lot of throwing of numbers, yet DCD kept us completely in the dark on the power requirements. And as I see it the line “Riyadh will subsidize electricity prices” does not hold water as the required energy settings are not given to us (perhaps not so sexy and it does make for a lousy telethon).
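And for the scale of that subsidy, another napkin sketch (the per-unit output is my rounding for an ESBWR-class plant, feel free to doubt it):

import math

# How many large reactors would 6.6 GW of electrical load imply?
target_gw = 6.6
per_unit_gwe = 1.6            # my rounded assumption for one ESBWR-class unit

units = math.ceil(target_gw / per_unit_gwe)
print(f"roughly {units} ESBWR-class units to cover {target_gw} GW")   # 5 units

# And that ignores grid losses, cooling water and the years it takes to build one.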

So I am personally left with questions. How about you? Have a great day and drink some irradiated tea. It makes you glow in the dark, which is good for visibility on the road and subsequent traffic safety.


Filed under Finance, IT, Media, Politics