
The accusers

I saw a message fly past and it took me by surprise. It was CNBC (aka Capitalistically Nothing but Crap) and the accusation was ‘Microsoft and Amazon are hurting cloud competition, UK regulator finds’ (at https://www.cnbc.com/2025/07/31/uk-cma-cloud-ruling-microsoft-amazon.html) with “The regulator is concerned that certain cloud market practices are creating a “lock-in” effect where businesses are trapped into unfavorable contractual agreements.” So, that’s a thing now? The operative word is concerned. So, is this the way Doug Gurr, the former Amazon UK boss now running the CMA on an interim basis, shows the world that he has shed the Amazon chain and necktie?

There is ‘some’ clustering, and as some advocate it, the score at present is “AWS holds approximately 29-31% market share, while Microsoft Azure has around 22-24%, and Google Cloud holds about 11.5-12%”. The only surprising thing here is that Google trails Microsoft by a little over 10%. Nothing to be worried about, but the numbers do set this out. The infuriating part is the CMA giving us “The CMA recommended a further investigation into Microsoft and Amazon under a strict new U.K. competition law to determine whether they have “strategic market status”.” I am not ‘attacking’ the CMA, but as the old credo goes, “Innovators create corporations, losers create hindrance for others”. I suggest you take that as it goes. 

Yet there is more behind this all. Forbes gave us last week ‘Microsoft Can’t Keep EU Data Safe From US Authorities’ (at https://www.forbes.com/sites/emmawoollacott/2025/07/22/microsoft-cant-keep-eu-data-safe-from-us-authorities/) where we see “Microsoft has admitted that it can’t protect EU data from U.S. snooping. In sworn testimony before a French Senate inquiry into the role of public procurement in promoting digital sovereignty, Anton Carniaux, Microsoft France’s director of public and legal affairs, was asked whether he could guarantee that French citizen data would never be transmitted to U.S. authorities without explicit French authorization. And, he replied, “No, I cannot guarantee it.”” And this is how Microsoft faces a near-death sentence from the American administration. So much so that Microsoft is seemingly creating a data centre solely for the EU. Julia Rone gave us late in 2024 “It has been well acknowledged that the European Union is falling behind the US and China when it comes to cloud computing because of its lack of technological capabilities. In a recently published article, however, I argue that there is another important and often overlooked reason for EU’s laggard status: the persistent disagreement between different EU member states, which have very different visions of EU cloud policy.” I take that at face value, whilst considering (through mere speculation) that these member states are connected to American stakeholders in media trying to hinder the process, but that is another matter.

So as we see ““Microsoft has openly admitted what many have long known: under laws like the Cloud Act, US authorities can compel access to data held by American cloud providers, regardless of where that data physically resides. UK or EU servers make no difference when jurisdiction lies elsewhere, and local subsidiaries or ‘trusted’ partnerships don’t change that reality,” commented Mark Boost, CEO of cloud provider Civo.” It makes me wonder how America is any different from the accusations that America threw in the face of Huawei. It is the pot calling the kettle black. And it also makes me wonder where the accusations against Amazon and Microsoft end, because the cloud field is seemingly loaded with political players. They all see that data is the ultimate currency, and America (as it is near broke) needs a lot of it to pay for the lifestyle it can no longer afford. In Europe the one that stands out (at least to me) is a firm I looked at in 2023 and it is growing rapidly. It is Swedish, not connected to any of the three, and it could become the largest in Europe. Its long-term vision involves operating eight hyper-scale data centers and three software development hubs across Europe by 2028, employing over 3,000 people. By 2030, the company aims to operate 10 hyper-scale data centers and employ over 10,000 people. There is too much focus on 2030; as I see it, the American economy collapses on itself no later than 2028 and, as I speculatively see it, it will drag Japan down with it. That setting requires a larger acceleration in both Europe and Asia, as America will not play nice from late 2026 onward. At that point too many people will see where showboat America is heading to, and the reefs in that area will be phenomenal. So, as I see it, the entire political swarm behind data centers and fictive AI will require a whole new range of management, and I reckon that players like Amazon and Microsoft have never been dealt these cards before, so I shudder to think what will happen when they face accusations from the EU, the CMA and others. This aligns with the accusation (from one source) giving us “An antitrust complaint filed by Google to the European Commission in September 2024, alleging that Microsoft’s licensing terms unfairly favor its own Azure cloud platform, making it difficult and expensive to use Microsoft software like Windows Server and Office on competing clouds.” I wonder, didn’t Microsoft play a similar game with gaming?

So whilst the infighting goes on and on, I wonder where Oracle will end up. As I see it this is rather nice, but I am telling myself at this point that we aren’t faced with a tidal wave, merely with five cups of tea all stating there is a storm happening, and whilst the teacups are talking to each other and showing how bad the storm is, the reality is that it is not smooth sailing, but seemingly as close to it as possible. For that you need to see where Evroc is standing, where it is going and how fast it is getting there. The second marker is Oracle, how it is progressing and who it is partnered with (pretty much everyone), and these two elements show us governmental captains stating that their pond is in a dreadful state (whilst presenting their cup of tea as a much larger pool than it is), corporate captains stating there is a storm brewing but presenting no evidence, and the media flaming every storm it can so that it can collect its digital dollars. But consider that Oracle is reporting good weather and that there are alternatives, whilst the media actively avoids illuminating Evroc, with only TechCrunch giving us in March “Amid calls for sovereign EU tech stack, Evroc raises $55M to build a hyper-scale cloud in Europe”. There were a few more mentions, all from technical outlets. The western media is largely absent as there are no digital dollars to be made here.

So consider what you see and try to see the larger picture, because there is a lot more, but some players don’t want you to see the whole image; it distorts their profit prediction. So did you see the little hidden snag? Where is Huawei Cloud? Whilst all this is going on, ‘Huawei hosts conference on cloud technology in Egypt’, where we see that “the event drew more than 600 government officials, business leaders, and ecosystem partners from over 10 countries and regions”. As I see it, this is a classic case of the “While two dogs are fighting for a bone, a third runs away with it” expression. So consider that part too, please.

Have a great day.


Filed under Finance, IT, Politics, Science

Drowning in misrepresentation

That is the setting as I personally believe it to be. The problem isn’t me, the problem is that politicians are clueless and as such the people will end up suffering. As we get the article (at https://www.theguardian.com/technology/2025/jul/30/zuckerberg-superintelligence-meta-ai) telling us ‘Zuckerberg claims ‘super-intelligence is now in sight’ as Meta lavishes billions on AI’, the dwindling situation is overlooked. This is not on Meta or on Mark the innovator Zuckerberg, well, perhaps it is a little on him. But consider the setting of “Whether it’s poaching top talent away from competitors, acquiring AI startups or proclaiming that it will build data centers the size of Manhattan, Meta has been on a spending spree to boost its artificial intelligence capabilities for months now”. So, what are you missing? It is easy to miss, and unless you are savvy with data, there is absolutely no blame on you. I will blame politicians for shoving the buck onto a pile that has no representation, and I do see that the political mind is merely ‘money savvy’; it has no real clue about data verification. There is a second point; it was given to me by someone (I don’t remember who) who stated “All AI startups are their own shells linking to ChatGPT”. I see the wisdom of that, but I never investigated it myself. You see, all these shells have issues with verification, and these startups don’t have the resources to properly verify the data they have, so you end up with a bucket of badly arranged and mislinked data. You would think that if they all link to ChatGPT it is a singular issue, but it is not. Language is one side, interpretation of what is there is another, and these are merely two sides of a much larger issue. And hiding behind “build data centers the size of Manhattan” is nothing other than a massive folly. You see, what will power this? Most places in this world have a clear shortage of power, and any data centre relying on power that isn’t there will crash with some regularity, and these data links are maintained in real time, so links will go wrong again and again. And that link is seen by ‘some’ as “A new study of a dozen A.I. -detection services by researchers at the University of Maryland found that they had erroneously flagged human-written text as A.I. -generated about 6.8 percent of the time, on average”, which implies that 1 in 15 statements is riddled with errors, and there is no way around it until the verification passes are sorted out. Consider that one source gives us “monthly searches to more than 30.4 million during the last month”; this gives us that AI events could have resulted in 2,026,666 possible erroneous results. And when that happens to something that was essential to your needs? When technical support and customer care fail because the numbers aren’t right? How long will you remain a customer? That is the folly I am foreseeing, and when all these firms (like Microsoft) are done shedding their people and realise that the knowledge they actually had was pushed out of the side door? Where does this leave the customers? Will they remain Microsoft, Amazon, IBM or Google customers? This is about to hit nearly every niche in American business. The ones that held on to their people and knowledge base tend to be decently safe, but the resources needed to clean up the mess this created will scuttle the European and American economies, as they overextended on the narrative they spun themselves, and when reality catches up, these people will see the dark light of a self-created nightmare.
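As a back-of-the-envelope check, here is a minimal sketch using only the two figures quoted above (the 6.8 percent false-flag rate and the roughly 30.4 million monthly searches); the arithmetic is mine, the inputs are from the cited sources.

```python
# Rough arithmetic on the quoted figures: a 6.8% false-flag rate applied to
# roughly 30.4 million monthly events. Both inputs come from the sources cited above.
flag_rate = 0.068            # ~1 in 15 human-written texts wrongly flagged as AI-generated
monthly_events = 30_400_000  # "more than 30.4 million during the last month"

false_flags = flag_rate * monthly_events
print(f"Roughly {false_flags:,.0f} potentially erroneous results per month")
# Dividing 30.4 million by 15 instead gives the ~2,026,666 figure used above.
```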

So in retrospect consider “Behind the hype of Microsoft backing and a $1B+ valuation, the company reportedly inflated numbers, burned through ~$450M funding, and collapsed into insolvency.” This setting was hyped on every channel and praised as a solution. It took less than a year to go from a billion to naught. How many even have a billion? That Microsoft backed it implies they were unaware of how bad things were, driven by a simple setting that should have been verified before they backed it to over $1,000,000,000.

Now, we can feel sorry for Zuckerberg, not for the money, he probably has more in his wallet, but the ones wanting in on such a ‘great endeavor’ are bound to lose everything they own. This is a very slippery slope, and governments that see what some call AI as a cheap solution to an expensive problem are likely to lose ownership of the data of their entire population, and these systems do not care who the owner is, they copy EVERYTHING. So where will that data end up going? I wonder who looked at the ownership of collected data and all the errors it has within itself.

The fear is not what it costs; for billions of people the fear is where their information will end up, and these politicians sell ‘sort of solutions’ which they cannot back with facts. In the end it will become the problem of a software engineer, a setting too complicated to understand for any politician who was too eager to put his name under this and who will merely shrug and say ‘I’m sorry’ whilst exiting through a side door, with his personal wallet filled to the brim, to a zero-tax nation with no extradition treaty.

A setting we will see the media repeat time after time without seriously digging into the mess, as they told us “Wall Street investors are happy with the expensive course Zuckerberg is charting. After the company reported better-than-expected financial results for yet another quarter, its stock soared by double digits.” All whilst the statement “Zuckerberg did not provide any details of what would qualify as “super-intelligence” versus standard artificial intelligence, he did say that it would pose “novel safety concerns”. “We’ll need to be rigorous about mitigating these risks and careful about what we choose to open source,”” is trivialized to the largest degree, and in all this there is no setting of verification. Weird, isn’t it? 

So feel free to enjoy your cub of toffee and don’t worry about the jacked setting of demonstration, which was tracked by the original AI as “enjoy your cup of coffee and don’t worry about the impact of verification”, because that is the likely heading of the coming super-intelligence.

Have a great day (not have a grate clay).


Filed under Finance, IT, Media, Politics, Science

Where should we look?

That is at times the issue. I would add to this “especially when we consider corporations the size of Microsoft”, but this is nothing directly on Microsoft (I emphasize this as I have been dead set against some ‘issues’ Microsoft dealt us). This is different and I have two articles that (to some aspect) overlap, but they are not the same and the overlap should be seen subjectively.

The first one is the BBC (at https://www.bbc.com/news/articles/c4gdnz1nlgyo) where we see ‘Microsoft servers hacked by Chinese groups, says tech giant’, where the first thought that overwhelmed me was “Didn’t you get Azure support arranged through China?” But that is in the back of my mind. We are given “Chinese “threat actors” have hacked some Microsoft SharePoint servers and targeted the data of the businesses using them, the firm has said. China state-backed Linen Typhoon and Violet Typhoon as well as China-based Storm-2603 were said to have “exploited vulnerabilities” in on-premises SharePoint servers, the kind used by firms, but not in its cloud-based service.” I am wondering about the quote “not in its cloud-based service”. I have questions, but I am not doubting the quote. To doubt it, one needs to have in-depth knowledge and be deeply versed in Azure, and I am not one of those people. As I personally see it, if one is transgressed upon, the opportunity arises to ‘infect’ both, but that might be my wrong take on this. So as we are given ““China firmly opposes and combats all forms of cyber attacks and cyber crime,” China’s US embassy spokesman said in a statement. “At the same time, we also firmly oppose smearing others without solid evidence,” continued Liu Pengyu in the statement posted on X. Microsoft said it had “high confidence” the hackers would continue to target systems which have not installed its security updates.” This makes me think about the UN/USA attack on Saudi Arabia regarding that columnist no one cares about, which gave us ‘high confidence’ from the CIA. It sounds like the start of a smear campaign. If you have evidence, present the evidence. If not, be quiet (to some extent). 

We then get someone who knows what he is talking about: “Charles Carmakal, chief technology officer at Mandiant Consulting firm, a division of Google Cloud, told BBC News it was “aware of several victims in several different sectors across a number of global geographies”. Carmakal said it appeared that governments and businesses that use SharePoint on their sites were the primary target.” This is where I got to thinking: what is the problem with SharePoint? And then consider the quote “Microsoft said Linen Typhoon had “focused on stealing intellectual property, primarily targeting organizations related to government, defence, strategic planning, and human rights” for 13 years. It added that Violet Typhoon had been “dedicated to espionage”, primarily targeting former government and military staff, non-governmental organizations, think tanks, higher education, the media, the financial sector and the health sector in the US, Europe, and East Asia.”

It sounds ‘nice’, but it flows towards thoughts like “related to government, defence, strategic planning, and human rights” for 13 years”, so where was the diligence in preventing issues with SharePoint and cyber crime? Consider that we are given “SharePoint hosts OneDrive for Business, which allows storage and synchronization of an individual’s personal work documents, as well as public/private file sharing of those documents.” That quote alone should have driven the need for much more rigorous cyber checks. And perhaps they were done, but as I see it, the result has been unsuccessful. It made me (perhaps incorrectly) think that with so many programs covering desktops, laptops, tablets and mobiles over different systems, a lot more cyber requirements should have been in place, and perhaps they are, but it is not working, and as I see it, since this solution has been in place for close to two decades and has faced 13 years of attempted transgression, it does not seem to be safe. 

And the end quote “Meanwhile, Storm-2603 was “assessed with medium confidence to be a China-based threat actor””, as such we stepped away from ‘high confidence’, making this setting a larger issue. And my largest issue is that when you look for “Linen Typhoon” you get loads of links, most of them no older than five days. If they have been active for 13 years, I should have found a collection of articles close to a decade old, but I never found them. Not in over a dozen pages of links. Weird, isn’t it? 

The next part comes from TechCrunch (at https://techcrunch.com/2025/07/22/google-microsoft-say-chinese-hackers-are-exploiting-sharepoint-zero-day/) where we are given ‘Google, Microsoft say Chinese hackers are exploiting SharePoint zero-day’, and this is important, as a zero-day means “The term “zero-day” originally referred to the number of days since a new piece of software was released to the public, so “zero-day software” was obtained by hacking into a developer’s computer before release. Eventually the term was applied to the vulnerabilities that allowed this hacking, and to the number of days that the vendor has had to fix them.” This implies that this issue has been in circulation for 23 years. And that implies a much larger issue, as the software solution is set over iOS, Android and Windows Server. Microsoft was eager to divulge that this solution is ‘available’ to over 200 million users as of December 2020. As I see it, the danger and damage might be spread over a much larger population. 

Part of the issue is that there is no clear path of the vulnerability. Consider the image below (based on a few speculations on how the interactions go). 

I get at least five danger points, and if there are multiple servers involved, there will be more. And as we are given “According to Microsoft, the three hacking groups were observed exploiting the zero-day vulnerability to break into vulnerable SharePoint servers as far back as July 7. Charles Carmakal, the chief technology officer at Google’s incident response unit Mandiant, told TechCrunch in an email that “at least one of the actors responsible” was a China-nexus hacking group, but noted that “multiple actors are now actively exploiting this vulnerability.”” I am left with questions. You see, when was this ‘zero-day’ exploit introduced? If it was ‘seen’ as per July 7, since when has the danger been in this system solution? There is also a lack in the BBC article when it comes to properly informing people. You cannot hit Microsoft with a limited information setting when the stakes are this high. Then there is the setting of what makes the typhoon sheets (Linen Typhoon) and the purple storm (Violet Typhoon) guilty as charged (charged might be the wrong word), and what makes the March 26th heavy weather (Storm-2603) guilty? 

I am not saying they cannot be guilty; I am seeing a lack of evidence. I am not saying that the people making these connections should ‘divulge’ all, but more details might not be the worst idea. And I am not blaming Microsoft here. I get that there is (a lot) more than meets the eye (making Microsoft a Constructicon), but the lack of information makes the setting one of misinformation and that needs to be said. The optional zero-day bug is one that is riddled with missing information. 

So then we get to the second article, which also comes from the BBC (at https://www.bbc.com/news/articles/czdv68gejm7o), giving us ‘OpenAI and UK sign deal to use AI in public services’, where we get “OpenAI, the firm behind ChatGPT, has signed a deal to use artificial intelligence (AI) to increase productivity in the UK’s public services, the government has announced. The agreement signed by the firm and the science department could give OpenAI access to government data and see its software used in education, defence, security, and the justice system.” Microsoft put billions into this and this is a connected setting. How long until the personal data of millions of people is out in the open for all kinds of settings? 

So as we are given “But digital privacy campaigners said the partnership showed “this government’s credulous approach to big tech’s increasingly dodgy sales pitch”. The agreement says the UK and OpenAI may develop an “information sharing programme” and will “develop safeguards that protect the public and uphold democratic values”.” So, data sharing? Why not set up a separate server, with the software solution also set to the government server? When some sales person tells you that there will be ‘additional safeties installed’, know that you are being bullshitted. Microsoft made similar promises in 2001 (Code Red) and even today the systems are still getting transgressed upon, and those are merely the hackers. The NSA and other American government agencies get near clean access to all of it, and that is a problem with American based servers. And even here, there is only so much that the GDPR (General Data Protection Regulation) allows for, and I reckon that there are loopholes for training data, and as such I reckon that the people in the UK will have to set up a name-and-shame setting with mandatory prosecution for anyone involved with this caper, going all the way up to Prime Minister Keir Starmer. So when you see mentions like the ““treasure trove of public data” the government holds “would be of enormous commercial value to OpenAI in helping to train the next incarnation of ChatGPT””, I would be mindful about handing over or giving access to this data; do not let it out of your hands. 

This link between the two is now clear. Data and transgressions have been going on since before 2001, and when these two settings combine and data gets ‘trained’, we are likely to see more issues, and when Prime Minister Keir Starmer goes “we’re sorry”, you better believe that the time has come to close the tap and throw Microsoft out of the windows of every governmental building in the Commonwealth. I doubt this will be done, as some sales person will heel over like a little bitch, and your personal data will become the data of everyone who is mentionable. They will then select the population that has value for commercial corporations, and the rest? The rest will become redundant by natural selection according to the value base of corporations. 

I get that you think this is now becoming a ‘conspiracy based’ setting and you resent that. I get it, I honestly do. But do you really trust UK Labour after they wasted 23 billion pounds on an NHS system that went awry (several years ago)? I have a lot of problems showing trust in any of this. I do not blame Microsoft, but the overlap is concerning, because at some point it will involve servers and transfers of data. And it is clear there are conflicting settings, and when someone learns to aggregate data and connect it to a mobile number, your value will be determined. And as these systems interconnect more and more, you will find that you face identity theft and value assessment not a handful of times, but once per X days, and as X decreases, you can pretty much rely on the fact that your value becomes debatable. I reckon this setting shows the larger danger, where one side sees your data as a treasure trove and the other claims it will “deliver prosperity for all”. That, and the diminished setting of “really be done transparently and ethically, with minimal data drawn from the public”, is a foundation of nightmares, mainly because “minimal data drawn from the public” tends to have a larger stage: it is set to whatever is needed to aggregate to other sources which lack protection at the larger level, and when we consider that any actor could get these two connected (and sold on), that should be considered a new kind of national security risk. America (and the UK) are already facing this, as these people left for the Emirates with their billions. Do you really think that this was the setting? It will get worse as America needs to hang on to any capital leaving America; do you think that this is different for the UK? Now, you need to consider what makes a person wealthy. This is not a simple question, as it is not the bank balance, but an overlap of factors. Consider that you have 2,000 people who enjoy life and 2,000 who are health nuts. Who do you think is set to a higher value? The insurance person says the health nut (insurance without claims), the retailer says the people who spend and live life. And the (so called) AI system has to filter in 3,000 people. So, who gets disregarded from the equation? That cannot be done until you have more data, and that is the issue. And the equation is never this simple; it will be set to thousands of elements, and these firms should not have access, so I fear for this data making it beyond UK grounds. 
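To make the ‘who gets disregarded’ question concrete, here is a minimal, entirely hypothetical sketch: two invented value functions (an insurer’s and a retailer’s) rank the same 4,000 people and each keeps 3,000, and they end up discarding largely different people. Every number and weight in it is made up for illustration.

```python
import random

random.seed(1)

# Hypothetical population: 2,000 spenders who "live life" and 2,000 health nuts.
people = (
    [{"id": i, "spend": random.uniform(2000, 8000), "claims": random.uniform(500, 3000)}
     for i in range(2000)] +
    [{"id": i, "spend": random.uniform(500, 2000), "claims": random.uniform(0, 400)}
     for i in range(2000, 4000)]
)

# Two made-up value functions: the insurer rewards low claims, the retailer rewards spending.
insurer_keep = {p["id"] for p in sorted(people, key=lambda p: p["claims"])[:3000]}
retailer_keep = {p["id"] for p in sorted(people, key=lambda p: -p["spend"])[:3000]}

# The 1,000 people each side disregards are not the same 1,000 people.
print(len(insurer_keep - retailer_keep), "people kept by the insurer but dropped by the retailer")
```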

A setting coming from overlaps, and none of this is the fault of Microsoft, but they will be connected to (and optionally blamed for) all this. As I personally see it, the two elements that matter in this case are “Digital rights campaign group Foxglove called the agreement “hopelessly vague”” and “Co-executive Director Martha Dark said the “treasure trove of public data” the government holds”, which will be of significant danger to public data, because greed-driven people tend to lose their heads over words like ‘treasure trove’ and that is where ‘errors are made’. I reckon it will not take long before the BBC or another media station trips up over the settings, making the optional claim that ‘glitches were found in the current system’ and no one was to blame. Yet that will not be the whole truth, will it?

So have a great day and consider the porky pies you are told and who is telling them to you, even if you consider that it is me. Make sure that you realise that I am merely telling you what is out in the open and what you need to consider. Have a great day.


Filed under Finance, IT, Law, Media, Politics, Science

Speculating on language

That was the setting I found myself in. There is the specific issue of an actual AI language, not the ones we have, but the one we need to create. You see, we might be getting close to trinary chips. As I personally see it, there is no AI as the settings aren’t ready for it (I’ve said that before), but we might be getting close to it, as the Dutch physicist has had a decade to set the premise of the proven Epsilon particle to a more robust setting, and it has been a decade (or close to it), and that sets the larger premise that an actual AI might become a reality (we’re still at least a decade away), but in that setting we need to reconsider the programming language. 

Binary      Trinary
NULL        NULL
TRUE        TRUE
FALSE       FALSE
            BOTH

We are in a binary digital world at present and it has served our purpose, but for an actual AI it does not suffice. You can believe the wannabes going on about ‘we can do this, we can do that’, and it will come up short. Wannabes who hide behind data-tables-within-data-tables solutions, and for the most part (as far as I saw it) only Oracle ever got that setting to work correctly. The rest merely grazes on that premise. You see, to explain this in the simplest of ways: any intelligence doesn’t hide behind black or white. It is a malleable setting of grey, as such both colors are required, and that is where trinary systems with both true and false activated will create the setting an AI needs. When you realise this, you see the bungles the business world needs to hide behind. They will sell these programmers (or engineers) down the drain at a moment’s notice (they will refer to it as corporate restructuring), and that will put thousands out of a job and put the largest data providers in class action suits up the wazoo. 
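A minimal sketch of what such a ‘BOTH’ value could look like in software, borrowing the four values from the table above (NULL, TRUE, FALSE, BOTH) and ordering them the way Belnap-style four-valued logics usually do. It is an illustration of the idea, not a description of any existing trinary chip or vendor product.

```python
from enum import Enum

class Tri(Enum):
    NULL = "NULL"    # no information at all
    TRUE = "TRUE"
    FALSE = "FALSE"
    BOTH = "BOTH"    # the grey zone: true and false at the same time

# Truth ordering: FALSE at the bottom, TRUE at the top, BOTH and NULL in between and incomparable.
RANK = {Tri.FALSE: 0, Tri.BOTH: 1, Tri.NULL: 1, Tri.TRUE: 2}

def tri_and(a: Tri, b: Tri) -> Tri:
    if a == b:
        return a
    if {a, b} == {Tri.BOTH, Tri.NULL}:
        return Tri.FALSE              # the two middle values pull a conjunction down
    return a if RANK[a] < RANK[b] else b

def tri_or(a: Tri, b: Tri) -> Tri:
    if a == b:
        return a
    if {a, b} == {Tri.BOTH, Tri.NULL}:
        return Tri.TRUE               # and pull a disjunction up
    return a if RANK[a] > RANK[b] else b

def tri_not(a: Tri) -> Tri:
    return {Tri.TRUE: Tri.FALSE, Tri.FALSE: Tri.TRUE}.get(a, a)  # BOTH and NULL stay put

print(tri_and(Tri.TRUE, Tri.BOTH))   # Tri.BOTH: the grey answer survives the conjunction
```

The only point of the sketch is that ‘grey’ is already representable and composable in software; whether hardware ever supports it natively is exactly the speculation above.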

When you see what I figured out a decade ago, the entire “AI” field is driven to nothing short of collapse. 

I kept it in the back of my mind and my mind kept working on the solutions it had figured out. So as I see it, something like C#+ is required: an extended version of C# with LISP libraries (the IBM version), as the only other one I had was a Borland program and I don’t think it will make the grade. As I personally see it (with my lack of knowledge), LISP might be a better fit to connect to C#. You see, this is the next step. As I see it, ‘upgrading’ C# is one setting, but LISP has the connectors required to make it work, and why reinvent the wheel? And when the greedy salespeople figure out what they missed over the last decade (the larger part of it), they will come with statements that it was a work in progress and that they are still addressing certain items. Weird, I got there a decade ago and they didn’t think I was the right material. As such you can file their versions in a folder called ‘What makes the grass grow in Texas?’ (me having a silly grin now). I still haven’t figured it all out, but with the trinary chip we will be on the verge of getting an actual AI working. Alas, the chip comes long after we bid farewell to Alan Turing; he would have been delighted to see that moment happen. The setting of gradual verification, a setting of data getting verified on the fly, will be the next best thing, and when the processor gives us grey scales that matter, we will see the contemplated ideas that will drive any actual AI system forward. It will not be pretty at the start. I reckon that IBM, Google and Amazon will drive this, and there is a chance that they will all unite with Adobe to make new strides. You think I am kidding, but I am not. You see, I refer to greyscales on purpose. The setting of true and false is only partially true. The combination of the approach of BOTH will drive solutions, and the idea of both being expressed through channels of grey (both true and false) will at first be a hindrance, but when you translate this to greyscales, the Adobe approach will start making sense. Adobe excels in this field, and when we set the ‘colorful’ approach of both True and False, we get a new dimension, and Adobe has worked in that setting for decades, long before the trinary idea became a reality. 

So is this a figment of my imagination?
It is a fair question. As I said, there is a lot of speculation throughout the data here, and as I see it, there is a decent reason to doubt me. I will not deny this, but those deep into DML and LLMs will see that I am speaking true, not false, and that is the start of the next cycle. A setting where LISP is adjusted for trinary chips will be the larger concern. And I got to that point at least half a decade ago. So when Google and Amazon figure out what to do, we get a new dance floor, a boxing square where the lights influence the shadows, and that will lead to the next iteration of this solution. Consider one of two flawed visions. One is that a fourth dimension casts a 3D shadow; by illuminating the concept of these multiple 3D shadows, the computer can work out 4D data constraints. The image of a dot was the shadow of a line, the image of a 2D shape was the shadow of a 3D object, and so on. When the AI gets that consideration (this is a flaky example, but it is the one that is in my mind) and it can see the multitude of 3D images, it can figure out the truth of the 4D datasets and it can actually fill in the blanks. Not the setting that NIP gives us now, like a chess computer that has all the games of history in its memory so it can figure out with some precision what comes next; that concept can be defeated by making what some chess players call ‘a silly move’. Now we are in a setting of more, as BOTH allows for more, and the stage can be illustrated by an actual AI figuring out what is really likely to be there. Not guesswork: the different images create a setting of nonrepudiation to a larger degree, as the image could only have been produced by what should have been there in the first place. And that is a massive calculation. Don’t think it will be deniable; the data that N 3D images give us sets the larger solution to a given fact. It is the result of 3 seconds of calculations, the result of a setting the brain could not work out in months. 
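To make the shadow analogy slightly more tangible, here is a toy sketch of my own (not anything from a paper): a 3D point is recovered exactly from its three 2D ‘shadows’ on the coordinate planes. The same bookkeeping extends one dimension up, which is the 4D idea described above.

```python
# A 3D point and its axis-aligned 2D "shadows" (projections).
point = (3.0, 5.0, 7.0)
shadow_xy = (point[0], point[1])   # what you see looking along the z axis
shadow_xz = (point[0], point[2])   # looking along the y axis
shadow_yz = (point[1], point[2])   # looking along the x axis

# Each shadow on its own is ambiguous; together they pin the point down completely.
reconstructed = (shadow_xy[0], shadow_yz[0], shadow_xz[1])
assert reconstructed == point
print(reconstructed)  # (3.0, 5.0, 7.0)
```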

It is the next step. At that point the computer will not take an educated guess, it will figure out what the singular solution would be. The setting that the added BOTH allows for. 

A proud setting as I might actually still be alive to see this reality come to pass. I doubt I will be alive to see the actual emergence of an Artificial Intelligence, but the start on that track was made in my lifetime. And with the other (unmentioned) fact, I am feeling pretty proud today. And it isn’t even lunchtime yet. Go figure.

Have a great day today.


Filed under Finance, IT, Science

IT said vs IT said

This is a setting we are about to enter. It was never rocket science, it was simplicity itself. And I mentioned it before, but now Forbes is also blowing the trumpet I sounded in a clarion call in the past. The article (at https://www.forbes.com/councils/forbestechcouncil/2025/07/11/hallucination-insurance-why-publishers-must-re-evaluate-fact-checking/) gives us ‘Hallucination Insurance: Why Publishers Must Re-Evaluate Fact-Checking’ with “On May 20, readers of the Chicago Sun-Times discovered an unusual recommendation in their Sunday paper: a summer reading list featuring fifteen books—only five of which existed. The remaining titles were fabricated by an AI model.” We have seen these issues in the past. A law firm citing cases that never existed is still my favourite at present. We get in continuation “Within hours, readers exposed the errors across the internet, sharply criticizing the newspaper’s credibility. This incident wasn’t merely embarrassing—it starkly highlighted the growing risks publishers face when AI-generated content isn’t rigorously verified.” We can focus on the setting about the high cost of AI errors, but as soon as the cost becomes too high, the parties behind such errors will play a Trump card and settle out of court, with the larger population being kept in the dark on all other settings. But it goes in a nice direction: “These missteps reinforce the reality that AI hallucinations and fact-checking failures are a growing, industry-wide problem. When editors fail to catch mistakes before publication, they leave readers to uncover the inaccuracies. Internal investigations ensue, editorial resources are diverted and public trust is significantly undermined.” You see, verification is key here and all of them are guilty. There is not one exception to this (as far as I can tell). I wrote about a setting like this in 2023 in ‘Eric Winter is a god’ (at https://lawlordtobe.com/2023/07/05/eric-winter-is-a-god/). There, on July 5th, I noticed a simple claim that Eric Winter (that famous guy from The Rookie) played a role in The Changeling (with the famous actor George C. Scott). The issue is twofold. The first is that Eric was less than 2 years old when the movie was made. The real person was Erick Vinther (playing a Young Man, uncredited). This simple error is still all over Google; as I see it, only IMDB has the true story. This is a simple setting, errors happen, but in the more than 2 years since I reported it, no one fixed it. So consider that these errors creep into a massive bulk of data, personal data becomes inaccurate, and these errors will continue to seep into other systems. Eric Winter at some point sees his biography riddled with movies and other works, where his memory fades under the guise of “Did I do this?”. And there will be more. As such, verification becomes key, and these errors will hamper multiple systems. And in this, I have some issues with the setting that Forbes paints. They give us “This exposes a critical editorial vulnerability: Human spot-checking alone is insufficient and not scalable for syndicated content. As the consequences of AI-driven errors become more visible, publishers should take a multi-layered approach”. You see, as I see it, there is a larger setting with context checking. A near impossible setting. As people rely on granularity, the setting becomes a lot more oblique. 
A simple example: “Standard deviation is a measure of how spread out a set of values is, relative to the average (mean) of those values.” That is merely one version; the second one is “This refers to the error in a compass reading caused by magnetic interference from the vessel’s structure, equipment, or cargo.” 

Yet the version I learned in the 70s is “Standard deviation, the offset between true north and magnetic north. This differs per year and the offset rotates in an eastern direction.” In English it is called the compass deviation, in Dutch the standard deviation, and that is the simple setting of how inaccuracies and confusions enter data settings (aka metadata), and that is where we go from bad to worse. The Forbes article illuminates one side, but it also gives rise to the utter madness that this StarGate project will, to some extent, become. Data upon data and the lack of verification. 
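A tiny sketch of why that matters downstream (all values made up): the same label ‘deviation’ can point at a statistical calculation or at a navigational correction, and a pipeline matching on the label alone cannot tell which one it inherited.

```python
import statistics

# Two unrelated quantities that can both end up tagged "deviation" in metadata.
sensor_readings = [4.1, 3.9, 4.4, 4.0, 4.2]
statistical_deviation = statistics.stdev(sensor_readings)   # spread around the mean

compass_deviation_deg = -3.5   # hypothetical ship-specific compass error, in degrees

records = [
    {"field": "deviation", "value": statistical_deviation, "unit": None},
    {"field": "deviation", "value": compass_deviation_deg, "unit": "degrees"},
]
# Without the unit (or richer context) the two rows are indistinguishable by field name alone.
print(records)
```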

As I see it, all these firms rely on ‘their’ version of AI, and in the bowels of their data are clusters of data lacking any verification. The setting of data explodes in many directions, and that lack works for me, as I have cleaned data for the better part of two decades. As I see it, dozens of data entry firms are looking at a new golden age. Their assistance will be required on several levels. And if you doubt me, consider builder.ai, backed by none other than Microsoft; they were a billion dollar firm and in no time they had the expected value of zero. And after the fact we learn that 700 engineers were at the heart of builder.ai (no fault of Microsoft), but in this I wonder how Microsoft never saw it coming. And that is merely the start. 

We can go on about other firms and how they rely on AI for shipping and customer care, and the larger setting that I speculatively predict is that people will try to stump the Amazon system. As such, what will it cost them in the end? Two days ago we were given ‘Microsoft racks up over $500 million in AI savings while slashing jobs, Bloomberg News reports’, so what will they end up saving when the data mismatches happen? Because it will happen, it will happen to all. Because these systems are not AI, they are deeper machine learning systems, optionally with LLM (Large Language Model) parts, and where an AI is supposed to clear new data, these systems can merely work on the data they have, verified data to be more precise, and none of these systems are properly vetted, and that will cost these companies dearly. I am speculating that the people fired on this premise might not be willing to return, making it an expensive sidestep to say the least. 

So don’t get me wrong, the Forbes article is excellent and you should read it. The end gives us “Regarding this final point, several effective tools already exist to help publishers implement scalable fact-checking, including Google Fact Check Explorer, Microsoft Recall, Full Fact AI, Logically Facts and Originality.ai Automated Fact Checker, the last of which is offered by my company.” So here we see the ‘Google Fact Check Explorer’. I do not know how far this goes, but as I showed you, the setting with Eric Winter has been there for years and no correction was made, even though IMDB doesn’t carry the error. I stated once before that movie credits should be checked against the age the actors (actresses too) had at the time the movie was made, and optional issues flagged; in the case of Eric Winter a setting of ‘first film or TV series’ might have helped. And this is merely entertainment, the least of the data settings. So what do you think will happen when Adobe or IBM (mere examples) release new versions and there is a glitch setting these versions in the data files? How many issues will occur then? I recollect that some programs had interfaces built to work together. Would you like to be the IT manager when that goes wrong? And it will not be one IT manager, it will be thousands of them. As I personally see it, I feel confident that there are massive gaps in the assumption of data safety at these companies. I introduced a term in the past, namely NIP (Near Intelligent Parsing), and that is the setting that these companies need to fix on. Because there is a setting that even I cannot foresee in this. I know languages, but there is a rather large setting between systems and the systems that still use legacy data; the gaps in there are (for as much as I have seen data) decently massive, and that implies inaccuracies to behold. 
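The age check suggested above is trivial to express. A minimal sketch follows; the names and years in it are placeholders for illustration, not verified data.

```python
from dataclasses import dataclass

@dataclass
class Credit:
    actor: str
    birth_year: int
    title: str
    release_year: int

def flag_implausible(credits, min_age=5):
    """Flag credits where the actor would have been younger than min_age at release."""
    return [c for c in credits if c.release_year - c.birth_year < min_age]

# Placeholder records: the point is the rule, not the dates.
credits = [
    Credit("Actor A", 1976, "Old Horror Film", 1980),
    Credit("Actor A", 1976, "Later TV Series", 2008),
]
for c in flag_implausible(credits):
    age = c.release_year - c.birth_year
    print(f"Check this credit: {c.actor} would have been {age} when {c.title} was released")
```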

I like the end of the Forbes article: “Publishers shouldn’t blindly fear using AI to generate content; instead, they should proactively safeguard their credibility by ensuring claim verification. Hallucinations are a known challenge—but in 2025, there’s no justification for letting them reach the public.” It is a fair approach, but there is a rather large setting towards the field of knowledge where it is applied. You see, language is merely one side of that story; another is the setting of measurements. As I see it (using an example), “It represents the amount of work done when a force of one newton moves an object one meter in the direction of the force. One joule is also equivalent to one watt-second.” You see, cars and engineering use the joule in multiple ways, so what happens when the data shifts and values are missed? This is all engineer and corrector based, and errors will get into the data. So what happens when lives are at stake? I am certain that this example goes a lot further than mere engineers. I reckon that similar settings exist in medical applications. And who will oversee these verifications?
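A minimal sketch of the kind of unit bookkeeping that keeps the joule example honest; the conversion factors are the standard SI equivalences, everything else is illustrative.

```python
# Standard SI equivalences for energy: everything is normalised to joules.
TO_JOULES = {
    "J": 1.0,
    "N*m": 1.0,          # one newton over one metre
    "W*s": 1.0,          # one watt for one second
    "kWh": 3_600_000.0,  # one kilowatt-hour
}

def to_joules(value: float, unit: str) -> float:
    if unit not in TO_JOULES:
        raise ValueError(f"Unknown energy unit: {unit}")  # fail loudly instead of guessing
    return value * TO_JOULES[unit]

# Two records that only agree once the units are normalised.
print(to_joules(1.0, "kWh"))    # 3600000.0
print(to_joules(3.6e6, "W*s"))  # 3600000.0
```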

All good questions, and I cannot give you an answer, because as I see it there is no AI, merely NIP, and some tools are fine with Deeper Machine Learning, but certain people seem to believe the spin they created, and that is where the corpses will show up, more often than not at the most inconvenient times. 

But that might merely be me. Well, time for me to get a few hours of snore time. I have to assassinate someone tomorrow and I want it to look good for the script it serves. I am a stickler for precision in those cases. Have a great day.


Filed under Finance, IT, Media, Science

SYSMIS(plenty)

Yes, this is sort of a hidden setting, but if you know the program you will be ahead of the rest (for now). Less than an hour ago I saw a picture with Larry Ellison (he must be an intelligent person, as we have the same first two letters in our first name). But the story is not really that, or perhaps it is, but I’ll get to that later.

I will agree with the generic setting that most of the most valuable data will be seen in Oracle. It is the second part I have an issue with (even though it sounds correct): yes, AI demand is skyrocketing. But as I personally see it, AI does not exist. There is generative AI, there are AI agents and there are a dozen settings under the sun advocating a non-existent realm of existence. I am not going into this, as I have done that several times before. You see, what is called AI is, as I see it, mere NIP (Near Intelligent Parsing), and that does need a little explaining. 

You see, like the old chess computers (90s), they weren’t intelligent; they merely had in memory every chess game ever played above a certain level. All those moves were in these computers. As such, there was every chance that the chess computer came into a setting where that board had been encountered before, and it then tried to play from that point onwards. It is a little more advanced than that, but that was the setting we faced. And wouldn’t you know it, some greed-driven salesperson will push the boundary towards the setting where he (or she) claims that the data you have will result in better sales. But (a massive ‘but’ comes along) that is assuming all the data is there, and mostly that is never the case.
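A minimal sketch of that chess-computer lookup idea; the positions and stored moves below are placeholders, not real opening data.

```python
# The old chess-computer trick: no intelligence, just a lookup of positions seen before.
# Keys stand in for board positions (placeholder strings, not real notation).
seen_positions = {
    "position-after-1.e4": "e5",
    "position-after-1.e4-e5-2.Nf3-Nc6": "Bb5",
}

def next_move(position: str):
    # If the position was played before, replay the stored continuation.
    # If not (the "silly move" case), the lookup has nothing to offer.
    return seen_positions.get(position)

print(next_move("position-after-1.e4"))           # "e5"
print(next_move("a position never seen before"))  # None
```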

So if we see the next image: some cells are red, there we have no data, and data that isn’t there cannot be created (sort of). In market research it is called system missing data. Researchers know what to do in those cases, but the bulk of the people trying to run and hide behind their data will be in the knowing-nothing pool. And this data set has a few hidden issues. Responses 6 and 7 are missing. So were they never there? Is there another reason? All things that these AI systems are unaware of, and until they are taught what to do, your data will create a mess you never saw before. Salespeople (for the most part) do not see it that way, because they were sold an AI system. Yet until someone teaches these systems what to do, they aren’t anything of the sort, and even after they are taught, there are still gaps in their knowledge because these systems will not assume until told so. They will not even know what to do when it goes wrong until someone tells them, and the salespeople using these systems will revert to ‘easy’ fixes, which are not fixes at all; they merely see the larger setting become less and less accurate in record time. They will rely on predictive analytics, but that solution can only work with data that is there, and when there is no data, there is merely no data to rely on. And that is the trap I foresaw in the case of [a censored software company] and the UAE and oil. There are too many unknowns, and I reckon that the oil industry will have a lot more data and bigger data, but with human elements in play, we will see missing data. And the better the data is, the more accurate the results. But as I saw it, errors start creeping in and more and more inaccuracies are set into the predictive data set, and that is where the problems start. It is not speculative, it is a dead certainty. This will happen. No matter how good you are, these systems are built too fast with too little training and too little error seeking. This will go wrong. Still, Larry is right: most of the world’s valuable data is in some system.
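To make the system-missing point concrete, a minimal sketch with a made-up response table in which respondents 6 and 7 are absent and a few answers are empty (pandas is assumed to be available).

```python
import numpy as np
import pandas as pd

# Made-up survey data: respondents 6 and 7 never appear, and some answers are system missing (NaN).
df = pd.DataFrame({
    "respondent": [1, 2, 3, 4, 5, 8, 9, 10],
    "q1": [3, 4, np.nan, 5, 2, np.nan, 4, 3],
    "q2": [1, np.nan, 2, 2, np.nan, 3, 1, 2],
})

# System-missing cells per question (the red cells in the image).
print(df[["q1", "q2"]].isna().sum())

# Respondents missing entirely: a gap the model never even sees unless someone looks for it.
expected = set(range(1, 11))
print("absent respondents:", sorted(expected - set(df["respondent"])))
```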

The problem is that no dataset is 100% complete, it never was, and that is the miscalculation the CEOs of tomorrow are making. And the assumption modes of the sales person selling and the sales person buying are in a dwindling setting, as they are all on the AI mountain, whilst there is every chance that several people will use AI as a gimmick sale and they don’t have a clue what they are buying, all whilst these people sign an ‘as is’ software solution. So when this comes to blows, the impact will be massive. We recently saw Microsoft standing behind builder.ai and it went broke. It seems that no one saw the 700 engineers programming it all (in this case I am not blaming Microsoft), but it leaves me with questions. And the setting of “Stargate is a $500 billion joint venture between OpenAI, SoftBank, Oracle, and investment firm MGX to build a massive AI infrastructure in the United States. The project, announced by Donald Trump, aims to establish the US as a leader in AI by constructing large-scale data centers and advancing AI research. Initial construction is underway in Texas, with plans for 20 data centers, each 500,000 square feet, within the next five years” leaves me with more questions. I do not doubt that OpenAI, SoftBank and Oracle all have the best intentions. But I have two questions on this. The first is how to align and verify the data, because that will be an adamant and also an essential step in this. Then we get to the larger setting that the data needs to align within itself. Are all the phrases exact? I don’t know, that is why I ask, and before you say that it makes sense that they do, reality gives us the ‘SQUARE-WINDOWED AIRPLANES’ of 1954, when two planes broke apart in mid-flight because metal fatigue was causing small cracks to form at the edges of the windows, and the pressurized cabins exploded. Then we have the ‘MARS ORBITER’, where two sets of engineers, one working in metric and the other working in the U.S. imperial system, failed to communicate at crucial moments in constructing the $125 million spacecraft. We tend to learn when we stumble, that is a given, so what happens when issues are found in the 11th hour in a 500 billion dollar setting? It is not unheard of, and I saw one particular speculative setting: how is this powered? A system on 500,000 square feet needs power, and 20 of them a hell of a lot more. So how many nuclear reactors are planned? I actually have an interesting idea (keeping this to myself for now). But any computer that loses power will go down immediately and all that training time is lost. How often does that need to happen for it to go wrong? You can train and test systems individually, but 20 data centers need power, even one needs power, and how certain is that power grid? I actually saw nothing of that in any literature (it might be that only a few have seen it), but the drastic setting from salespeople tends to be: let’s put in more power. But where from? Power is finite until created in advance, and that is something I haven’t seen. And then the time setting ‘within the next 5 years’. As I see it, this is a disaster waiting to happen. 
And as this starts in Texas, we have the quote “According to Texas native, Co-Founder and CFO of Atma Energy, Jaro Nummikoski, one of the main reasons Texas struggles with chronic power outages is the way our grid was originally designed—centralized power plants feeding energy over long distances through aging infrastructure.” Now I am certain that the power grid of a data centre will be top notch, but where does that power come from? And 500,000 sqft needs a lot of power; I honestly do not know how much. One source gave me “The facilities need at least 50 Megawatts (MW) of power supply, but some installations surpass this capacity. The energy requirements of the project will increase to 15 Gigawatts (GW) because of the ten data centers currently under construction, which equals the electricity usage of a small nation.” As such the call for a nuclear reactor comes to mind, yet the call for 15 GW is insane, and no single reactor at present exists to handle that. 50 MW per data center implies that wherever there is a data centre a reactor will be needed (OK, this is an exaggeration), but where there is more than one (up to four), a reactor will be needed. So who was aware of this? I reckon that the first centre in Texas will get a reactor, as Texas has plenty of power shortages and the increase in people and systems warrants such a move. But as far as I know those things require a little more than 5 years to build, and depending on the provider there are different timelines. As such I have reasons to doubt the 5 year setting (even more when we consider data). 
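Back-of-the-envelope, using only the figures quoted above plus one rounded assumption of mine (that a large reactor delivers on the order of 1 GW):

```python
# Figures quoted above, plus one rounded assumption for reactor output.
mw_per_site_floor = 50         # "at least 50 Megawatts (MW)" per facility
projected_total_gw = 15        # "will increase to 15 Gigawatts (GW)" for the ten sites under construction
sites_under_construction = 10
reactor_gw = 1.0               # assumption: one large reactor is on the order of 1 GW

print(projected_total_gw / sites_under_construction, "GW per site on the 15 GW projection")   # 1.5
print(projected_total_gw / reactor_gw, "reactor-equivalents for the projection as a whole")   # 15.0
print(projected_total_gw * 1000 / mw_per_site_floor, "facilities at the 50 MW floor for the same 15 GW")  # 300.0
```

On those quoted numbers the projection is not ‘50 MW here and there’ but reactor-scale power per site, which is exactly why the five-year timeline looks optimistic to me.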

As such I wonder when the media will actually look at the settings, at what will be achievable and at what is being implemented, and that is before we get to the training of the data behind these capers. As I personally (and speculatively) see it, will these data centers come with a warning light telling us SYSMIS(plenty), or a ‘too many holes in data’ error? Just a thought to have this Tuesday. 

Have a great day and when your chest glows in the dark you might be close to one of those nuclear reactors. 


Filed under Finance, IT, Media, Science

The call for investors

That is at present the larger setting: everyone wants investors and they all tend to promise the calf with golden horns. As I see it, investing in gold mining, oil mining and a few others gives a near dead-certain return on investment. The larger group will seemingly want to invest in AI, the new hype word. Still, Builder.ai going from a billion plus to zilch is a nice example of what Microsoft-backed solutions tend to give. You see, the larger picture that everyone is ignoring is that it was backed by Microsoft. Now, this might be OK, because Microsoft is a tech company. But consider that Builder.ai (previously known as Engineer.ai) was supposed to be all ‘good’, yet the media now reports ‘Builder.ai Collapsed After Finding Sales ‘Inflated By 300 Percent’’. This leads me to believe that there was a larger problem with this DML/LLM solution. Another source gives us ‘Builder.ai’s Collapse Exposes Deceptive AI Claims, Shocking Major Investors’ and another gives us ‘Builder.ai collapse exposes dangers of ‘FOMO investing’ in AI’, yet that is nothing compared to what I said on November 16th 2024 in ‘Is it a public service’ (at https://lawlordtobe.com/2024/11/16/is-it-a-public-service/), where I stated “a US strategy to prevent a Chinese military tech grab in the Gulf region” and it is my insight that this is a clicking clock. One tick, one tock leading to one mishap and Microsoft pretty much gives the store to China. And with that Aramco laughingly watches from the sidelines. There is no if in question. This becomes a mere shifting timeline and with every day that timeline becomes a lot more worrying.” With the added “But several sources state “There are several reasons why General AI is not yet a reality. However, there are various theories as to why: The required processing power doesn’t exist yet. As soon as we have more powerful machines (or quantum computing), our current algorithms will help us create a General AI” or to some extent. Marketing the spin of AI does not make it so.” You see, the entire DML/LLM field is not AI, as we can see from the builder.ai setting (a little presumptuous of me), but we got inflated sales, and then the Register ended their article with “The fact that it wasn’t able to convince enough customers to pay it enough money to stay solvent should give pause to those who see generative AI as a replacement for junior developers. As the experience of the unfortunate Microsoft staffers having to deal with the GitHub Copilot Agent shows, the technology still has some way to go. One day it might surpass a mediocre intern able to work a search engine, but that day is not today.” That is perhaps merely part of the problem; “the technology still has some way to go” is astute and to the point, but it is not the larger problem. It reminded me of the old market research setting: take a bucket of data and let MANOVA sort it out. The idea that a layman can sort it out is hilarious. I have met, over the last half century, fewer than a dozen people who knew what they were doing. These people are extremely rare. So whenever I heard a student tell me that they had a good solution with MANOVA, my eyes were tearing with howls of derisive laughter. And now we see a similar setting. But the larger setting is not merely the coded setting of DML and LLM. It is the stage where data is either not verified or verified in the most shallow of ways. And now consider that stage with a 500 billion dollar solution. 
Data is everything there and verification is one part of that key, a key too many are setting aside because it is not sexy enough.
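To illustrate the ‘bucket of data’ point, here is a minimal sketch (Python, using statsmodels, on an entirely made-up dataset of my own) of how such an analysis tends to be thrown together. The group labels and measures are hypothetical; the point is that the output looks authoritative whether or not the underlying data was ever verified.

```python
# Minimal sketch: a MANOVA run on unverified (here: random) data.
# Group labels and measure names are hypothetical; the test happily produces
# tidy statistics whether or not the data means anything.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "group": np.repeat(["A", "B", "C"], 50),       # hypothetical segments
    "sales": rng.normal(100, 15, 150),             # unverified measure 1
    "satisfaction": rng.normal(7, 1.5, 150),       # unverified measure 2
})

# One line and MANOVA 'sorts it out' - the output says nothing about data quality.
result = MANOVA.from_formula("sales + satisfaction ~ group", data=df)
print(result.mv_test())
```

The table that comes out carries Wilks’ lambda, F values and p values; none of those numbers tell you whether the bucket of data was worth analysing in the first place.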

And now we get to the investors who are in “Fear Of Missing Out”; for them I have a consolation prize. You see, RigZone gave me (at https://www.rigzone.com/news/adnoc_suppliers_pledge_817mm_investment_for_uae_manufacturing-27-may-2025-180646-article/) hours ago ‘ADNOC Suppliers Pledge $817MM Investment for UAE Manufacturing’, and as I see it oil is a near certainty for achieving ROI. As everyone is chasing the AI dream (which of course does not exist yet), those greedy, hungry money people are looking away from the certainty piggybank (as I personally see it), and that kind of investment in manufacturing will bring products, sellable products, and in the petrochemical industry that is like butter with the fish: a near certainty on investment. I prefer the expression ‘near certainty’ as there is always some risk, yet as I see it ARAMCO and ADNOC are setting the bar of achievement high enough to get that done, and as I see it “ADNOC said the facilities are situated throughout the Industrial City of Abu Dhabi (ICAD), Khalifa Economic Zones Abu Dhabi (KEZAD), Dubai Industrial Park, Jebel Ali Free Zone (JAFZA), Sharjah Airport International Free Zone (SAIF Zone), and Umm Al Quwain. They will generate over 3,500 high-skilled jobs in the private sector and produce a diverse array of industrial goods such as pressure vessels, pipe coatings, and fasteners.” As such the only danger is that ADNOC will not be able to fill the positions, and that is at present the easiest score to settle.

So as we see the call for investors coming from the sound of a dozen bugles, remember the old premise that a call from a setting that works beats the golden horns that some promise; the investors will need another setting (or so I figure). And in the end, the larger question is why Builder.ai was backed in the first place. Microsoft has a setting with OpenAI, and as one source gives me “Microsoft and OpenAI have a significant partnership, where Microsoft is a major investor and supports OpenAI’s advancements, and OpenAI provides access to powerful language models through Microsoft’s Azure platform. This partnership enables Azure OpenAI Service, which provides access to OpenAI’s models for businesses, and it also includes a revenue-sharing agreement.” I cannot vouch for the source, but the idea is: when this is going on, why go at it with Builder.ai? And was Builder.ai vetted? The entire setting is raising more questions than I normally would have (sellers have their own agenda, and including Microsoft in this is ‘to them’ a normal setting). I do not oppose that, but when we see this interaction I wonder how dangerous that Stargate will be, and $500,000,000,000 ain’t hay.
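For context on what that partnership already gives a business out of the box, here is a minimal sketch (Python, using the openai package) of calling an OpenAI model through Azure OpenAI Service. The endpoint, key and deployment name are placeholders of mine, not anything from the articles.

```python
# Minimal sketch of Azure OpenAI Service access, assuming an existing Azure
# OpenAI resource. Endpoint, key and deployment name below are hypothetical.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",  # hypothetical
    api_key="YOUR_KEY_HERE",                                     # hypothetical
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt-4o-deployment",  # the Azure deployment name, not the raw model id
    messages=[{"role": "user", "content": "Draft a rough spec for a booking app."}],
)
print(response.choices[0].message.content)
```

If that is already on tap through Azure, the question of why a separate app-building middleman was needed only gets sharper.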

And going back to ADNOC we see “ADNOC’s commercial agreements under the In-Country Value (ICV) program have enabled facilities that allow businesses to benefit from diverse commercial opportunities, the company said. The ICV program aims to manufacture AED90 billion ($24.5 billion) worth of products locally in its procurement pipeline by 2030.” More impressive is the quote “ADNOC’s ICV program has contributed AED242 billion ($65.8 billion) to the UAE economy and created 17,000 jobs for UAE nationals since 2018, according to the company.” You see, such a move makes sense, as the UAE produces 3.22 million barrels per day, a level achieved from 2024 onward, and some say that they exceeded their quota (by how much is unknown to me). That makes sense as an investment; the entire fictive AI setting does not, and ever since the Builder.ai setting it makes a lot less sense, if only for the simple reason that no one can clearly state where that billion plus went. Oh, and how many investments collapsed and who were those investors? Simple questions really.
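As a quick sanity check on those figures, a couple of lines of Python (assuming the usual AED peg of roughly 3.67 to the US dollar) show why the contribution number has to read billions rather than millions:

```python
# Quick sanity check on the ADNOC ICV figures, assuming the AED/USD peg of ~3.6725.
AED_PER_USD = 3.6725

procurement_target_aed = 90e9    # AED 90 billion target by 2030
contribution_aed = 242e9         # AED 242 billion contributed since 2018

print(f"Procurement target: ${procurement_target_aed / AED_PER_USD / 1e9:.1f} billion")  # ~24.5
print(f"ICV contribution:   ${contribution_aed / AED_PER_USD / 1e9:.1f} billion")        # ~65.9
```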

Have a great day and try not to chase too many Edsels with your investment portfolio.


Filed under Finance, IT, Media, Science

Curveballs

Sometimes life throws you a curveball, that is the simplicity of effects. It is a curveball because people cannot foresee them, and at times because it comes from an unexpected side. There is basically nothing you can do about it. You just have to accept it. Whether it was fate, karma or luck, these things happen.

The subject of the ‘guilty’ party is Google (or Alphabet, whatever you want to call it) and the guilty person in this is Sergey Brin (now without a beard, apparently). So yesterday I was handed two articles; they came basically out of nowhere and appeared in my search finds. I am not even sure what I was looking for, but there you have it.

First comes ZDNet with ‘I tried Google’s XR headset, and it already beats the Apple Vision Pro in 3 ways’. I don’t know about that as I never tried either, but as Apple seems to be asleep at the wheel, let’s see if Google can make something of this. You see, as Apple was asleep I created 2+ IP solutions for them, but wouldn’t you know it, they are still seemingly asleep.

The first one is seemingly the latest, but it was the first my mind created using the idea of partnering Guerrilla Games with Apple, and it could just as easily be Google. I mentioned this in ‘Are there two coins?’ (at https://lawlordtobe.com/2025/05/18/are-there-two-coins/), an optional setting that would give uniqueness and drive to the Apple Vision Pro. Not that I really care, as my noggin tends to solve issues, like melting down Iranian/Russian nuclear reactors (a story for another time); I also created a stealth solution to make Iranian harbours useless for extended times. I cannot control my mind at times. But in this case I wondered what Apple could have done and I came up with several solutions that seemingly slipped their minds. The second set of IP was linked to Ubisoft, and now we get to the second article. It was TechCrunch who gave me the second part with ‘Google launches AI tools for practicing languages through personalized lessons’. This seems fine, but in ‘One step left for a new world’ (at https://lawlordtobe.com/2024/11/16/one-step-left-for-a-new-world/), which I wrote on November 16th 2024, I gave the setting that Ubisoft with its Assassin’s Creed franchise had the ability to create language skills, as they had already created over 80% and that was the hard part. Now I see that the missing part has been created by Google and we get to see (at https://techcrunch.com/2025/04/29/google-launches-ai-tools-for-practicing-languages-through-personalized-lessons/) “Google on Tuesday is releasing three new AI experiments aimed at helping people learn to speak a new language in a more personalized way. While the experiments are still in the early stages, it’s possible that the company is looking to take on Duolingo with the help of Gemini, Google’s multimodal large language model.” So consider that AC Brotherhood could give you lessons in Italian and Latin, AC Unity could cover French and AC Syndicate could cover English. English could also be taught using Watch Dogs 2 and 3 (Legion), there is of course Egyptian (AC Origins) and Arabic or Persian from AC Mirage. These games are ready and could be transferred to the XR headset, making it even more personal, and the kicker is that Apple had these options for over a year. Sucks being Granny Smith, doesn’t it? Oh, and if Google hadn’t done away with their Stadia they could have had at least 6 billion a year extra (phase one) and a lot more after that. Seems that they weren’t all awake either.
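To show how thin the missing piece really is, here is a minimal sketch (Python, using the google-generativeai package) of turning a line of in-game dialogue into a personalized lesson. The model name, API key and the sample line are placeholders of mine, not anything Google or Ubisoft ships.

```python
# Minimal sketch: turn a line of (hypothetical) game dialogue into a language lesson
# with Gemini. Model name, API key and the dialogue line are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_KEY_HERE")  # hypothetical key
model = genai.GenerativeModel("gemini-1.5-flash")

dialogue = "Requiescat in pace."  # a line the player just heard in-game
prompt = (
    "The player just heard this line in a game set in Renaissance Italy: "
    f"'{dialogue}'. Explain what it means, give a pronunciation hint, "
    "and ask the player one short practice question in Italian or Latin."
)

lesson = model.generate_content(prompt)
print(lesson.text)
```

The game already supplies the setting, the voice acting and the context; the lesson generation is the easy 20%, which is exactly the part Google just built.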

And all this was already on my blog site. As such there is a question where Apple gets its ideas, but in light of the failures I saw in 2024 I am not going to go there. Still, if Google can do something more, they are welcome to give it a go (a donation to yours truly would be perfectly acceptable).

Not the worst setting for today, but in a few hours I am going to hand some dodo its liver; I feel a little frisky today. It’s not the weather, it wasn’t raining, so I am decently fine. Have a great day.


Filed under Finance, Gaming, IT, Media, Science

A swing and a miss

It is no secret that I hold the ‘possessors’ of AI at a distance. AI doesn’t exist (not yet at least) and now I got ‘informed’ through Twitter (still refusing to call it X) of the following:

So after ‘Microsoft-backed Builder.ai collapsed after finding potentially bogus sales’ we get that the company is entering insolvency proceedings. Yet a mere three days ago TechCrunch gave us “Once worth over $1B, Microsoft-backed Builder.ai is running out of money”, so, with a giggle on my mind, I give you “Can’t have been a very good AI, can it?” So from +$1,000,000,000 to zilch (aka insolvency), how long did that take and where did the money go? So consider this, TechCrunch also gives us “The Microsoft-backed unicorn, which has raised more than $450 million in funding, rose to prominence for its AI-based platform that aimed to simplify the process of building apps and websites. According to the spokesperson, Builder.ai, also known as Engineer.ai Corporation, is appointing an administrator to “manage the company’s affairs.”” Now, I am going out on a limb here. Consider that a billion will enable 1,000 programmers to work a year for a million dollars each. So where did the money go? I know that this doesn’t make sense (the 1,000 programmers), but consider that they might accept a deal for $200,000 each; that would be 5 years of designing and programming. Does that make sense? The website Builder.ai (my assumption that this is where they went) gives us merely one line: “For customer enquiries, please contact customers@builder.ai. For capacity partner enquiries, please contact capacitynetwork@builder.ai.” This is not good as I see it. The Register (at https://www.theregister.com/2025/05/21/builderai_insolvency/) gives us “The collapse of Builder.ai has cast fresh light on AI coding practices, despite the software company blaming its fall from grace on poor historical decision-making. Backed by Microsoft, Qatar’s sovereign wealth fund, and a host of venture capitalists, Britain-based Builder.ai rose rapidly to near-unicorn status as the startup’s valuation approached $1 billion (£740 million). The London company’s business model was to leverage AI tools to allow customers to design and create applications, although the Builder.ai team actually built the apps.”
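That back-of-the-envelope can be written down; a few lines of Python (the headcount and salary figures are my own assumptions, as above, not anything reported) show how far a billion, or the reported $450 million, actually stretches:

```python
# Back-of-the-envelope: how many engineer-years does the funding buy?
# Headcount and cost-per-engineer are my assumptions, not reported figures.
def engineer_years(funding_usd: float, cost_per_engineer_per_year: float) -> float:
    return funding_usd / cost_per_engineer_per_year

for funding in (1_000_000_000, 450_000_000):
    for cost in (1_000_000, 200_000):
        years_for_team = engineer_years(funding, cost) / 1000  # spread over 1,000 engineers
        print(f"${funding:,} at ${cost:,}/engineer-year -> "
              f"{years_for_team:.1f} years for a team of 1,000")
```

Whichever assumption you prefer, the money buys years of work for a very large team, which makes “where did it go?” a fair question.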

As such the headline of the Register is pretty much spot on: “Builder.ai coded itself into a corner – now it’s bankrupt”. You see, coding yourself into a corner is not AI, it is people. People code, and when you code yourself into a corner the gig is quite literally up. And I can go on all day, as there is no AI. There is deep machine learning (DML) and there are LLMs (Large Language Models), and the combination can be awesome and is part of an actual AI, but it is not AI. So, as Microsoft is believing its own spin (yet again), we can conclude that there is now a setting where Qatar’s sovereign wealth fund and a host of venture capitalists have pretty much lost their faith in Microsoft, and that will have repercussions. It is basically that simple. The first part of resolving this is to acknowledge that there is no AI; there is a clear setting that the power of DML and LLM should not be dismissed, as it is really powerful, but it is not AI.

As I personally see it, the LLM is at the stage that chess computers had in the late 80’s and early 90’s. They basically had every chess game ever played in their memory, and that is how the chess computer could foresee what could possibly be thrown against it. And until 2002, when Chessmaster 9000 was released by Ubisoft, that was what it was, and for that time it was awesome. I would never have been able to get as far as I did in chess without that program, and I am speculatively seeing that unfold again. A setting holding a billion parameters? So I might be wrong on this part, but that is what I see, and we need to realise that the entire AI setting is spin from greedy salespeople who cannot explain what they are selling (thank god I am not a salesperson). I am technical support and I am customer care, and what we see as ‘the hand of a clever person’ is not that, not even close.
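To make the chess analogy concrete, here is a minimal sketch (Python, with a toy, made-up game book of my own) of a ‘player’ that only ever replays the most common reply it has recorded for a position. That is pattern recall, not reasoning, which is roughly the distinction I am drawing between the DML/LLM combination and an actual AI.

```python
# Toy sketch of the chess-book analogy: pick the most frequent recorded reply
# for a position. The tiny "book" below is made up; a real engine also searches.
from collections import Counter, defaultdict

# position -> list of replies seen in past games (all entries are hypothetical)
game_book = defaultdict(list)
game_book["start"] += ["e4", "e4", "d4", "c4"]
game_book["e4 e5"] += ["Nf3", "Nf3", "Bc4"]

def book_move(position: str):
    """Return the most common recorded reply, or None if the position is unknown."""
    replies = game_book.get(position)
    if not replies:
        return None  # no memory of this position: the lookup player has nothing to say
    return Counter(replies).most_common(1)[0][0]

print(book_move("start"))   # 'e4'  - recall of the most frequent pattern
print(book_move("e4 e5"))   # 'Nf3'
print(book_move("novel"))   # None  - outside its data, it cannot reason its way out
```

Scale the book up to billions of parameters and the recall becomes impressive, but the mechanism is still recall.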

So as we are also given “Blue-chip investors poured in cash to the tune of more than $500 million. However, all was not well at the startup. The company was previously known as Engineer.ai, and attracted criticism after The Wall Street Journal revealed in 2019 that the startup used human engineers rather than AI for most of its coding work”, as such (again speculation) a simple trick replayed a mere 1,800 days later. And this is what a lot of them are doing (plenty of them in a more clever way), but the spotlight is now on Microsoft. They backed this, so when they come with a “we were lured” or “it is more complex and the concept was looking really good”, we should ask them a few hard questions. So whilst we are given “While the failure of startups, even one as high profile as Builder.ai, is not uncommon, the company’s reliance on AI tools to speed coding might give some users pause for thought.”, that “might give some users pause for thought” is a rather nasty setting, as I was there already years ago. So where were the others? As such we should grill Satya Nadella on “Last month, Microsoft CEO Satya Nadella boasted that 30 percent of the code in some of the tech giant’s repositories was written by AI. As such, an observer cannot help but suspect some passive aggression is occurring here, where a developer has been told that the agent must be used, and so they are going to jolly well do it. After all, Nadella is not one to shy from layoffs.” As such I wonder when the stakeholders of Microsoft will consider that the ‘USE BY’ date of Satya Nadella was only good until December 2024. But that is me merely speculating. So I wonder when the media, and the actual clever people in media, will consider that this is a game that can only be postponed and not won. So will the others run when the going gets tough, or will they hide behind “but everyone agrees on this”? As such the individual bond will triumph and there is a lot of work out there. What needs explaining to people (read: customers) is that there is a lot of good to be found in the DML and LLM combination. It remains a niche market and it will fill the markets when people cannot afford AI, because that setting will be expensive (when it is ready). Those computers will be the things that IBM can afford, as can larger players like an airline, Ford, LVMH (Louis Vuitton Moët Hennessy) and a few others. But for the first 10 years it will remain out of the hands of most, unless they time share (pay per processor second) with anyone who can afford one. That computer will need to work 80%+ of the time to be affordable.
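That 80%+ claim is easy to put into numbers; a short sketch (Python, with made-up hardware and amortization figures of my own) shows how the effective cost per processor-second climbs when utilization drops, which is the whole argument for time sharing:

```python
# Sketch of the time-share economics: cost per processor-second versus utilization.
# All figures (machine cost, lifetime) are my own illustrative assumptions.
SECONDS_PER_YEAR = 365 * 24 * 3600

machine_cost_usd = 50_000_000   # hypothetical price of the machine
lifetime_years = 5              # hypothetical amortization period

for utilization in (0.95, 0.80, 0.50, 0.20):
    billable_seconds = SECONDS_PER_YEAR * lifetime_years * utilization
    cost_per_second = machine_cost_usd / billable_seconds
    print(f"utilization {utilization:.0%}: ${cost_per_second:.3f} per processor-second")
```

Every idle hour still has to be paid for, so the closer the box sits to full load the cheaper each rented second becomes; below roughly 80% the per-second price climbs quickly.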

As such we will see a total amount of spin in the coming months, because Microsoft backed the wrong end of that equation and now the fires are reaching their feet. Less than an hour ago we were given ‘Microsoft Unveils AI Features for Windows 11 Tools’. I have no idea how they can fit this in, but I reckon that the media will avoid asking the questions that matter. As such we will have to await the unfolding of the people behind Builder.ai. I wonder if anyone will ask for the specification of what happened to said billion dollars? Can we get a clear list please, and where did the hardware end up? Or was a mere server rack leased from Microsoft? This is just me having fun at present.

So have a great day and I will sleep like a baby knowing that Microsoft swung and missed the ball by a fair bit. I reckon that this is…. Let’s see, there was the tablet, which they lost against Apple and now Huawei as well. There was the gaming station, which was totally inferior to Sony’s. There was Azure (OK, it didn’t fail, but a book vendor called Amazon has a much better product). There was the browser, which is nowhere near as good as Google’s. And there are a few others, but they slipped my mind. So this is at least number 5, 6 if you count Huawei as a player as well. Not really that good for a company that is valued at $3.34 trillion. So how many failures will we witness until that is gone too?

Have fun out there today.


Filed under Finance, IT, Media, Science