Tag Archives: Large Language Models

IT said vs IT said

This is a setting we are about to enter. It was never rocket science, it was simplicity itself. And I mentioned it before, but now Forbes is also blowing the trumpet I sounded in a clarion call in the past. The article (at https://www.forbes.com/councils/forbestechcouncil/2025/07/11/hallucination-insurance-why-publishers-must-re-evaluate-fact-checking/) gives us ‘Hallucination Insurance: Why Publishers Must Re-Evaluate Fact-Checking’ with “On May 20, readers of the Chicago Sun-Times discovered an unusual recommendation in their Sunday paper: a summer reading list featuring fifteen books—only five of which existed. The remaining titles were fabricated by an AI model.” We have seen these issues in the past. A law firm citing cases that never existed is still my favourite at present. We get in continuation “Within hours, readers exposed the errors across the internet, sharply criticizing the newspaper’s credibility. This incident wasn’t merely embarrassing—it starkly highlighted the growing risks publishers face when AI-generated content isn’t rigorously verified.” We can focus on the high cost of AI errors, but as soon as the cost becomes too high, the parties behind the error will play a trump card and settle out of court, with the larger population left in the dark on all other settings. But it goes in a nice direction “These missteps reinforce the reality that AI hallucinations and fact-checking failures are a growing, industry-wide problem. When editors fail to catch mistakes before publication, they leave readers to uncover the inaccuracies. Internal investigations ensue, editorial resources are diverted and public trust is significantly undermined.” You see, verification is key here and all of them are guilty.
There is not one exception to this (as far as I can tell). There was a setting I wrote about in 2023 in ‘Eric Winter is a god’ (at https://lawlordtobe.com/2023/07/05/eric-winter-is-a-god/); there, on July 5th, I noticed a simple setting: that Eric Winter (that famous guy from The Rookie) supposedly played a role in The Changeling (with the famous actor George C. Scott). The issue is twofold. The first is that Eric was less than 2 years old when the movie was made. The real person was Erick Vinther (playing a Young Man (uncredited)). This simple error is still all over Google; as I see it, only IMDb has the true story. This is a simple setting, errors happen, but in the more than two years since I reported it, no one fixed this. So consider that these errors creep into a massive bulk of data, personal data becomes inaccurate, and these errors will continue to seep into other systems. At some point Eric Winter sees his biography riddled with movies and other works, his memory fading under the guise of “Did I do this?”. And there will be more; as such, verification becomes key and these errors will hamper multiple systems. And in this, I have some issues with the setting that Forbes paints. They give us “This exposes a critical editorial vulnerability: Human spot-checking alone is insufficient and not scalable for syndicated content. As the consequences of AI-driven errors become more visible, publishers should take a multi-layered approach” you see, as I see it, there is a larger setting with context checking. A near impossible setting. As people rely on granularity, the setting becomes a lot more oblique. A simple example: “Standard deviation is a measure of how spread out a set of values is, relative to the average (mean) of those values.” That is merely one version, the second one is “This refers to the error in a compass reading caused by magnetic interference from the vessel’s structure, equipment, or cargo.”

Yet the version I learned in the 70’s is “Standard deviation: the offset between true north and magnetic north. This differs per year and the offset rotates in an eastern direction.” In English it is called compass deviation, in Dutch the Standard Deviation, and that is the simple setting of how inaccuracies and confusions enter data settings (aka metadata), and that is where we go from bad to worse. The Forbes article illuminates one side, but it also gives rise to the utter madness that this Stargate project will to some extent become. Data upon data and the lack of verification.
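To make the ambiguity concrete: of the three definitions above, only the statistical one can be computed from a column of values; the nautical meanings are entirely different calculations, which is exactly the confusion. A minimal sketch of that first meaning:

```python
import statistics

# Spread of a set of values relative to their mean -- the statistical
# "standard deviation", nothing to do with compasses or magnetic north.
values = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mean = sum(values) / len(values)       # 5.0
pstdev = statistics.pstdev(values)     # population standard deviation

print(mean, pstdev)  # 5.0 2.0
```

The values here are an arbitrary textbook example; the point is that a system ingesting "standard deviation" as a field name has no way to tell which of the three definitions the number was produced under.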

As I see it, all these firms rely on ‘their’ version of AI, and in the bowels of their data are clusters of data lacking any verification. The setting of data explodes in many directions and that lack works for me, as I have cleaned data for the better part of two decades. As I see it, dozens of data entry firms are looking at a new golden age. Their assistance will be required on several levels. And if you doubt me, consider Builder.ai, backed by none other than Microsoft; they were a billion dollar firm and in no time they had an expected value of zero. And after the fact we learn that 700 engineers were at the heart of Builder.ai (no fault of Microsoft), but in this I wonder how Microsoft never saw this. And that is merely the start.

We can go on about other firms and how they rely on AI for shipping and customer care, and the larger setting that I speculatively predict is that people will try to stump the Amazon system. As such, what will it cost them in the end? Two days ago we were given ‘Microsoft racks up over $500 million in AI savings while slashing jobs, Bloomberg News reports’, so what will they end up saving when the data mismatches happen? Because it will happen, it will happen to all. Because these systems are not AI, they are deeper machine learning systems, optionally with LLM (Large Language Model) parts, and whereas an AI is supposed to clear new data, these can merely work on the data they have, verified data to be more precise, and none of these systems are properly vetted; that will cost these companies dearly. I am speculating that the people fired on this premise might not be willing to return, making it an expensive sidestep to say the least.

So don’t get me wrong, the Forbes article is excellent and you should read it. The end gives us “Regarding this final point, several effective tools already exist to help publishers implement scalable fact-checking, including Google Fact Check Explorer, Microsoft Recall, Full Fact AI, Logically Facts and Originality.ai Automated Fact Checker, the last of which is offered by my company.” So here we see the ‘Google Fact Check Explorer’. I do not know how far this goes, but as I showed you, the setting with Eric Winter has been there for years and no correction was made, even though IMDb has it right. I stated once before that movies should be checked against the age the actors (actresses too) had at the time the movie was made, flagging optional issues; in the case of Eric Winter a ‘first film or TV series’ check might have helped. And this is merely entertainment, the least of the data settings. So what do you think will happen when Adobe or IBM (mere examples) releases new versions and there is a glitch setting these versions in the data files? How many issues will occur then? I recollect that some programs had interfaces built to work together. Would you like to see the IT manager when that goes wrong? And it will not be one IT manager, it will be thousands of them. As I personally see it, I feel confident that there are massive gaps in these companies’ assumptions of data safety. I introduced a term in the past, namely NIP (Near Intelligent Parsing), and that is the setting these companies need to focus on. Because there is a setting that even I cannot foresee in this. I know languages, but there is a rather large setting between systems and the systems that still use legacy data; the gaps in there are (for as much as I have seen data) decently massive and that implies inaccuracies to behold.

I like the end of the Forbes article “Publishers shouldn’t blindly fear using AI to generate content; instead, they should proactively safeguard their credibility by ensuring claim verification. Hallucinations are a known challenge—but in 2025, there’s no justification for letting them reach the public.” It is a fair approach, but there is a rather large setting towards the field of knowledge where it is applied. You see, language is merely one side of that story; the setting of measurements is another. As I see it (using an example) “It represents the amount of work done when a force of one newton moves an object one meter in the direction of the force. One joule is also equivalent to one watt-second.” You see, cars and engineering use the joule in multiple ways, so what happens when the data shifts and values are missed? This is all engineer and corrector based, and errors will get into the data. So what happens when lives are at stake? I am certain that this example goes a lot further than mere engineers. I reckon that similar settings exist in medical applications. And who will oversee these verifications?
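The quoted definition can be pinned down in a few lines; the kilowatt-hour comparison at the end is my own addition, showing how far a single unit slip shifts a value:

```python
# The quoted definition, in code: work (J) = force (N) x distance (m),
# and one joule is also one watt sustained for one second.
force_n = 1.0        # one newton
distance_m = 1.0     # one metre
work_j = force_n * distance_m   # 1.0 joule

power_w = 1.0        # one watt
time_s = 1.0         # one second
energy_j = power_w * time_s     # 1.0 joule, the same quantity

# Where a data shift hurts: the same energy expressed in kilowatt-hours
# sits a factor of 3.6 million away from its value in joules.
one_kwh_j = 1_000 * 3_600
print(work_j == energy_j, one_kwh_j)  # True 3600000
```

A field that silently swaps joules for kilowatt-hours does not produce a small error; it produces a value six orders of magnitude off, which is the kind of shift that matters when lives depend on the number.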

All good questions and I cannot give you an answer, because as I see it, there is no AI, merely NIP, and some tools are fine with Deeper Machine Learning, but certain people seem to believe the spin they created and that is where the corpses will show up, more often than not at the most inconvenient times.

But that might merely be me. Well, time for me to get a few hours of snore time. I have to assassinate someone tomorrow and I want it to look good for the script it serves. I am a stickler for precision in those cases. Have a great day.

Leave a comment

Filed under Finance, IT, Media, Science

The size of that

Something no woman has ever said to me, but that is for another day. You see, the story (at https://www.datacenterdynamics.com/en/news/saudi-arabias-ai-co-humain-looking-for-us-data-center-equity-partner-targets-66gw-by-2034-with-subsidized-electricity/) in DCD (Data Center Dynamics) gives us ‘Saudi Arabia’s AI co. Humain looking for US data center equity partner, targets 6.6GW by 2034 with subsidized electricity’ and they throw numbers at us. First there is the money “Plans $10bn venture fund to invest in AI companies”, which seems fair enough. But after that we get “The company said that it would buy 18,000 Nvidia GB300 chips with “several hundred thousand” more on the way, that it was partnering with AWS for a $5bn ‘AI Zone,’ signed a deal with AMD for 500MW of compute, and deployed Groq chips for inference.” I reckon Nvidia’s shares will split and split again. Then we get the $5 billion AI zone, and then the AMD deal for 500MW of compute, and deployed Groq chips for a conclusion reached on the basis of evidence and reasoning. Yes, that is quite the mouthful. After that we get a pause with “How much of Humain’s data center focus will be on Saudi-based facilities is unclear – its AMD deal mentions sites in the US.” As such, we need to see what this is all about and I am hesitant to mention conclusions for a field that I am not aware of. Yet the nagging feeling is in the back of my mind and it is jostling in an annoying way. You see, let’s employ somewhat simplistic math (I know it is not a correct way). Consider 18,000 systems each drawing 500 watts. That amounts to 9MW of power (speculatively), and that is just the starting 18,000; with the cooling, the storage and the “several hundred thousand” chips to follow, the full setting will be many times that. Now, I know my calculations are wildly off and we are given “At first, it plans to build a 50MW data center with 18,000 Nvidia GPUs for next year, increasing to 500MW in phases.
It also has 2.3 square miles of land in the Eastern Province, which could host ten 200MW data centers.” I am not attacking this, but when we take into consideration the energy requirements for processors, storage, cooling and maintaining the workflow, my head comes up short (it usually does) and the immediate thought is: where is this power coming from? As I see it, you will need a decently built nuclear reactor and that reactor needs to be started in about 8 hours for that timeline to be met. Feel free to doubt me, I already am. Yet the energy needed to fuel a 6.6GW data centre of any kind needs massive power support. And the need for Huawei to spice up the data cables somewhat. As I roughly see it, a center like that needs to plough through all the spam internet it gets on a near 10 second setting. That is all the spam it can muster in a year, per minute (totally inaccurate, but you get the point). The setting is that the world isn’t ready for this, and it is given to us all in a mere paragraph.
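Tidying up the back-of-envelope math: 18,000 systems at an assumed 500 watts each lands in megawatts, not gigawatts. The 500W-per-chip figure and the PUE overhead below are assumptions for illustration (the article gives no per-chip figure, and GB300-class systems draw more), but even generous assumptions square with the quoted 50MW first phase while sitting orders of magnitude below the 6.6GW target:

```python
# Rough power math, with assumed per-chip draw and facility overhead.
gpus = 18_000
watts_per_gpu = 500                              # assumption, not a spec
it_load_mw = gpus * watts_per_gpu / 1_000_000    # raw IT load in MW

pue = 1.3                                        # assumed cooling/power overhead
facility_mw = it_load_mw * pue

print(it_load_mw)             # 9.0 -- megawatts, not gigawatts
print(round(facility_mw, 1))  # 11.7 -- still well under the 50MW first phase
```

The gap between ~10MW for the first batch and the 6.6GW target is roughly a factor of six hundred, which is the scale of the power question the article never answers.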

Now, I do not doubt the intent of the setting and the Kingdom of Saudi Arabia is really sincere about getting to the ‘AI field’ as it is set, but at present the western setting is like what Builder.ai thought it would be: they overreached (as I see it) and fraudulently set the stations of what they believed AI was, and blew away a billion dollars in no time at all (and dragged Microsoft along with it) as Microsoft backed the venture. This gives me doubt (which I already had) on the AI field, as the AI field is more robust as I saw it (leaning on the learnings of Alan Turing), a lot more robust than DML (Deeper Machine Learning) and LLMs (Large Language Models), it really is. And for that I fear for the salespeople who tried to sell this concept, because “Alas, it didn’t work. We tried, but we aren’t ready yet” will be met with some swift justice in the halls of Saudi Arabia. Heads will roll in that instance and they had it coming, as I foresaw this a while before 2034. (It is 2025 now, and I am already on that page.)

Merely two years ago MIT Management gave us ‘Why neural net pioneer Geoffrey Hinton is sounding the alarm on AI’ and there we get the thing I have warned about for years: “In a widely discussed interview with The New York Times, Hinton said generative intelligence could spread misinformation and, eventually, threaten humanity.” I saw this coming a mile away (in 2020, I think). You see, these salespeople are so driven to their revenue slot that they forget about data verification, and data centers require an ACTUAL AI to trawl through the data verifying it all. This isn’t some ‘futuristic’ setting of what might be, it is a certainty that non-verified data breeds inaccuracies and we will get inaccuracy on inaccuracy, making things go from bad to worse. So what does that look like on a 6.6GW system? Well, for that we merely need to look back a few decades to when the term GIGO was coined. It is a mere setting of ‘Garbage In, Garbage Out’, no hidden snags, no hidden loopholes. A simple setting that selling garbage as data leaves us with garbage, nothing more. As such, as I saw it, I looked at the article and the throwing of large numbers and people thought “Oh yes, there is a job in there for me too” and I merely thought: what will fuel this? And beyond that, who can manage the oversight of the data and the verification process? Because with those systems in place, a simple act of sabotage by adding a random data set to the chain will have irreparable consequences in the data result.
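The ‘inaccuracy on inaccuracy’ point can be put in numbers. A minimal sketch, where the 0.99 per-pass accuracy is an arbitrary illustration, not a measured rate:

```python
# GIGO compounding: if each unverified processing pass keeps only a
# fraction of the records accurate, accuracy decays geometrically.
def accuracy_after(passes: int, per_pass: float = 0.99) -> float:
    return per_pass ** passes

for n in (1, 50, 200):
    print(n, round(accuracy_after(n), 3))
# 1 0.99
# 50 0.605
# 200 0.134
```

Even a 99%-accurate step, applied over and over without verification in between, leaves the data mostly garbage; that is the whole GIGO argument in one exponent.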

So, as the DCD set that, they pretty much end the setting with “By 2030, the company hopes to process seven percent of the globe’s training and inference workloads. For the facilities deployed in the kingdom, Riyadh will subsidize electricity prices.” And in this my thoughts are “Where is that energy coming from?” A simple setting which comes with (a largely speculative setting) the thought that such a reactor needs to be a Generation IV reactor, which doesn’t exist yet. And in this the World Nuclear Association in 2015 suggested that some might enter commercial operation before 2030 (exact date unknown), yet some years ago we were given that the active members were “Australia, Canada, China, the European Atomic Energy Community (Euratom), France, Japan, Russia, South Africa, South Korea, Switzerland, the United Kingdom and the United States”; there is no mention of the Kingdom of Saudi Arabia and I reckon all kinds of voices would be raised against the Kingdom of Saudi Arabia (as well as the UAE) being the first to have one of those. It is merely my speculative nature to voice this. I am not saying that the Economic Simplified Boiling Water Reactor (ESBWR), a passively safe Generation III+ design, could not do this, but the largest one is being built by Hitachi (a mere 4,500MW) and it is not built yet. The NRC granted design approval in September 2014, and that path started in 2011. It is 2025 now, so how long until the KSA gets its reactor? And perhaps that is not needed for my thoughts, but we see a lot of throwing of numbers, yet the DCD kept us completely in the dark on the power requirements. And as I see it the line “Riyadh will subsidize electricity prices” does not hold water, as the required energy settings are not given to us (perhaps not so sexy and it does make for a lousy telethon).

So I am personally left with questions. How about you? Have a great day and drink some irradiated tea. Makes you glow in the dark, which is good for visibility on the road and, consequently, traffic safety.

Leave a comment

Filed under Finance, IT, Media, Politics

A swing and a miss

It is no secret that I hold the ‘possessors’ of AI at a distance. AI doesn’t exist (not yet at least) and now I got ‘informed’ through Twitter (still refusing to call it X) the following:

So after ‘Microsoft-backed Builder.ai collapsed after finding potentially bogus sales’ we get that the company is entering insolvency proceedings. Yet a mere three days ago TechCrunch gave us “Once worth over $1B, Microsoft-backed Builder.ai is running out of money”, so as such, with a giggle on my mind, I give you “Can’t have been a very good AI, can it?” So from +$1,000,000,000 to zilch (aka insolvency), how long did that take and where did the money go? So consider this, TechCrunch also gives us “The Microsoft-backed unicorn, which has raised more than $450 million in funding, rose to prominence for its AI-based platform that aimed to simplify the process of building apps and websites. According to the spokesperson, Builder.ai, also known as Engineer.ai Corporation, is appointing an administrator to “manage the company’s affairs.”” Now, I am going out on a limb here. Consider that a billion will enable 1,000 programmers to work a year for a million dollars each. So where did the money go? I know that this doesn’t make sense (the 1,000 programmers), but consider that if they accepted a deal for $200,000 each, there would be five years of designing and programming. Does that make sense? The website Builder.ai (my assumption that this is where they went) gives us merely one line “For customer enquiries, please contact customers@builder.ai. For capacity partner enquiries, please contact capacitynetwork@builder.ai.” This is not good as I see it. The Register (at https://www.theregister.com/2025/05/21/builderai_insolvency/) gives us “The collapse of Builder.ai has cast fresh light on AI coding practices, despite the software company blaming its fall from grace on poor historical decision-making. Backed by Microsoft, Qatar’s sovereign wealth fund, and a host of venture capitalists, Britain-based Builder.ai rose rapidly to near-unicorn status as the startup’s valuation approached $1 billion (£740 million).
The London company’s business model was to leverage AI tools to allow customers to design and create applications, although the Builder.ai team actually built the apps.”
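The money math above can be sanity-checked. The $450 million figure is from the TechCrunch quote, the 700 engineers from the later reporting mentioned earlier on this page, and the $200,000 all-in annual cost per engineer is my assumption:

```python
# Runway arithmetic: how long does the raised money last at that burn?
raised = 450_000_000          # dollars, per TechCrunch
engineers = 700               # per later reporting on Builder.ai
cost_each = 200_000           # assumed all-in annual cost per engineer

annual_burn = engineers * cost_each       # 140,000,000 per year
runway_years = raised / annual_burn

print(annual_burn, round(runway_years, 1))  # 140000000 3.2
```

Roughly three years of runway on engineering costs alone, before offices, sales, cloud bills and the rest, which is exactly why the question “where did the money go?” deserves an itemised answer.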

As such the headline of the Register is pretty much spot on: “Builder.ai coded itself into a corner – now it’s bankrupt” You see, coding yourself into a corner is not AI, it is people. People code, and when you code yourself into a corner the gig is quite literally up. And I can go on all day, as there is no AI. There is Deeper Machine Learning and there are LLMs (Large Language Models), and the combination can be awesome; it is part of an actual AI, but it is not AI. As such, with Microsoft believing its own spin (yet again), we can conclude that there is now a setting where Qatar’s sovereign wealth fund and a host of venture capitalists have pretty much lost their faith in Microsoft, and that will have repercussions. It is basically that simple. The first part of resolving this is to acknowledge that there is no AI; there is a clear setting that the power of DML and LLM should not be dismissed, as it is really powerful, but it is not AI.

As I personally see it, the LLM is setting a stage that the chess computers had in the late 80’s and early 90’s. They basically had every chess game ever played in their memory and that is how the chess computer could foresee whatever was thrown against it. And until 2002, when Chessmaster 9000 was released by Ubisoft, that was what it was and for that time it was awesome. I would never have been able to get as far as I did in chess without that program, and I am speculatively seeing that unfold again. A setting holding a billion parameters? So I might be wrong on this part, but that is what I see, and we need to realise that the entire AI setting is spin from greedy salespeople that cannot explain what they are selling (thank god I am not a salesperson). I am technical support and I am customer care, and what we see as ‘the hand of a clever person’ is not that, not even close.

So as we are also given “Blue-chip investors poured in cash to the tune of more than $500 million. However, all was not well at the startup. The company was previously known as Engineer.ai, and attracted criticism after The Wall Street Journal revealed in 2019 that the startup used human engineers rather than AI for most of its coding work”, as such (again speculation) a simple trick to replay a mere 1,800 days later. And this is what a lot are (plenty of them in a more clever way), but the show is now on Microsoft. They backed this, so when they come with a “we were lured” or “it is more complex and the concept was looking really good” we should ask them a few hard questions. So whilst we are given “While the failure of startups, even one as high profile as Builder.ai, is not uncommon, the company’s reliance on AI tools to speed coding might give some users pause for thought.” And when we consider “might give some users pause for thought”, it is a rather nasty setting, as I was there already years ago. So where were the others? As such we should grill Satya Nadella on “Last month, Microsoft CEO Satya Nadella boasted that 30 percent of the code in some of the tech giant’s repositories was written by AI. As such, an observer cannot help but suspect some passive aggression is occurring here, where a developer has been told that the agent must be used, and so they are going to jolly well do it. After all, Nadella is not one to shy from layoffs.” As such I wonder when the stakeholders of Microsoft will consider that the ‘USE BY’ date of Satya Nadella was only good until December 2024. But that is me merely speculating. So I wonder when the media, and the actual clever people in media, will consider that this is a game that can only be postponed and not won. So will the others run when the going gets tough, or will they hide behind “but everyone agrees on this”? As such the individual bond will triumph and there is a lot of work out there.
The need to explain to people (read: customers) is that there is a lot of good to be found in the DML and LLM combination. It remains a niche market and it will fill the markets when people cannot afford AI, because that setting will be expensive (when it is ready). These computers will be the things that IBM can afford, as can the larger players like an airline, Ford, LVMH (Louis Vuitton Moët Hennessy) and a few others. But the first 10 years it will remain out of the hands of some, unless they time share (pay per processor second) with anyone who has the option to afford one. That computer will need to work 80%+ of the time to be affordable. 
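That 80% figure can be given a rough shape. A sketch with entirely assumed numbers (machine cost, amortisation window), showing why idle time is what makes pay-per-processor-second expensive:

```python
# Break-even feel for "pay per processor second": the machine's cost
# must be recovered over the seconds it is actually billed.
capex = 10_000_000                  # assumed machine cost, dollars
lifetime_s = 5 * 365 * 24 * 3600    # assumed five-year amortisation window

def price_per_second(utilisation: float) -> float:
    # Cost that must be recovered per billable second at a given load.
    return capex / (lifetime_s * utilisation)

ratio = price_per_second(0.8) / price_per_second(1.0)
print(round(ratio, 2))  # 1.25 -- at 80% busy, every second costs 25% more
```

The relationship is simply 1/utilisation: at 50% utilisation every billable second costs double, which is why a machine of that class has to stay busy most of the time to be affordable at all.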

As such we will see a total amount of spin in the coming months, because Microsoft backed the wrong end of that equation and now the fires are coming to their feet. Less than an hour ago we were given ‘Microsoft Unveils AI Features for Windows 11 Tools’. I have no idea how they can fit this in, but I reckon that the media will avoid asking the questions that matter. As such we will have to await the unfolding of the people behind Builder.ai. I wonder if anyone will ask for the specification of what happened to said billion dollars? Can we get a clear list please, and where did the hardware end up? Or was a mere server rack leased from Microsoft? This is just me having fun at present.

So have a great day and I will sleep like a baby knowing that Microsoft swung and missed the ball by a fair bit. I reckon that this is…. Let’s see, there was the tablet, which they lost against Apple and now Huawei as well. There was the gaming station, which was totally inferior to Sony’s. There was Azure (OK, it didn’t fail, but a book vendor called Amazon has a much better product). There was the browser, which is nowhere near as good as Google’s. And there are a few others, but they slipped my mind. So this is at least number 5, 6 if you count Huawei as a player as well. Not really that good for a company that is valued at $3.34 trillion. So how many failures will we witness until that is gone too?

Have fun out there today.

Leave a comment

Filed under Finance, IT, Media, Science

The losing bet

That happens, we make bets. We all do in one way or another. Some merely hurt our pride and/or our ego. Some deals hurt others and there are other settings, too many to mention. But Reuters alerted me three hours ago to a deal that will have a lot of repercussions. The article ‘US clears export of advanced AI chips to UAE under Microsoft deal, Axios says’ (at https://www.reuters.com/technology/artificial-intelligence/advanced-ai-chips-cleared-export-uae-under-microsoft-deal-axios-reports-2024-12-07/) is one that has a few more repercussions than you imagined. The global loser (Microsoft) has set up a setting where we see “The U.S. government has approved the export of advanced artificial intelligence chips to a Microsoft-operated facility in the United Arab Emirates as part of the company’s highly-scrutinised partnership with Emirati AI firm G42, Axios reported on Saturday, citing two people familiar with the deal.” Microsoft is as desperate as I think they are with this deal. They probably pushed the anti-China agenda and made mention of the $1.5 billion investment deal. And as we are given “The deal, however, was scrutinised after U.S. lawmakers raised concerns G42 could transfer powerful U.S. AI technology to China. They asked for a U.S. assessment of G42’s ties to the Chinese Communist Party, military and government before the Microsoft deal advances.” And we are also given “The approved export license requires Microsoft to prevent access to its facility in the UAE by personnel who are from nations under U.S. arms embargoes or who are on the U.S. Bureau of Industry and Security’s Entity List, the Axios report said.” In this I have a few issues.

In the first, there is no AI, not yet anyway; as such the investment is going the way of water under the bridge. Microsoft knows this, as such they are betting big and they have the US government backing them. In the worst case it will be the US government putting up the $1.5 billion themselves, and with the anti-China sentiment that is a likely result of this.

In the second, the setting that Microsoft is banking on is a loop setting with multiple exits. Yesterday the Financial Times informed us ‘OpenAI seeks to unlock investment by ditching ‘AGI’ clause with Microsoft’ (at https://www.ft.com/content/2c14b89c-f363-4c2a-9dfc-13023b6bce65); the events are piling up and as I see it Microsoft is on the edge of desperation. You see, it all hangs on the simplest setting: that there is no AI (not yet at least). What we have is a setting with LLMs and Deeper Machine Learning, and it is clever and it is an ‘optional’ wholesome solution to a lot of paths. But it is no Artificial Intelligence. You see, as all the laws are part of ethics, ‘AI’ people look around and think that there is ‘awareness’ of solutions. There is not. It is all data managed, a somewhat clever solution for people seeking an aware-like solution in data and some kind of knowledge discovery mode. It all could be clever, but it is still no AI, and at some point certain people will dig it out and I reckon the UAE will be ahead of it all. Microsoft and its Ferengi approach of ‘When you get their money you never give it back’ comes with nice loopholes. You think that Microsoft made the ‘investment’? Now here is the cracker. There is nothing stopping Microsoft from putting it in a ‘bad bank’ approach and making it all tax deductible and then some. And when the “artificial general intelligence” (AGI) clause is dropped there will be all kinds of attention from all over the place, and no one is looking at the details of what they consider AI versus what Alan Turing clearly considered to be AI. When the people that matter start looking and digging, the days of Microsoft will be numbered. Another bubble game created, and now that they have ‘enticed’ the wrong kind of people, those people will want their pound of dollars. And as we are given “The Biden administration in October required the makers of the largest AI systems to share details about them with the U.S. government.
G42 earlier this year said it was actively working with U.S. partners and the UAE’s government to comply with AI development and deployment standards, amid concerns about its ties to China.” And in that setting Microsoft decided to be the governmental bitch, to say the least. And all these media moguls are so loosely playing along; what will happen when someone digs into this? They will play dumb and say “We didn’t comprehend the technology”, and it wasn’t hard. I saw it months ago, if not nearly two years ago. And was the media stupid? No, the media goes the way of the digital dollar, the way of the emotional flame. So as the field opens, we see all kinds of turmoil with Microsoft claiming to be the ‘saviour’, all nice and kind (of a sort), but when you look at the setting, it is my personal speculated feeling that Microsoft wouldn’t have made this move unless they had very few moves left. And in this setting one player is forgotten: China. How far along are their ‘designs’? And in all this, what are their plans? We seem to be given the setting that it is all American, but as the media cannot be trusted, what is the ACTUAL setting? I have no clue, but in a world this interactive, China cannot be far away.

And if there are people who disagree, that is fair, but the actual setting is largely unknown. So when we get to the last paragraph, which gives us “Abu Dhabi sovereign wealth fund Mubadala Investment Company, the UAE’s ruling family and U.S. private equity firm Silver Lake hold stakes in G42. The company’s chairman, Sheikh Tahnoon bin Zayed Al Nahyan, is the UAE’s national security advisor and the brother of the UAE’s president.” Consider this small fact. Microsoft seems to be ‘investing’ all whilst the anti-China rhetoric is given. Do you think that anyone who is the National Security Advisor (of the UAE) hasn’t seen through a lot of this? So what was the plan from Microsoft? I am at a loss, but with the AI setting the way it actually is, none of this makes sense. Do they really believe that Microsoft is any kind of solution in this setting? Simply consider that Microsoft has also been criticised for the perceived declining quality and reliability of its software. That is your partner in so-called AI? Just a thought to consider.

Well, you all have a lovely Sunday. My Monday is a mere 80 minutes away.

1 Comment

Filed under Finance, IT, Media, Science