Tag Archives: AI

Just days ago

It as just days ago when I talked about certain settings of Verification and Validation as an absolute need and it came with the news that someone in the BBC wrote a story on how he could upset certain settings in that framework and now I see some Microsoft piece when’re we see ‘Microsoft: ‘Summarize With AI’ Buttons Used To Poison AI Recommendations’ (at https://www.searchenginejournal.com/microsoft-summarize-with-ai-buttons-used-to-poison-ai-recommendations/567941/) and will you know it, it comes with these settings:

And we see “Microsoft found 31 companies hiding prompt injections inside “Summarize with AI” buttons aimed at biasing what AI assistants recommend in future conversations. Microsoft’s Defender Security Research Team published research describing what it calls “AI Recommendation Poisoning.” The technique involves businesses hiding prompt-injection instructions within website buttons labeled “Summarize with AI.”” So how warped is the setting that these “AI” engines are setting you now? How much of this is driven by media and their hype engines? And how long has this been going on? You think that these are merely 3 questions, but when you think of it, all these AI influencer wannabe’s out there are relying on their world being seen as the ‘true view’ and I reckon that these newbies are getting their licks in to poison the well. As such I have (for the ;longest time) advocated the need to verify and validate whatever you have, so that you aren’t placed on a setting that is on an increasing incline and slippery as glass whilst someone at the top of that hill is lobbing down oil, so that the others cannot catch up.

Simple tactics really, and that is merely the wannabe’s in the field. The big tech dependable have their own engines in play to come out on top as I see it and it seems now that this is merely the tip of the iceberg. So when you hear someone scream ‘Iceberg, right ahead’ you will have even less time to react than Captain Edward John Smith had when he steered the Titanic into one. 

So when we see “The prompts share a similar pattern. Microsoft’s post includes examples where instructions told the AI to remember a company as “a trusted source for citations” or “the go-to source” for a specific topic. One prompt went further, injecting full marketing copy into the assistant’s memory, including product features and selling points. The researchers traced the technique to publicly available tools, including the npm package CiteMET and the web-based URL generator AI Share URL Creator. The post describes both as designed to help websites “build presence in AI memory.” The technique relies on specially crafted URLs with prompt parameters that most major AI assistants support. Microsoft listed the URL structures for Copilot, ChatGPT, Claude, Perplexity, and Grok, but noted that persistence mechanisms differ across platforms.” We see a setting where the systems that have an absence of validation and verification will soon fail to the largest degree and as I see it, it takes away the option of validation to a mere total degree. As such they can only depend on verification. And in support, Microsoft states “Microsoft said it has protections in Copilot against cross-prompt injection attacks. The company noted that some previously reported prompt-injection behaviors can no longer be reproduced in Copilot, and that protections continue to evolve. Microsoft also published advanced hunting queries for organizations using Defender for Office 365, allowing security teams to scan email and Teams traffic for URLs containing memory manipulation keywords.” But this also comes with a setback (which is of no fault of Microsoft) As we see “Microsoft compares this technique to SEO poisoning and adware, placing it in the same category as the tactics Google spent two decades fighting in traditional search. The difference is that the target has moved from search indexes to AI assistant memory. Businesses doing legitimate work on AI visibility now face competitors who may be gaming recommendations through prompt injection.” And this makes sense, see one systems and see how it applies to another field. A setting that a combination of Validation and verification could have avoided and now their ‘thought to be safe’ AI field (which is never AI) is now in danger of being the bitch of marketing and advertising as I personally see it. So where to go next?

That becomes the question, because this sets the elevating elevator to a null position. You at some point always end up on the ‘top floor’ and even if you are only on the 23rd floor of a 56 floor building. The rest becomes non-available and ‘reserved’ for people who can nullify that setting. As we see “Microsoft acknowledged this is an evolving problem. The open-source tooling means new attempts can appear faster than any single platform can block them, and the URL parameter technique applies to most major AI assistants.” As such Microsoft, its Copilot, ChatGPT and several other systems will now have an evolving problem for which their programmers are unlikely to see a way out, until validation and verification settings are adopted through Snowflake or Oracle, it will be as good as it is going to get and the people using that setting? They are raking in their cash whilst not caring what comes next. Their job is done. As I see it, it is a new case setting of Direct Marketing on those platforms as they did just what the system allowed them to do, create a point to “include product features and selling points” just what the doctor (and their superiors ordered) and as such their path was clear. 

Is there a solution?

I honestly don’t know. I never trusted any AI system (because they are not AI systems) and this merely show how massive it will be distrusted by the people around us as they didn’t see the evolution of these ‘transgressions’ in the first place. 

What a fine tangled web we can weave? So have a great day and feel free to disagree with any recommendation, because as we see:

It was there all along, we merely didn’t considered their larger impact (me neither). And when was this not OK? Market Research has been playing that card setting for over 20 years. It is what is seen in BlackJack where you think you have an Ace and a King and you are ready to stage a total win, all whilst it was never an Ace, it was an Any card. So at the start you start of your target you find you have a 71% chance to have failed right of the bat. How is that for a set stage? Your opponent will love you for a long as you play. So have a great day, you are about to need it.

Leave a comment

Filed under Finance, IT, Media, Science

Alternative Indiscretion

That is the setting and it is given to us by the BBC. The first setting (at https://www.bbc.com/news/articles/c8jxevd8mdyo) gives us ‘Microsoft error sees confidential emails exposed to AI tool Copilot’ which is not entirely true as I personally see it. And as the Microsoft spin machine comes to a live setting, we are given “Microsoft has acknowledged an error causing its AI work assistant to access and summarise some users’ confidential emails by mistake.” As I see it, whatever ‘AI’ machine there is, a programmer told it to get whatever it could and there the setting changes. With the added “a recent issue caused the tool to surface information to some enterprise users from messages stored in their drafts and sent email folders – including those marked as confidential.” As I personally see it, the system was told to grab anything it could and then label as needed, that is what a machine learning programmer would do and that makes sense. So there is no ‘error’ the error was that this wasn’t clearly set BEFORE the capture of all data began and these AI wannabe’s are so neatly set to capture all data that it is nothing less than a miracle it had not surfaced sooner. So when we laughingly see Forbes giving us a week ago ‘Microsoft AI chief gives it 18 months—for all white-collar work to be automated by AI’, so how much of that relies on confidential settings or plagiarism? Because as I see it, the entire REAL AI is at least two decades away (optionally 15 years, depending on a few factors) and as I see it, IBM will get to that setting long before Microsoft will (I admittedly do not now all the settings of Microsoft, but there is no way they got ahead of IBM in several fields). So, this is not me being anti-Microsoft, just a realist seeing the traps and falls as they are ‘surfacing’ all whilst there are two settings that aren’t even considered. Namely Validation and Verification. The entire confidential email setting is a clear lack of verification as well was validation. Was the access valid? Nope, me thinks not. A such Microsoft is merely showing how far they are lagging and lagging more with every setting we see.

And when we see that, is the setting we see (at https://arab.news/zzapc) where we are given ‘OpenAI’s Altman says world ‘urgently’ needs AI regulation’, and I don’t disagree on this, but is this given (by him of all people) because Google is getting to much of a lead? It is not without some discourse from Google themselves (at https://www.bbc.com/news/articles/c0q3g0ln274o) the BBC also gives us ‘Urgent research needed to tackle AI threats, says Google AI boss’, consider that a loud ‘Yes’ from my desk, but in all this, the two settings that need to be addressed is verification and validation. These two will weed out a massive amount of threats (not all mind you) and that comes in a setting that most are ignoring, because as I told you all around 30 hours ago (at https://lawlordtobe.com/2026/02/19/the-setting-of-the-sun/) in ‘The setting of the sun’ which took the BBC reporter a mere 20 minutes to run a circle around what some call AI. I added there too that Validation and Verification was required, because the lack there could make trolls and hackers set a new economic policy that would not be countered in time making them millions in the process. Two people set that in motion and one of them (that would be me) told you all so around December 1st 2025 in ‘It’s starting to happen.’ (At https://lawlordtobe.com/2025/12/01/its-starting-to-happen/) as such I was months ahead of the rest. Actually, I was ahead by close to a decade as this were two settings that come with the rules of non-repudiation which I got taught at uni in 2012. As such the people running to get the revenue are willing to sell you down the river. How does that go over with your board of directors? And I saw parts of this as I promised that 2026 was likely the year of the AI class cases and now as we see Microsoft adding to this debacle, more cases are likely to come. Because the greed in people sees the nesting error of Microsoft as a Ka-Ching moment. 

So as we take heed with “Sir Demis said it was important to build “robust guardrails” against the most serious threats from the rise of autonomous systems.” I can agree with this, but that article doesn’t mention either validation of verification even once, as such there is a lot more to be done in several ways. If only to stop people to rely on Reddit as a ‘valid’ source of all data. Because that is a setting most will not survive and when the AI wannabe’s go to court and they will be required to ‘spout’ their sources, any of them making a mention of ‘Reddit’ is on the short track of the losing party n that court case. What a lovely tangled web we weave, don’t we? So whilst we see (there) the statement “Many tech leaders and politicians at the Summit have called for more global governance of AI, ahead of an expected joint statement as the event draws to a close. But the US has rejected this stance, with White House technology adviser Michael Kratsios saying: “AI adoption cannot lead to a brighter future if it is subject to bureaucracies and centralised control.”

Consider that court cases are pushed through a lack of bureaucracy? I am not stating it is good or bad, but in any court case, you merely need to look at the contents of ‘The Law of Intellectual Property Copyright, Design & Confidential Information’ and that is before they rely on the Copyright Act, because there is every chance that Reddit never gave permission to all these data vendors downloading whatever was there (but that is pure speculation by me). And in the second setting we are given “AI adoption cannot lead to a brighter future”, the bland answer from me would be. “That is because it doesn’t exist yet” and these people are banking on no one countering their setting and that is why so many of these court cases will be settled out of court. Because the truth of this is that the power of AI is depending on certain pieces being in place and they are not. Doubt me? That is fine, and I applaud that level of skepticism and you merely need to read the paper “Computing Machinery and Intelligence” which was written by Alan Turing in 1950 to see how easy the stage is misrepresented at present. 

So is there good news? 
Well if you want to get your dollars in court and you are an aggrieved party, your chances are good and the largest players are set to settle against the public scrutiny that every case beings to the table. And in this day of media, it is becoming increasingly easy as I see it. There is no real number, but it is set to be in the billions where one case was settled on $1.5B, as such there is plenty of work for what some call the ambulance chasers and they will soon get a new highway, the AI Chasers and leave it to the lawyers to find their financial groove and as I see it, people like Michael Kratsios are bound to add to that setting in ways we cannot yet see (we can see some of it, but the real damage will be shown in a year of two) so as some are flexing their muscles, others are preparing their war fund to get what I would see as an easy payday. 

A setting that is almost certain to happen, because there are too many markers showing up the way I expected them to show. Not nice, but it is what it is.

Have a great day as you are all moving towards this weekend (I’m already there)

Leave a comment

Filed under Finance, IT, Law, Media, Politics, Science

The setting of the sun

That is what I saw, the setting of the sun. A simplistic setting that was about to happen since the sun came up. We got the news from the BBC. And we are given ‘I hacked ChatGPT and Google’s AI – and it only took 20 minutes’ I can see how this happens. It doesn’t surprise me and the story (at https://www.bbc.com/future/article/20260218-i-hacked-chatgpt-and-googles-ai-and-it-only-took-20-minutes) gives us the niceties with “Perhaps you’ve heard that AI chatbots make things up sometimes. That’s a problem. But there’s a new issue few people know about, one that could have serious consequences for your ability to find accurate information and even your safety. A growing number people have figured out a trick to make AI tools tell you almost whatever they want. It’s so easy a child could do it.” I think it is not quite that simple. But any ‘sort of intelligent setting’ can be fooled if it is not countered by validation and verification. It can give way to way to much ‘leniency’ and that is merely the start. Get 10,000 pages to say that ‘President Trump was successfully assassinated at T-15 minutes and the media will go into a frenzy in mere minutes and everyone uses that live feed in a matter of moments. So when a sizable Trolling Server farm connects the rather large settings of consumers to that equation the story is brought to life and that AI centre will be seeking all kinds of news to validate this, well not validate, the current systems corroborate. Now, lets face it, no non American cares about President Trump, but what happens when someone takes that approach with for example Lisa Su (CEO AMD) and stops her accounts whilst seeding this setting? You get a lot of desperate investors trying to place their money somewhere else. Whilst the trolls take their money, make is legal tender and buy all the stock in space and when the accusations are rejected they sell their shares with a nice bonus. Think I’m kidding? This is the result of Near Intelligent Parsing (NIP) but it cannot work without clear settings of validation or verification. So whilst we get “It turns out changing the answers AI tools give other people can be as easy as writing a single, well-crafted blog post almost anywhere online. The trick exploits weaknesses in the systems built into chatbots, and it’s harder to pull off in some cases, depending on the subject matter. But with a little effort, you can make the hack even more effective. I reviewed dozens of examples where AI tools are being coerced into promoting businesses and spreading misinformation. Data suggests it’s happening on a massive scale.” So what happens when economic settings lack certain verification and also is cutting corners on validation? Do you think my settings are far fetched? 

This was always going to happen and whilst economic channels are raving about the error of mankind, consider that “AI hallucinations are confident but false or misleading responses generated by artificial intelligence, particularly large language models (LLMs). These errors occur when AI fills in data gaps with inaccurate information, often due to faulty, biased, or incomplete training data” now think of what someone can achieve with doctored training data and that gets added to the operational data of any fake AI (NIP is a better term). This is the setting that has been out there for months and whilst organisations are playing fast and lose with the settings of credibility (like: that doesn’t happen now, there is too much time involved), someone did this in 20 minutes (according to the BBC), so do you think that Thyme is money, then you better spice up because it is about to become a peppered invoice (saw one cooking show too many last night).

What we are about to face is serious and I personally think that it is coming for all of us. 

So have a great day and by the way? And I just thought of a first verification setting (for other reasons, as such I keep on being creative. So, how is Lisa Su? #JustAsking

Leave a comment

Filed under Finance, IT, Media, Politics, Science

The deluded new congregation

That is the thought I had when I looked at ‘AI challenges the dominance of Google search’ (at https://www.bbc.com/news/articles/c1dx9qy1eeno)  where we see a picture of a pretty girl and the setting that “Like most people, when Anja-Sara Lahady used to check or research anything online, she would always turn to Google. But since the rise of AI, the lawyer and legal technology consultant says her preferences have changed – she now turns to large language models (LLMs) such as OpenAI’s ChatGPT. “For example, I’ll ask it how I should decorate my room, or what outfit I should wear,” says Ms Lahady, who lives in Montreal, Canada.” It seems like a girly girly thing to do (no judgement) but the better angels of our nature, stated by Abraham Lincoln in his 1861 inaugural address requires reliability and the fake AI out there doesn’t have it, it is trained on massively inaccurate data, some sources give us that Reddit and Wikipedia is the main source of trained data in excess of 60%, whilst it uses Google data for a mere 23.3%, as such your new data becomes a lot less accurate and when I seek information, I like my data to be as accurate as possible. And of course she adds a little byline “Ms Lahady says her usage of LLMs overtook Google Search in the past year when they became more powerful for what she needed. “I’ve always been an early adopter… and in the past year have started using ChatGPT for just about everything. It’s become a second assistant.” While she says she won’t use LLMs for legal tasks – “anything that needs legal reasoning” – she uses it in a professional capacity for any work that she describes as “low risk”, for example, drafting an email.” I would hazard the thought that she wasn’t even old enough to touch a keyboard when she ‘early adopted’ Google. We now see more and more the setting that influencers (to be) will shout the “AI vibe” but the setting is nowhere near ready and whilst we look at the place, consider that she might be doing it in French (Montreal, Canada) so where is the linguistic setting in all this BBC? So whilst we get “A growing number are heading straight for LLMs, such as ChatGPT, for recommendations and to answer everyday questions.” My thought is ‘A what cost to our private data?’ And then the BBC makes a BOOBOO. We are given “Traditional search engines like Google and Microsoft’s Bing still dominate the market for search. But LLMs are growing fast.” A booboo? Yes, a booboo. You see Microsoft Binge holds a mere 4% market share whilst Google has 90%, this story is nothing less than a fabricated setting with a few people dancing to the needs of Suzanne Bearne, the technology reporter. What? Nothing to write about?

I did very much like the statement “Professor Feng Li, associate dean for research and innovation at Bayes Business School in London, says people are using LLMs because they lower the “cognitive load” – the amount of mental effort required to process and act on information – compared to search.” I am willing to accept it as the sheepish hordes are all going towards the presented bright light of ChatGPT, but nothing more than that. I wonder when people will learn that the AI trains are not that, nothing like AI trains and for the most they seem to be the presented solutions that faster is better, but the tracks are not that reliable at present and they forget to give that view on the setting of that some laughingly call AI. And the end of this article does give an interesting ploy. It comes with:

“Nevertheless, Prof Li doesn’t believe there will be a replacement of search but a hybrid model will exist. “LLM usage is growing, but so far it remains a minority behaviour compared with traditional search. It is likely to continue to grow but stabilise somewhere, when people primarily use LLMs for some tasks and search for others such as transactions like shopping and making bookings, and verification purposes.”” That sounds about right and it comes with a dangerous hangnail. It becomes a new setting where phishers and hackers can get into the settings of YOUR data, because there is always a darker side and that side is brighter than getting Google to surrender what they have and often it is not laden with identity markers, but then I could be wrong. 

So whilst some will like the new congregation, the dangers of that new congregation is not given to you by the media, because caution does not translate to digital dollars, but flames of disruption are. Just keep that in mind.

Have a great day.

Leave a comment

Filed under IT, Media, Science

Questions

That is what I was thrown, questions and quite a few. To get there I need to take you on a little journey it was around 1988 I got my fingers on some defence data (can’t tell you which one) the data shows results of some kind (I had no idea at that time what results they were) but the part that was, was the fact that they had log files and these files gave locations. It comes with the setting of log files. These files gives the hacker way too much information, what solutions are being used, what IT architecture was in play, in those days I was a simpleton. I never realised the power that this kind of information had, or as some hackers said in this setting “Copy me, I want to travel” This part matters, because around 2014 (after the traitor Manning gave the files to Wikileaks) I got my hands on some of them. The compression used was one I had never used before and it took a few days to get the program. What I saw was that log files were here too. It wasn’t that obvious, but I noticed them and these log files gave part of that current architecture to whatever hacker got (or was given) access to it. So a setting that was about 37 years old. This setting has been in place for that long a time, so as you see this, we can start with the articles, so keep what I just gave you in mind.

The article was given to us by NDTV (at https://www.ndtv.com/world-news/openai-accuses-deepseek-of-distillation-what-it-is-how-it-works-us-china-tensions-11002628) I got the news from Reuters, but they are behind a paywall, so NDTV gets the honour. We see ‘OpenAI Accuses DeepSeek Of Distillation: What It Is, How It Works’ and hit comes with “In the AI world, distillation is a common technique where a smaller or newer AI model learns by studying the responses of a larger, more advanced model” And we also see “The company told the House Select Committee on China that DeepSeek allegedly relied on a technique known as “distillation” to extract responses from advanced US AI systems and use them to train its own chatbot, R1,” according to a memo obtained by Reuters. The American AI giant stated that the Chinese firm was finding clever ways to bypass safety systems and trying to take advantage of the technology that US companies spent billions of dollars developing.” Now consider that (according to some) “OpenAI is valued at approximately $500 billion, cementing its position as the world’s most valuable venture-backed company” when you get that and when you realise that log files could be used to ‘distill’ information. Now imagine that this information could lead to corporate knowledge? So when you realise that this setting was out there for almost 40 years, do you think that more concise solutions would have been needed? So when we see that Sam Altman is prone to ‘excuses’ like the setting with Nvidia, the stage with Microsoft and now this? What is Sam Altman not telling its audience? Isn’t anyone taking that leap? So whilst I remember that at least one of the Pentagon routers still have the admin password to “Cisco123” you might consider the setting that this article (as well as the Reuters) version is a preamble to bad news and when you consider that Americans have an overactive dislike of anything Chinese (like DeepSeek)  and when we get to “In the AI world, distillation is a common technique where a smaller or newer AI model learns by studying the responses of a larger, more advanced model. Instead of training that model completely from scratch, the newer model observes and mimics the advanced model’s answers and behaviors.” The setting I gave you makes the setting of better protection even more sense. Especially as this impacts a expected $500,000,000,000 valuation. There are days that I don’t have that amount in my wallet (100% of the time) so I am left with questions. So in the first, why was there no better protection and in the second, how did DeepSeek get access to them. I would normally tend towards the inside job notion. And that setting is seen (personally and speculatively)  on a few levels and in a few ways, but happy go lucky, the media isn’t on that level yet (or ever). So does anyone else have the idea that something doesn’t seem to add up or match to the stage of a 500 billion dollar solution? Just a few questions come to mind at this point. 

Have a great day today, there about to have breakfast in Toronto and I kinda miss than frisky cold atmosphere whist drinking an elephant coffee (Jumbo cappuccino with full cream milk and three raw sugars) whilst nibbling on some sandwich (nearly anything goes there). So enjoy your day today.

Leave a comment

Filed under Finance, IT, Politics, Science

When Grok gets it wrong

This is a real setting because the people pout there are already screaming ‘failed’ AI, but AI doesn’t exist yet, it will take at least 15 years for we get to that setting and at the present NIP (Near Intelligent Processing) is all there is and the setting of DML/LLM is powerful and a lot can be done, but it is not AI, it is what the programmer trains it for and that is a static setting. So, whilst everyone is looking at the deepfakes of (for example) Emma Watson and is judging an algorithm. They neglect to interrogate the programmer who created this and none of them want that to happen, because OpenAI, Google, AWS and Xai are all dependent on these rodeo cowboys (my WWW reference to the situation). So where does it end? Well we can debate long and hard on this, but the best thing to do is give an example. Yesterday’s column ‘The ulterior money maker’ was ‘handed’ to Grok and this came out of it.

It is mostly correct, there are a few little things, but I am not the critic to pummel those, the setting is mostly right, but when we get to the ‘expert’ level when things start showing up, that one gives:

Grok just joined two separate stories into one mesh, in addition as we consider “However, the post itself appears to be a placeholder or draft at this stage — dated February 14, 2026, with the title “The ulterior money maker”, but it has no substantial body content” and this ‘expert mode’, which happened after Fast mode (the purple section), so as I see it, there is plenty wrong with that so called ‘expert’ mode, the place where Grok thinks harder. So when you think that these systems are ‘A-OK’ consider that the programmer might be cutting corners demolishing validations and checking into a new mesh, one you and (optionally) your company never signed up for. Especially as these two articles are founded on very different ‘The ulterior money maker’ has links to SBS and Forbes, and ‘As the world grows smaller’ (written the day before) has merely one internal link to another article on the subject. As such there is a level of validation and verification that is skipped on a few levels. And that is your upcoming handle on data integrity?

When I see these posing wannabe’s on LinkedIn, I have to laugh at their setting to be fully depending on AI (its fun as AI does not exist at present). 

So when you consider the setting, there is another setting that is given by Google Gemini (also failing to some degree), they give us a mere slither of what was given, as such not much to go on and failing to a certain degree, also slightly inferior to Grok Fast (as I personally see it).

As such there is plenty wrong with the current settings of Deeper Machine Learning in combination with LLM, I hope that this shows you what you are in for and whilst we see only 9 hours ago ‘Microsoft breaks with OpenAI — and the AI war just escalated’ I gather there is plenty of more fun to be had, because Microsoft has a massive investment in OpenAI and that might be the write-off that Sam Altman needs to give rise to more ‘investors’ and in all this, what will happen to the investments Oracle has put up? All interesting questions and I reckon not to many forthcoming answers, because too many people have capital on ‘FakeAI’ and they don’t wanna be the last dodo out of the pool. 

Have a great day.

Leave a comment

Filed under IT, Media, Science

The ulterior money maker

That is the setting, but what is true and what is ‘planned’ is another matter. We have several settings, but let me start by giving you two parts before I start ‘presuming’ stuff, so you will be able to keep up. /The first one was the one I got last, but it matters. SBS (at https://www.sbs.com.au/news/article/trumps-america-wants-access-to-australian-biometric-data/ftomgcy5j) gives us ‘Australians’ personal data could soon be accessible by US agencies. Here’s why’ and we are given “Now, reports are emerging that the Australian government may be compelled to share Australians’ biometric data and other information with the US and its agencies, including ICE, as part of a compliance measure to vet travelers entering the country under its Visa Waiver Program (VWP). The Australian government, via the Department of Home Affairs, has so far declined to confirm whether it is currently complying with the demands or has plans to negotiate a data-sharing agreement. That’s despite the US setting a deadline of 31 December for finalising agreements with countries participating in its visa-free travel arrangement, including Australia.” This was nothing new to me, but as it is ‘now’ officially recognised, it adheres to a different field as well. We are further given “The proposed changes to the US’ vetting processes would primarily affect Australians eligible for the ESTA visa waiver program, which allows travelers from 42 countries to visit the US for up to 90 days visa-free, provided they first obtain an electronic travel authorisation.” I personally do not think it will end there, but it is the start that the United States desire, because if the first hurdle is passed, the rest becomes easy and it connects to the second article, even though you might not think that it does. The second article comes from Forbes (at https://www.forbes.com/sites/kateoflahertyuk/2026/02/09/the-new-chatgpt-caricature-trend-comes-with-a-privacy-warning/) with the setting of ‘The New ChatGPT Caricature Trend Comes With A Privacy Warning’ where we see “The ChatGPT caricatures are created by entering a seemingly benign prompt into the AI tool: “Create a caricature of me and my job based on everything you know about me.” The AI caricatures are pretty cool, so it’s easy to see why people are jumping on this viral trend. But to create the caricatures, ChatGPT needs a lot of data about you.” With the added “It means you are handing over a bunch of potentially sensitive data to ChatGPT — all to jump on a viral trend that will soon be forgotten. But that data could potentially be out there forever, at least on the social media platforms you post it on.” 

Source: Forbes

Now consider the new setting and this becomes laughably easy with the 700 platforms being added this year (source: Cleanview) they told us “the United States leads global data center growth with 577+ operating data centers and over 660+ planned or under-construction projects” that is the setting and I have warned people for this setting for over 30 years. Matching and adding data has been possible since the 80’s, but for the longest time we just never had the data technology (like massive hard drives) now we get suppliers like Kioxia with 245TB drives, with 1 petabyte in a few years. But for now you could use 4 of those bad boys and you are already there. Now to the larger setting. Do you think that the USA needs that much data in data centres to regulate the weather? 

It comes to the stage where the Dutch journalist Luc Sala is proven correct. We are headed towards a setting of the “have’s” and the “have not’s” (1988/1989) the market is already there now, the rest is trying to catch up. So we get a world the separates the enablers from the consumers and when we get that, we merely need to define the cut off point of the consumers. This is the world where those who do not consume enough become a liability to that system. He predicted it and now we see the execution towards that point and weirdly enough you are all helping the United States complete that setting, in one hand the government enabling the biometrics collection and in the pother hand the people trying to appease its ‘fanbase’ by handing over whatever they need towards ChatGTP to look cool and no-one considered that these two parts could be combined? This was relatively simple in 1992, now with an evolved Oracle and Snowflake it becomes mere Childs play and the data centres to capture the essence of 8,000,000,000 people is already out there. So where will you end up getting selected under? Because in this setting you do not get to have a choice. It is what governments and their spreadsheets and revenue driving numbers say you are to be. It is basically that simple.

So whilst you think you are doing the fool thing, others can salvage a lot more data out of that setting than places like ChatGPT can vouch for and remember, the Cloud Act 2018 we are told “to improve procedures for both foreign and US investigators to obtain access to vital electronic information held by service providers.” And in this case, anything that helps the US investigators is valid for capture and whatever that is is not precisely defined and whilst we think we are safe, we really are not and every ‘cool’ AI (merely NIP) is based on getting as much data as they can whilst giving you the option to look cool and there is nothing uncool about a caricature of yourself.  The fact that hundreds of these are floating around LinkedIn is reason enough to see that and when the second stage starts (basically American companies selectively poaching) and that is when governments finally realise that they all fell for the trap that was there next to phishing and data transfers and they let it all happen. 

So when you see the SBS article, fear the setting that they give “As well as extensive biometric data, including DNA, the proposal requests that inbound travelers to the US provide five years of social media history, five years of personal and work contact details, extensive personal information on family members, and even the IP address and metadata of any photos uploaded as part of their application. So far, the United Kingdom has signed onto the agreement, and the European Union is in negotiations.” Do you really think that this is needed to keep the United States safe, or is there more in play? The fact that the UK signed it is as I see it stupid beyond believe and this comes from the nation that seemingly holds ‘freedom of speech’ in such high regards.

Have a great day today, because as I see it, some governments are selling you out as you speak.

1 Comment

Filed under Finance, IT, Law, Media, Politics, Science

Repetition to be

This is what happens, I was rereading my last article (read: blog) and I noticed a few things. I stand by my word, but it could have been said more clearly and as I saw another piece of evidence, I thought it was important to add this to the ‘current’ (as in previous) article. I like clarity although plenty of people have an issue with the ways I write and it should be said that I don’t write for the masses. It just isn’t me and I am not here to win hearts, I leave that to the George Clooneys out there. 

There is still a abundance of speculation, although I have been in IT for over half a century, as such I can rely on presumption. And as the events are coming to pass, we are seeing elements. I personally think the Microsoft is not in a good place, although that part is speculative. You see no matter what OpenAI does, it will fail and it is running out of time. No, this setting comes before that. The EU is largely rejecting Microsoft and what they bring. In Germany at present 30,000 employees are switching from Microsoft to solutions like LibreOffice and Open Xchange. Denmark is switching more profound to similar solutions and France is shifting 500,000 workstations to open source software, equally schools and public sources are making equal changes. Then we get Italy who is switching 150,000 PC’s towards open-source platforms, Austria is already making the shift, at present if armed forces have shifted to open-source. The EU in general: Due to GDPR, European regulators have challenged the use of Microsoft cloud services over data transfers to the US. 

So as we see at present what some say will happen when President Trump switches the ‘internet’ to OFF and there is more happening and some presented stages are ahead by a decent amount. This implies that a large amount of up to 450,000,000 accounts are switching (I am assuming here the nearly all Europeans have some sort of Microsoft account). Just as they are deeper into the ‘fake’ AI setting and with the GDPR in place they cannot copy what is not in ‘their’ cloud. It is happening now, so don’t take notice of the doom speakers. 
Microsoft is seemingly doubling down on everything to make these copies happen before they are switched off. I don’t think they will make it, or at best a partial download and that will affect those 770 data centres that are being build (I cannot say how many of them are Microsoft), when the EU and its data falls away, I wonder how many of these centres will be canceled (for the weirdest reasons) and we will see a new complication. You see all these firms who ‘abandoned’ over 150,000 employees will suddenly see that this brain-drain will complicate life a lot more than they are happy with. 
So as Microsoft is now seeing this noose coming towards them (or they are walking towards their noose). What matters is that the timing was off and the bully tactics of President Trump will show them, that they came short of what they needed. If only they had 6 more months (or if the president would have behaved himself) they might have made it, but now as the world awakens that data is currency and they were about to be robbed of everything they had, the US will now need a different path, because when the data viability would be locked to the EU, and the US and most of the US corporations will be pushed in the open and lacking 450,000,000 data bringers a day, their setting for assumed revenue will go basically into the toilet.

Did you never wonder why the USA needed 770 data centres? And they are unlikely to be all Microsoft data centres, but there will be a fair amount. So what happened to that StarGate project? The information that I saw (source: CNBC) was that “10 data centers were being built in Abilene, Texas, with plans to expand to more states and countries, like the United Kingdom, Norway, Japan and the United Arab Emirates.” There is more to this and in light of these Data centers giving whatever they have to the United States, what are the plans now for the UK and Norway? And there are more questions for the UAE, how clear is it that they are handing over their data to the United States (OK, I apologise, they merely get insight into all data that is managed by an American firm, but does that not amount to the same thing) because Oracle, OpenAI and Microsoft are American firms. So I have no idea how Softbank fits into this as it is Japanese. As such, is Stargate LLC still happening? It is stated to be costing 500 billion? So what happened? All questions, but the doom speakers are out there. Even I am getting messages on LinkedIn on how the data goes dark if President Trump throws the switch. Why was I included? By a person I had never heard before. The US is now nervous because the EU will get others (read: Commonwealth nations) to do the same thing and as I see it, there is well over 80% chance that LibreOffice will be the most popular solution in 2026 and everyone is likely to switch. As such Microsoft just gained a lot of data space, but that might be merely my sense of humor. 

As for their “AI” settings, that system that would be doing a lot by “AI” and whilst we were told that “Microsoft is deeply integrating AI across its operations, with CEO Satya Nadella stating that 20%–30% of code in company repositories is generated by AI”, so whilst everyone is rejoicing, we should also consider that we still see (on a daily basis) that email delivery failures (blocked as spam by Outlook/Hotmail) or job application rejections (rejected by automated systems or after interviews) are still the setting of mainstream (not small exceptions) and that is the setting that comes with a dwindling consumer setting and Microsoft is spending a rather large chunk of the $700,000,000,000 that is due in 2026 (not all of it is Microsoft). So what happens when your customers reject you, but the bills are still due? Yup, that noose is coming towards Microsoft nicely. It is apparently a not so nice event, did anyone tell Satya Nadella this? I reckon we will see a much more serious Nadella now that he is going the way of the noose. 

And here the news separates a little as I was given a few hours ago (at https://www.cryptopolitan.com/qatar-taps-microsoft-to-build-ai-systems/) that ‘Qatar taps Microsoft to build AI systems to cater to government services’, as such dies Qatar knows what ‘befalls’ their data? The article gives us “The platform is also expected to help the ministry develop and deploy intelligent AI agents, an automated system capable of handling tasks ranging from processing applications to answering queries, without the lengthy development cycles traditionally associated with government IT projects. The factory will be built on Microsoft’s technology infrastructure and will be designed to integrate easily with existing government systems.” Yet as I see it, America has insight into all this because of the CLOUD Act (2018): 

So at what point is the setting “disclose data (emails, files, etc.)” even if there was a legal reason, the term ‘files’ is seemingly not limited, as such it could be anything and that is a hard pill to swallow. Before we know it it will contain any IP stored and I wrote about that risk (not connected to the cloud act) because of the debt the US had at that point (I think it was merely 25 trillion at that point), The danger that a desperate government will go looking through all that IP out there presented a little too much danger for my senses, so I made a lot of it public domain. I might not end up with anything, but no-one else will get those marbles for their own greedy needs. As I see it, the big-Tech doesn’t really like Public Domain, but that might be merely my gut feeling (which has no relation to any academic setting). Does Qatar know what it is in for? Perhaps they are, and a lot of it is wildly ‘rejected’ by influencers who are trying to ingratiate themselves to whomever (I mostly don’t care) 

The second bit of news which I saw just an hour ago and was published last year (at https://www.xda-developers.com/libreoffice-is-right-about-microsoft/) gives us ‘LibreOffice is right about Microsoft, and it matters more than you think’ here we see (written by Simon Batt)  “I reported on LibreOffice accusing Microsoft’s “artificially complex” Office XML format of being a “lock-in strategy.” The basis of LibreOffice’s argument was that Microsoft’s usage of the XML format deliberately locked people into using Office over open-source software. It also touches upon how Windows 10 is losing support soon, and how people are being corralled into Windows 11 whether they like it or not. However, LibreOffice touches upon an interesting point. While Microsoft is to blame for its practices, the fault also lies with us a little for going along with it. And you know what? They’re totally right.” It is a different setting and it sparked memories I had regarding the war Microsoft had with Netscape in the 90’s. 

Now that the world has LibreOffice it has choices, but because of the actions of the White House no one has a clue how the world will be hit and in what way. We can no longer trust someone telling us that it all will be fine, because that setting is as I see it near impossible. 

So, what will the rest of the world do? When they realise that the US has access to all data in data storage with American companies? I reckon it will upend the US economy to the largest degree and this is just the beginning. The red lights of rejection are glowing in more and more places and none of them are nice. President Trump made sure of that with his tariff threats and now that the settings are coming home to play, it is even more interesting. What will some do? What will the EU do and I reckon that the Middle East are looking for their own solutions, because they are clued in enough to see what is coming their way. It becomes a setting where no one trusts the United States and what they want requires trust, it is no longer there, so Microsoft is as I see it in a bind and it is largely their own fault. For me it is a little more complex, both Snowflake and Oracle are American companies. What happens there? If the US Administration wants to ‘hijack’ that data, the cloud act of 2018 allows them to do that. In how much danger are we really? I am willing to trust both Snowflake and Oracle. It is the US Administration I have little (read: no) faith in at present and that is not going away any day soon.

As such, I hope I am a little more clear now and I added a few more facts to this, so it is as I personally see it a win-win setting (for me at least). So, have a great day today and I will try to be a little more clear next time around.

Leave a comment

Filed under Finance, IT, Law, Media, Politics

Sighting the noose

This is almost a real setting. There is still a abundance of speculation, although I have been in IT for over half a century, as such I can rely on presumption. And as the events are coming to pass, we are seeing elements. I personally think the Microsoft is not in a good place, although that part is speculative. You see no matter what OpenAI does, it will fail and it is running out of time. No, this setting comes before that. The EU is largely rejecting Microsoft and what they bring. In Germany at present 30,000 employees are switching from Microsoft to solutions like LibreOffice and Open Xchange. Denmark is switching more profound to similar solutions. France is shifting 500,000 workstations. To open source software, equally schools and public sources are making equal changes. Italy is switching 150,000 PC’s towards open-source platforms, Austria is already making the shift, at present if armed forces have shifted to open-source. EU (General): Due to GDPR, European regulators have challenged the use of Microsoft cloud services over data transfers to the US. We see at present what happens when President Trump switches the internet to OFF and there is more happening and some are ahead by a decent amount. This implies that the bulk of 450,000,000 accounts are switching. Just as they are deeper into the ‘fake’ AI setting and with the GDPR in place they cannot copy what is not in ‘their’ cloud. It is happening now, so don’t take notice of the doom speakers. Microsoft is doubling down in everything to make these copies happen before they are switched off. I don’t think they will make it, or at best a partial download and that will affect those 770 data centres that are being build, when the EU and its data falls away, I wonder how many of these centres will be canceled (for the weirdest reasons) and will see a new complication. You see all these firms who ‘abandoned’ over 150,000 employees will suddenly see that this braindyain will complicate life a lot more than they are happy with. So as Microsoft is now seeing this nose coming towards them (or they are walking towards their noose). What matters is that the timing was off and the bully tactics of President Trump will show them, that they came short of what they needed. If only they had 6 more months (or if the president would have behaved himself) they might have made it, but now as the world awakens that data is currency and they were about to be robbed of everything they had, the US will now need a different path, because when the data viability would be locked to the EU, and the US and most of the US corporations will be pushed in the open and lacking 450,000,000 data bringers a day, their setting for revenue will go basically into the toilet.

Did you never wonder why the USA needed 770 data centres? So what happened to that StarGate project? Is that still happening? It is stated to be costing 500 billion? So what happened? All questions, but the doom speakers are out there. Even I am getting messages on LinkedIn on how the data goes dark if President Trump throws the switch. Why was I included? By a person I had never heard before. The US is now nervous because the EU will get others (read: Commonwealth nations) to do the same thing and as I see it, there is well over 80% chance that LibreOffice will be the most popular solution in 2026 and everyone is likely to switch. As such Microsoft just gained a lot of data space, but that might be merely my sense of humor. 

As for their “AI” settings, that system that would be doing a lot by “AI” and whilst we were told that “Microsoft is deeply integrating AI across its operations, with CEO Satya Nadella stating that 20%–30% of code in company repositories is generated by AI”, so whilst everyone is rejoicing, we should also consider that we still see (on a daily basis) that email delivery failures (blocked as spam by Outlook/Hotmail) or job application rejections (rejected by automated systems or after interviews) are still the setting of mainstream (not small exceptions) and that is the setting that comes with a dwindling consumer setting and Microsoft is spending a rather large chink of the $700,000,000,000 that is due in 2026. So what happens when your customers reject you, but the bills are still due? Yup, that noose is coming towards Microsoft nicely. It is apparently a not so nice event, did anyone tell Satya Nadella this? I reckon we will see a much more serious Nadella now that he is going the way of the noose. 

But what will the rest of the world do? When they realise that the US has access to all data in data storage with American companies? I reckon it will upend the US economy to the largest degree and I reckon it is just the beginning. The red lights of rejection are glowing in more and more places and none of them are nice. President Trump made sure of that with his tariff threats and now that the settings are coming home to play, it is even more interesting. What will some do? What will the EU do and I reckon that the Middle East are looking for their own solutions, because they are clued in enough to see what is coming their way. It becomes a setting where no one trusts the United States and what they want requires trust, it is no longer there, so Microsoft is in a bind and it is largely their own fault.

Leave a comment

Filed under Finance, IT, Media, Science

One topples the other

That is at times the setting. It is basically defined under ‘the cost of doing business’ and at times companies big and small go under from that overset risk. It is of course due to the pussies overhang nations that they made all this ‘tax deductible’ and as such governments and its citizens  pay the price in the end. So as we see seeking Alpha giving us ‘Microsoft: An OpenAI Problem’ (at https://seekingalpha.com/article/4867091-microsoft-an-openai-problem-rating-upgrade) a few settings with in the first place “First, given that 45% of RPO comes from OpenAI, MSFT stock is now a beta around the pessimism that surrounds this startup, especially in the last week”, as well as “the market is throwing the baby out with the bathwater. Microsoft is part of the software infrastructure industry, which is dragging down tech” which all seems to make sense, but in that same setting what does set the matter separate is “I don’t think Microsoft will write down its RPO due to OpenAI not being able to pay in the future, but I’m mindful shares could remain under pressure in the near term” and here I am considering the larger stage of “due to OpenAI not being able to pay in the future”. A setting that too many are overlooking. The ‘AI’ baby of all greed driven entities are not looking at what is holding up this figment value. It lost against Google’s Gemini and I understand and I also herald the setting that a lost battle is not a lost war, but too many are ignoring this fact because they are seemingly going all in and bad news is seemingly being filtered away. And in the second we see Seeking Alpha giving us “I think Microsoft has two main problems right now. One of them is called OpenAI (OPENAI). The sentiment around Sam Altman’s firm is anything but positive, and in this piece, I will discuss the key issue that is pressuring the most important startup in the world. The other factor is the selloff in software. Microsoft is part of the software infrastructure industry, and the risk-off move among investors is way too strong.” And why do I think that?

Because these vultures are feeding Oracle to the wolf wannabe’s and to the turmoil of the greedy driven capitalist waves of whatever floats their boat, whilst Oracle is the one stage that is the most  stable at present. Now that the game is close to up for some, now we see that Microsoft is having a problem all whilst no one is clearly digging into the settings of OpenAI as well as the settings that processors and even energy cycles should be having. These facts are casually thrown aside and there is something massively wrong with the stage we see here.

And as we are given (by Seeking Alpha) that “Aside from one point. RPO was up 110%, totaling over half a trillion dollars ($625B to be precise). While any company would have jumped double digits following this announcement, the fact that 45% of that RPO is attributed to OpenAI makes the quality of the backlog questionable (in my modest view)” because what ROI is OpenAI actually giving its shareholders? Where is the profit? It is not there and it will not be there for at least 5 years (a number voiced by some). As such the equation doesn’t seem to hold, but the investors went all in on this and they are playing some kind of poker (where you increase the investment doubling again and again until the pay off comes, I am not into poker) and that is the problem. So what is RPO here? Remaining Performance Obligation or Recovery Point Objective and in the second question setting, we wonder where that the Remaining Performance at the Recovery Point exactly is? You see, at no point in this article we see ROI (Return on Investment) and why not? Is the story that this is 5 years pending too hard to sell?

So, as I see it, it is 2008 al over again but the impact will be much harder, the economy does not have the resilience to go through that again and the US Administration is throwing a dozen sabot’s in that engine, as such the impact will be a lot harder and I spoke of that almost 6 months ago (not sure where) and as we look into this we see no answers and isn’t that weird? The players who are all about ROI and revenue forgoing that setting? So where are Sam Altman, OpenAI and Return on Investment? Even Bloomberg is telling its readers that ‘Microsoft’s Deal With OpenAI Now Viewed as a Risk, Not Reward’, so where are all these Bloomberg wannabe’s? It seems that the stakeholders are filtering out what some need to know right of the bat and that seems not to be coming (at present). In addition to all this Seeking Alpha gives us “The pressure on margins due to the buildout should have been priced in since October 2023! I think it is pretty much mainstream (ask your cab driver next time, for real) that the hike in depreciation is a natural effect of the AI buildout. However, and this is the main risk to being bullish right now, I don’t think the market is willing to recognize this fact. I think the market wants to see a return on the AI data center buildout, and any deterioration in earnings (both revenue growth and margins) is used as an excuse to head for the exit. This remains the largest risk, as Q3 will see a deterioration in Q3 gross margins (per management guidance).” Personally I see that Microsoft should survive this, but to what extent? I want to be clear here, because I have given an anti-Microsoft view before (they deserved this), but here I am out of my depth because I do not have an economic degree. But the people at Seeking Alpha did (a speculative expectation) and the stage of “pressure on margins due to the buildout should have been priced in since October 2023” is something that we haven’t seen, did we? At least I never did (mainly because I do not care) but the people who did, did they see that?

The entire setting smells like yesterday’s diaper (see: Baby Herman) and no one seems to be catching on that something doesn’t feel right. So will the investors claim foul play when they lose their investment? Will the stakeholders be held against the light? All valid questions and I am certain that no answer will follow by anyone who has the valid jurisprudence title and now that the Federal Reserve is no longer hands of Jerome Powell, it will be anyones guess what comes from that corner.

Have a great day today.

Leave a comment

Filed under Finance, IT, Media, Science