Tag Archives: ChatGPT

Accusation without evidence

That is the path I saw today on the BBC (at https://www.bbc.com/news/articles/cpqxgxx9nrqo), so hear me out. Even as we are being told ‘White House memo claims mass AI theft by Chinese firms’, we have to acknowledge that it comes from the same place that gave us the claims of “$18 trillion” in new investments, “prices are down” and blaming “Ukraine for starting the war with Russia, suggesting they should have surrendered territory to avoid it”. As such I am willing to disbelieve this. Also, China has DeepSeek, and it does what it does (so the speculation goes) at a fraction of the cost.

And whilst we are getting “The White House has said it will work more closely with US artificial intelligence (AI) firms to combat “industrial-scale campaigns” by foreign actors to steal advances in the technology. Michael Kratsios, Director of Science and Technology Policy, wrote in an internal memo that the administration had new information indicating “foreign entities, principally based in China” were exploiting American firms.” my mind goes in several different directions. The first being:

My mind is racing towards a different setting. You see, OpenAI and its ‘co-conspirators’ are not delivering on the premise that made too many people hand over well over half a trillion dollars; those people want to see a return on investment, none is coming, and now (not unlike the concept sellers of the 90’s) they need a blamable party. And what is easier than to blame China? Now, I am not saying that China is innocent, but in all this one might need evidence to make a case and none of it seems to be coming. As such we are given ““foreign entities, principally based in China” were exploiting American firms. Through a process called “distilling”, such firms are essentially copying AI technology developed by US companies, he said.” OK, I’ll bite, so where is the evidence? And why, if this distilling is a problem, are these outputs not better protected, so that there is no ‘distilling’? Simple question; perhaps when Oracle was needed, the cheapskates decided to rely on Azure? I have no idea, I am merely offering options as the evidence is clearly lacking.

So whilst the article ends with “While Kratsios did not name any foreign entities, leading AI companies like OpenAI and Anthropic have said they are dealing with such distillation activity.” I reckon that distillation culprits like House Spirits Distillery and Angostura Distillery were made exempt?

You think that I am making a joke, and I was, but this has been going on for months and these so-called high-priced (fake) AI corporations have been absent in their cyber security? How does this distilling happen? All things missing from the BBC article, and unlikely to be on the mind of the White House, as the article seems to imply from the very beginning, where we saw “it will work more closely with US artificial intelligence (AI) firms to combat “industrial-scale campaigns” by foreign actors to steal advances in the technology”. You see, the first question should be ‘How did they achieve this?’, which we do not see, and the state of their cyber security we don’t see either; both seem rather obvious questions in that setting.
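For context, the mechanics of ‘distilling’ are no great secret: you query a large ‘teacher’ model at scale and train a smaller ‘student’ on its answers. A toy sketch in Python (pure standard library; the ‘teacher’ here is a stand-in rule of my own invention, not any vendor’s actual model or pipeline):

```python
import random

# Toy "teacher": a fixed rule the student cannot inspect directly.
# It is only reachable through its outputs, mimicking API-only access
# to a large hosted model. (A stand-in, not any vendor's real model.)
def teacher(x):
    return 1.0 if x > 0 else 0.0

# "Distillation": query the teacher on many inputs and fit a student
# (here just a single threshold) purely on the teacher's answers.
def distill(n_queries=1000, seed=42):
    rng = random.Random(seed)
    log = []
    for _ in range(n_queries):
        x = rng.uniform(-1.0, 1.0)
        log.append((x, teacher(x)))   # each query is one "API call"
    positives = [x for x, y in log if y == 1.0]
    negatives = [x for x, y in log if y == 0.0]
    threshold = (min(positives) + max(negatives)) / 2
    return lambda x: 1.0 if x > threshold else 0.0

student = distill()
print(student(0.5), student(-0.5))   # the student now mimics the teacher
```

The point of the toy: the student never touches the teacher’s internals, only its outputs, which is why ‘how were those outputs left open for industrial-scale querying?’ is the first question worth asking.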

So, as I said, China might not be innocent, but in that same setting we see that the United States and their (fake) AI firms are apparently clueless. Don’t take my word for it, just look at the scraps on this table and see where the crumbs aren’t dealt with; I see no part in all this that shouts ‘China is guilty’, as that would require actual evidence. And if evidence is seemingly not required, consider the counter-idea that this AI scheme is part of a scam to wipe out trillions on the exchange. That might be the case, but the setting of ‘no evidence’ is apparently in effect and that goes both ways. As I see it, someone wants to see evidence of AI, and whilst they invested billions, there is a greed-driven fear that the profits all go to China because they stole the plans, but is that really so? Even distilled plans need refinement and the source data is missing. So, how would they proceed? The setting does not make complete sense to me. Any innovation requires a foundation, even DeepSeek would like to have one, or it is simply a sifting solution and the power remains with these innovative wannabes (sorry, a paraphrased term).

So have a great day and wonder why the accusation was made, because that setting is likely to be expressed in dollar numbers, and where is that money now?

Leave a comment

Filed under Finance, IT, Science

The butler did it

That was the primary thought I had when I faced the BBC article (at https://www.bbc.com/news/articles/c62j4ldp2jqo) telling me ‘OpenAI faces criminal probe over role of ChatGPT in shooting’, and we are given “Florida’s Attorney General James Uthmeier said on Tuesday his office had been looking into the use of the artificial intelligence (AI) chatbot by a man who allegedly shot several people at the campus in Tallahassee.” Personally I stand with “An OpenAI spokesperson said: “ChatGPT is not responsible for this terrible crime.”” I am hesitant to stand opposite a professional, especially with the lack of evidence shown in the BBC article. But the idea that some fake AI picks up a gun like the next Cyberdyne hoodlum (optionally looking strikingly like Arnold Schwarzenegger) and mops the floor with cadavers as the staccato of automatic fire hits campus is a little much. There is not even evidence in the form of logs of chatbot lingo; instead we get:

As for how the suspect, 20-year-old FSU student Phoenix Ikner, who is now in jail awaiting trial, interacted with ChatGPT, OpenAI’s spokesperson said the chatbot “did not encourage or promote illegal or harmful activity”.

So, as such, what evidence is there towards prosecuting OpenAI? I don’t mind, as it fuels the flames of entertainment, and trying to be a useful git I would like to offer Florida’s Attorney General James Uthmeier a thought that he should be aware of (after he has had his free pound of flesh from the media).

Because in the end, without evidence of ‘convolution’ of the mind or thoughts of (evidence-supported) ‘co-conspiracy’, FSU student Phoenix Ikner is likely to face a long stretch in Hotel Sing Sing with the optional inoculation by Dr. Death. I don’t call the shots, that is up to the judge in this matter.

But from the lack of evidence that the BBC gives, I reckon that OpenAI is off the hook. That is merely me, and in contrast to my usual banter on economy, I do hold law degrees (invalid in the United States). As such I have to wonder if the article had anything to do with that shooting at all. Over 30% of it is about ChatGPT and it holds a photo of Sam Altman, so it seems that at least two parties are more interested in media exposure, because (as I personally see it), if it was about the crime, we would get an image of Florida State University, optionally with grieving people. So what gives?

I might have oversimplified the issue, what do you say? Have a great day, oh wait. I need some exposure too, so let’s add to this by switching to YouTube. Yesterday I saw a video by Nancy Wheeler, and when it troubled my mind I wanted to rewatch parts of that video, so I searched for “Nancy Wheeler economy”, which was needed as there is a fictional character named Nancy Wheeler who messes up your internet soufflé. She tells us that there is a crisis coming, and she states it is underway already. As such I wondered, and for the life of me, I could not find the real-life Nancy Wheeler outside of YouTube. That doesn’t mean she does not exist, but with the facts given I was weirdly surprised that the media had not picked that up. She gives us that there are three weaknesses creeping up on all of us:

Now it sounds massive and cool (which makes the media not picking this up weird), and she talks a nice deal. I am lacking economy knowledge, so I was almost mesmerized; a really pretty, youthful young sprout asking for my attention has that effect on me. But there was something in what she said. She stated: “The buyers [of the debt] have changed, the maturities have shortened and the exit doors have gotten smaller”.

This caught me, because that sounds about right, so I wonder why the media didn’t pick this up. It is not that I want to prove that she was right, but considering the reasoning that the media wants its pound of flesh, they didn’t go for debunking this either. So why the silence? Don’t get me wrong, for all we know Nancy Wheeler could be a massively pretty doom speaker, and that tends to be an automatic media magnet (she is more appealing in looks than Jerome Powell ever will be), and as I am blissfully ignorant on economy there is no way I can tell the one apart from the other (facts).

So is it correct? Is she wrong? She made the point that the debt has surpassed the $97.1 trillion mark. Is it a gimmick for the call to ‘accurate’ reporting? The video (at https://www.youtube.com/watch?v=_TqjlaiU_N8) gives us the goods and I let you decide how right or wrong she is.

Well, that is all there is for this Friday (for me); when you all rejoin me I will have more to say (in approximately 20 hours). Have a great day the next 20 hours.

Leave a comment

Filed under Finance, Law, Media

What is real?

That is at times the question, the setting where someone is trying to hand us a fake. Now, I am a most outspoken person in regards to AI: it doesn’t exist (yet), and whilst the media is all about AI (for their digital dollars), the real setting is when it will arrive. No matter how clever programmers become, it is still a programmers’ Wild Wild West. So when I took notice of the BBC (at https://www.bbc.com/audio/play/w3ct8mf3) I had different questions. We are given “Anthropic – one of Silicon Valley’s leading AI firms – recently announced that they have built a model which is too dangerous to be released to the public. Instead, they are only giving access to the model to a handful of big companies, to help them find security vulnerabilities. The company says the model has already found weak spots in “every major operating system and web browser”. Is this a genuine example of a company acting responsibly, or more of a carefully calibrated publicity move?”

OK, the premise seems clear: whatever they call AI, let’s call it Fake AI, might have become a tad more potent, and giving it to a chosen few might be the way to go. I personally would advise Dario Amodei to talk to IBM; this is not some prearranged setting. As far as I know IBM is the most advanced player in Shallow Circuits, and that is one of the thresholds to get to Real AI; until that moment comes all AI is fake. Optionally he should talk to Google too, as I have no idea how far their shallow circuits are. But it is one of the three remaining thresholds before we can get to a Real AI setting. The others are the Trinary Operating System and decent weeding (like removing arranged data from verifiable data). We already have quantum technology, so that is on par. The weeding part comes, I reckon, when shallow circuits are done, because when we combine this with the TOS (my personal gag here and I am giggling) we have the makings of perfect data dirt weeding. But the setting also evokes other thoughts.
If Anthropic is this far ahead, what the hell is Sam Altman doing with all the billions he is seemingly squandering? You see ‘OpenAI to spend over $20 bln on Cerebras chips’. I am not debating the setting, it might be the strongest there is (for now), but if this market is thrown upside down in less than a decade, it implies that Sam Altman just wasted billions on chips that are basically obsolete by the end of the year. And in that same setting there is the quote “OpenAI is valued at approximately $852 billion”; what will be left of that when 2027 comes calling? I have supporting ideas. If Anthropic is ahead of OpenAI, as I reckon Google is too, who will pay $852 billion for a third-place setting? And in addition we know that DeepSeek is out there, but no one knows how far ahead or how far lagging it is. Whatever it can do, it reportedly does at a much lower cost, and when did business walk away from cost reductions?

All thoughts that come to mind, and the media is weirdly unaware of them, so who are they working for? Not the audience, that much is seemingly clear. But if you want to dismiss my calling, that is fair. So feel free to investigate your own data, and don’t use one source, use at least half a dozen sources; when you do, you will figure out that the equations and the money drop are not evening out. It is all reminiscent of the 90’s, when people would pay mountains for mere concepts. I thought we had done away with those settings?

Still, the current call is with Anthropic and Dario Amodei. I wonder how quickly we will see an update on how that is going. I am sure it might take several weeks, but in the meantime we can consider: did OpenAI overtake Google Gemini yet? If so, by how much? And if not, what are these headlines of chips for billions, when Lays has them for $3.99 (ketchup taste optional)?

And yes, 20,000,000,000 is a real number, but so is the return on investment, and where is that number with OpenAI? As such, have a lovely day, and if you are not investing in FakeAI, try enjoying your coins by acquiring some coffee or tea; they both tend to wake up the senses.

Leave a comment

Filed under Finance, IT, Media, Science

With the coming of Linux

That is not entirely the truth, Linux has been here for some time, but now France is going the way of Germany and Denmark, pushing Microsoft out of the door. I reckon that Microsoft played their cards too early and against the wishes of their audience. We cannot blame the Trump administration for everything, so as France goes, I reckon that Monaco will also dial down the Microsoft beast, and let’s not forget Liechtenstein. It has deep roots with both France and Germany, and it is labeled one of the world’s wealthiest countries, boasting a GDP per capita exceeding $200,000, which is uncannily high. It has a specialized financial services industry and also has deep roots with Switzerland. So, there is a chance that this might also end the power of Microsoft in the land of cheeses (banks also). I don’t think that Microsoft will yield the field; Excel, from its origins against Lotus 1-2-3, has become the power system to call home for many in the financial industry, and there is no way that others can dethrone Excel, but that is pretty much the only application that is sitting safely and pretty.

TechCrunch gave us (at https://techcrunch.com/2026/04/10/france-to-ditch-windows-for-linux-to-reduce-reliance-on-us-tech/) the setting “The country said it plans to move some of its government computers currently running Windows to the open source operating system Linux to further reduce its reliance on U.S. technology.” It is high time that this happened, and it still might be done in time before all these data centers hold onto all EU data; they’ll still hold a lot, but not everything, and that is when the dollar value of Microsoft goes into decline. Brian Sozzi (Executive Editor, Yahoo Finance) gave us “Goldman Sachs analyst Gabriela Borges pinned the company’s 23% plunge this year to two factors in a new note on Monday. First, upward revisions to capital expenditures without commensurate upward revisions to Azure cloud sales. This resurfaced concerns about returns on investment and Azure’s competitive positioning against peers such as Amazon’s (AMZN) AWS.” I reckon that the hundreds of millions of users that Microsoft will lose in 2025 will add to that pain, but to what extent, I personally have no idea.

With the American administration the way it is, that pain is only getting worse, because the bulk of the world does not like that this American administration can get access to any data server that is founded on American soil, even if these data centers are in Denmark (or France, or the EU); these people want out as fast as they can. And that is happening right now. I don’t think that all EU nations will leave, still the idea that Satya Nadella lost roughly 450,402,641 users will have to hurt his ego a tiny bit. And I reckon that the stock price of 370.87 will equally take a hit, as such the valuation of 2.75 trillion (aka 2,751 billion, or 2,751,000 million) will decrease. I have no idea how much it will decrease, but as I see it, the gaming section was hit harder than they expected and now we see other venues take the proverbial dive. That is before people realize that the 27% stake in OpenAI is also seeing some ‘hindrance’, and they quite recently invested $13 billion in that field. All whilst OpenAI also had a deal with AWS for $50 billion; there are rumors that the Microsoft legal divisions are ready to get their shares back, but I have no idea how deep this is and how far along this is. But when we see this on top of the setting with Fractal Vision (aka DeepSeek, with AI for a fraction of the cost OpenAI is heralding), it seems that when the dust settles, the chance of Microsoft seeing 2 trillion vanish like snow in a volcano is not entirely unrealistic.

How deep these losses go is unknown to me, but you could optionally ask Jamie Dimon (phone: +1 212-270-6265) at JPMorgan Chase & Co. He would know better than me. Still, France is a new cog in this delayed revenue-fading machine. And it has the option of dragging several nations with it, and from there the losses merely increase. The old expression goes ‘It never rains but it pours’, and I reckon that Satya Nadella has never seen a version of Compound Troubles explode on his table; and here I was thinking that Microsoft CT was about community training. Ah well, you learn something new every day.

Well, I have to stop now, because I am giggling slightly too intensely to enjoy coffee at present. So you all have a great day and consider downloading LibreOffice; it is 245 MB, free and installs easily. Time for me to consider another setting in gaming later today.

Leave a comment

Filed under Finance, IT, Media, Science

Confusion speaks its mind

So here I was, one day in the past, and I see a BBC article. I saw the headline, I saw the ‘bully approach’ and initially I ignored it. It was not the BBC; there was no setting that truly interested me. I was thinking of a few settings towards IP that could give Apple (and optionally Meta) a nice boost. As I was mulling over the ideas I was having, in comes the CBC about 10 hours ago, or better stated, I noticed their article, and now something clicks in my mind. I started rereading the two articles. The BBC (at https://www.bbc.com/news/articles/cn48jj3y8ezo) gives us ‘Trump orders government to stop using Anthropic in battle over AI use’ with ““We don’t need it, we don’t want it, and will not do business with them again!” Trump wrote in a Truth Social post on Friday.” Of course, if he doesn’t want it, there must be a good reason why people might want to use it, and we are given “Anthropic is mired in a row with the White House after refusing demands that it agree to give the US military unfettered access to its AI tools. The refusal led US Defence Secretary Pete Hegseth to say he’s deemed Anthropic a “supply chain risk”.” And here is the quandary where there should be some clarity. The idea that the US military has unrestrained or uninhibited access to any AI is dangerous. And that is merely looking at it from THEIR point of view. We saw over the last 5 years a few examples where Pentagon staff used whatever USB key they had, optionally opening their systems to backdoors, and this can affect the Pentagon in several ways, including: Human Interface Device (HID) Spoofing, Malware Infection via Social Engineering, Exploiting OS Vulnerabilities, and Juice Jacking (compromised public ports/cables), among a few other ways. Even in this decade more than one system seemingly ended up on the danger list.
So, ‘someone’ now wants to grant AI unfettered access, which opens the doors wide: AI access to data involves sophisticated, automated and often continuous interaction between intelligent systems and vast data sources, including internal corporate databases, cloud storage and public web content. It constitutes a critical, high-speed, high-stakes component of the modern AI ecosystem and raises significant security and privacy challenges. And this is not some ‘fear mongering’. There is a lot of AI work still to be considered, and because AI doesn’t exist and this is all DML on several interacting layers, there are dangers to be seen. A mere week ago Microsoft had to ‘confess’ that it had accessed confidential emails of Microsoft users. Now consider this happening on a serious level in the Pentagon. It has well over 50,000 desktop computers within its building, with reports from 2014 indicating at least 18,000 were part of specific virtualized infrastructure. Now consider that we have seen the accusation that “Based on reports in early 2025 and 2026, OpenAI has accused Chinese AI startup DeepSeek of “inappropriately” distilling, or copying, the capabilities of OpenAI’s models (specifically ChatGPT and its reasoning models like o1) to train its own competing, low-cost models (such as DeepSeek-R1)”. As such, the dangers of unfettered access can go in two directions, and that sets the bar for distilling from the Pentagon a lot lower than anyone could find acceptable. As such, there is every chance that Russia is already considering the massive win it could gain once that unfettered access merely hits one system that was transgressed upon. Because the greedy and the stupid will do anything to propel the setting of self, whilst not caring what others could gain in that setting as well.

So whilst some will consider the dangers of “The company said that “designating Anthropic as a supply chain risk would be an unprecedented action — one historically reserved for US adversaries, never before publicly applied to an American company.” Anthropic said the “designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.”” no one seems to be considering that the opposite is a lot more dangerous. So whilst some focus on the stage of “Anthropic had said it sought narrow assurances from the Pentagon that its AI chatbot Claude would not be used for mass surveillance of Americans or in fully autonomous weapons. The Pentagon said it was not interested in such uses and would only deploy the technology in legal ways, but it also insisted on access without any limitations. The government’s effort to assert dominance over the internal decision-making of the company comes amid a wider clash over AI’s role in national security and concerns about how increasingly capable machines could be used in high-stakes situations involving lethal force, sensitive information or government surveillance. Trump said Anthropic made a mistake trying to strong-arm the Pentagon. He wrote on Truth Social that most agencies must immediately stop using Anthropic’s AI but gave the Pentagon a six-month period to phase out the technology that is already embedded in military platforms.” as I personally see it, it is the accumulation of the stupid and the technologically ignorant all combined in one package. And that is before we get to mass surveillance. You see, combine mass surveillance with data distilling and the United States of America will be handing the data of 349 million Americans straight to China and Russia. This is not AI, this is DML. That means it comes with the hangups and limitations of a programmer. So when this goes wrong, it goes wrong in a massive way.

As such what will people like President Trump and Pete Hegseth say? Do they think that the response ‘Oops’ will cover it?

So whilst CBC (at https://www.cbc.ca/news/business/trump-anthropic-feud-ai-9.7109006) gives us “U.S. President Donald Trump, U.S. Defence Secretary Pete Hegseth and other officials took to social media to chastise Anthropic for failing to allow the military unrestricted use of its AI technology by a Friday deadline, accusing it of endangering national security after CEO Dario Amodei refused to back down over concerns the company’s products could be used in ways that would violate its safeguards.” this is the setting we expect to see, and it will be the undoing of several people, because as I see it “U.S. President Donald Trump, U.S. Defence Secretary Pete Hegseth and other officials” is the start of what comes next. You see, the internet doesn’t forget, and these ‘other officials’ have sealed their fate with this action; there is no ‘He told me to do that’, they were instrumental in assisting to hand over the data of the population of the United States of America to optionally both China and Russia. Do you feel safe now?

And in response to this setting we see “The dispute stunned AI developers in Silicon Valley, where venture capitalists, prominent AI scientists and a large number of workers from Anthropic’s top rivals — OpenAI and Google — voiced support for Amodei’s stand in open letters and other forums.” And that should have been a clear message that the competition was on the side of Amodei; so, why would that be? Whilst people in the Pentagon (seemingly) forgot about that router with password ‘Cisco123’, there is every chance that these DML engines will be cleverly distilled by people controlling systems like DeepSeek and whatever the Russians have. I should buy another egg timer, because this is a setting that might gain me a few coins, especially as several people are blind to the danger that is coming for them. And consider one additional setting. It is said that:

So what happens when distilling comes with an additional insertion of data? I can’t wait for that setting to lose balance, when the training data in American data centers starts losing authentication and reliability markers. But that is likely a story for another day.
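To sketch that worry (a toy again, pure standard library, with a hypothetical flip_rate knob of my own invention, not a claim about any real pipeline): if someone can insert or flip a fraction of a teacher’s recorded answers before the student trains on them, the student inherits the poison along with everything else.

```python
import random

# A stand-in "teacher" rule, standing in for a large hosted model.
def teacher(x):
    return 1.0 if x > 0 else 0.0

# A query log in which some fraction of the teacher's recorded answers
# has been silently flipped before the student ever sees them.
# flip_rate is a hypothetical knob for illustration only.
def poisoned_log(n=1000, flip_rate=0.3, seed=7):
    rng = random.Random(seed)
    log = []
    for _ in range(n):
        x = rng.uniform(-1.0, 1.0)
        y = teacher(x)
        if rng.random() < flip_rate:
            y = 1.0 - y              # the inserted/poisoned answer
        log.append((x, y))
    return log

# How much of the log still agrees with the real teacher: any student
# trained on this log can do no better than the log itself.
def agreement(log):
    return sum(1 for x, y in log if y == teacher(x)) / len(log)

print(agreement(poisoned_log(flip_rate=0.0)))   # clean log: 1.0
print(agreement(poisoned_log(flip_rate=0.3)))   # poisoned: roughly 0.7
```

The design point of the toy: the student has no way to tell a flipped answer from a real one, which is exactly why authentication and reliability markers on training data matter.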

Have a great day today.

Leave a comment

Filed under IT, Law, Media, Military, Politics, Science

The fear behind us

There is a setting, one that requires scrutiny and demands closer looks. You see, I do not completely agree with the setting that The Guardian gives us (at https://www.theguardian.com/technology/2026/feb/26/how-to-replace-amazon-google-x-meta-apple-alternatives) with the illustrious title ‘Leave big tech behind! How to replace Amazon, Google, X, Meta, Apple – and more’. The first big thing is that there is no mention of Microsoft in that title. That is the very first thing that comes to mind, especially as CoPilot was mentioned earlier this week as sifting through our confidential emails. I can drop the ‘alleged’ as Microsoft admitted to this and basically offered ‘Oops’ as an implied reason. So what gives?

It starts with “So many ills can be laid at its door: social media harms, misinformation, polarisation, mining and misuse of personal data, environmental negligence, tax avoidance, the list goes on. Added to which, Silicon Valley’s leaders seem all too keen to cosy up to the Trump administration, to shower the president with bribes – sorry, gifts – and remain silent about his worsening political overreach. And that’s before we get to the rampant “enshittification”, as the tech writer Cory Doctorow describes it, which means that by design many big tech products have become less useful and more extractive than they were when we originally signed up to them.” OK, I can go along with this. And the sentence “many big tech products have become less useful and more extractive than they were when we originally signed up to them” gets a mention from me because some of these ‘culprits’ seemingly have no idea what innovation is; for that you have to look towards China, specifically Huawei and Tencent. So we get to the first hurdle.

We are given “Google has cornered 90% of the search market for the past decade, but it is often no better, and sometimes demonstrably worse than its rivals, perhaps on purpose – Doctorow has called Google: “the poster-child for enshittification” citing its alleged strategy of worsening search quality so that users spend more time on the site. But changing the default search engine on any device is extremely easy. I’ve been using Ecosia for years. Instead of using your searches to fill corporate coffers, it uses them to plant trees. The Berlin-based company claims to have planted nearly 250m trees since it launched in 2009 (you can even get your own personal counter to feel extra virtuous). Ecosia commits 100% of its profits to climate action (over €100m so far), produces more clean energy than it consumes via its own solar plants, and collects minimal data on its users. Ecosia’s search results are not always as thorough as Google, admittedly (in the “news” category, for example), though the toolbar does give you options to search via Google and Bing if you need to.” The issue is that Ecosia is, for all intents and purposes, Microsoft Bing. So this is seemingly a sales talk by a journalist, because there is a massive problem finding anything by Microsoft reliable. And then we get the real stuff; Microsoft knows it is in hot water, so we are given “The French company Qwant is similarly privacy-oriented (its slogan is “The search engine that values you as a user, not as a product”) and is now mostly independent (having started out based on Bing). It is now partnering with Ecosia to build a new “European search index”.” Yes, but Microsoft is American, and as such your data will be copied and frowned on, browsed through to their hearts’ content. If this is wrong, Ecosia and Qwant had better clearly state that they are independent of Microsoft, because it is still the issue in Europe, and for all they state that their data is completely secure, the issue becomes: where are the backups?
If they are on an American cloud or server, the setting of privacy is set to 0%. 

I can agree with the browser chapter, and even as I still rely on Google (it has never failed me), I get that not everyone is in that chapter of things. I get the Office part. I myself downloaded LibreOffice (download only, no installation yet) and I will look at it at some point; the Apple apps do their work brilliantly. So we are given “Many of them, including Austria’s military and local governments in Germany and France, are switching to LibreOffice, created by the Berlin-based, nonprofit, The Document Foundation. Businesses and individuals are doing the same. Ethical Consumer has used LibreOffice for some time, says Fraser. “It’s an open-source version of Word, and all of the Office tools. It works and looks basically the same.”” I personally reckon that this is the problem Microsoft has, and getting the data from Ecosia might be their last handhold on European data. This is not a given, but I expect that this is their inside track into Europe to some degree. And whilst everyone is concerned with the privacy of data, I reckon that, similar to the setting of 1998-2002, no one is digging and questioning the stages of backups. But that might merely be me, and as I am no longer living in Europe, I casually don’t care.

Then we see the mobile settings with a shoutout to Fairphone in the Netherlands. I have nothing against Fairphone, but it always makes me wonder if Fairphone had the same idea that Tulip had in the 90’s. That doesn’t make it wrong, it is merely a business ploy that should be considered. I am now and always have been a Google guy. So when we see “There is a catch: most of these phones still rely on Google’s Android operating system, but any phone can be fully “de-Googled” with the /e/OS operating system (it comes as standard with Murena phones), developed by the global, mostly European, nonprofit, e Foundation.” I can think of a way where Google can counter this with their Pixels. When the consumer can select Google or a Linux version that does most of the stuff, Google clearly wins in several chapters. I reckon that these players can merely snap up market share because of this; when Google leaves it to the consumers, Google wins nearly automatically. Oh, and there is no mention of HarmonyOS in all this, and I reckon that these smaller players are adjusting to HarmonyOS as we speak, or cater to, or appease that branch. Not everyone in Europe is ‘China hating’ material. And that is merely the smallest setting of these parts. I am personally not touching the shopping side. I was raised as a follower of ‘Support your local hooker’, a phrase from the late 70’s. In that age we got malls, supermarkets and such, and due to that escalation loads of local stores went through a foreclosure setting. In that same way I don’t order from Amazon. I have nothing against Amazon; they closed the gap for rural places that had no way to get stuff, and over 60% of Europe and 71% of rural USA is now served. As such Amazon did them right. I just believe that I should get to the local stores to get what I need. I only had to resort to Amazon twice in the last 10 years. So I am happy.
And all these Amazon haters can go sit in a corner trying to work out the function of a cheese slicer (revelation: the red corners that are diminishing have figured it out).

But my issue is that Microsoft is shown in a ‘favorable’ light; they aren’t, and they aren’t due to that setting as I personally see it. The fear behind this is not Big Tech, it is the policy that comes through the CLOUD Act (2018). It gives America too much ability to get to our data and in several cases non-American IP, which is even more frightful. These hundreds of data centers have no reason to exist if the CLOUD Act (2018) were made illegal, that is how I see it, and there is no saving Microsoft, because we get ‘blunder’ after ‘blunder’, and how long until we get another ‘Oops’ setting, but now with corporate IP dropped in some AI hole? That is the larger fear that I see and there is no stopping it. Whilst corporations are breathing the AI cloud through wannabe’s who want to move up in the world, that data is most likely to get compromised, and as corporations are not setting the HR and data loops to any scrutiny, this is likely already happening and will continue to happen until the then valueless corporations see that they had to act a lot sooner than the day before all their data is in other hands. We already have Thomson Reuters v. ROSS Intelligence (2025), Bartz v. Anthropic (2025/2026), Disney & NBCUniversal v. Midjourney, and the best case is United States v. Heppner (2026) where we see that documents drafted using a public, consumer-grade AI tool were not protected by attorney-client privilege or the work product doctrine. And that is the setting that people miss. Should someone at IBM use that setting, this work becomes public. So consider that this is not IBM, but Microsoft using Copilot or OpenAI (ChatGPT): the work of your corporation becomes for all intents and purposes public domain. Did you sign up for that?

There is plenty in the article that makes sense, but the parts that aren’t mentioned are a larger fear creator than anything you are trying to hide from. Just an idea to consider. Have a great day this day.

Leave a comment

Filed under IT, Media, Politics, Science

Alternative Indiscretion

That is the setting and it is given to us by the BBC. The first setting (at https://www.bbc.com/news/articles/c8jxevd8mdyo) gives us ‘Microsoft error sees confidential emails exposed to AI tool Copilot’, which is not entirely true as I personally see it. And as the Microsoft spin machine comes to life, we are given “Microsoft has acknowledged an error causing its AI work assistant to access and summarise some users’ confidential emails by mistake.” As I see it, whatever ‘AI’ machine there is, a programmer told it to get whatever it could, and there the setting changes. With the added “a recent issue caused the tool to surface information to some enterprise users from messages stored in their drafts and sent email folders – including those marked as confidential.” As I personally see it, the system was told to grab anything it could and then label as needed; that is what a machine learning programmer would do and that makes sense. So there is no ‘error’. The error was that this wasn’t clearly set BEFORE the capture of all data began, and these AI wannabe’s are so neatly set to capture all data that it is nothing less than a miracle it had not surfaced sooner. So when we laughingly see Forbes giving us, a week ago, ‘Microsoft AI chief gives it 18 months—for all white-collar work to be automated by AI’, how much of that relies on confidential settings or plagiarism? Because as I see it, the entire REAL AI is at least two decades away (optionally 15 years, depending on a few factors) and as I see it, IBM will get to that setting long before Microsoft will (I admittedly do not know all the settings of Microsoft, but there is no way they got ahead of IBM in several fields). So, this is not me being anti-Microsoft, just a realist seeing the traps and falls as they are ‘surfacing’, all whilst there are two settings that aren’t even considered. Namely Validation and Verification.
The entire confidential email setting is a clear lack of verification as well as validation. Was the access valid? Nope, me thinks not. As such Microsoft is merely showing how far they are lagging, and lagging more with every setting we see.
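To make that validation point concrete, here is a minimal sketch, entirely my own construction: the field names, labels and policy are assumptions for illustration, not anything Microsoft actually runs. The idea is simply that an assistant checks a sensitivity label BEFORE a message may be surfaced, and fails closed when the label is missing or unknown:

```python
# Hypothetical sketch: validate a message's sensitivity label BEFORE
# an AI assistant is allowed to ingest or summarise it.
# All names and labels here are illustrative assumptions.

ALLOWED_LABELS = {"public", "internal"}  # anything else is off-limits

def may_surface(message: dict) -> bool:
    """Return True only if the message carries an explicitly allowed label."""
    label = message.get("sensitivity")
    # Fail closed: a missing or unknown label blocks access by default.
    return label in ALLOWED_LABELS

inbox = [
    {"subject": "Lunch menu", "sensitivity": "public"},
    {"subject": "Merger draft", "sensitivity": "confidential"},
    {"subject": "Untagged note"},  # no label at all
]

visible = [m["subject"] for m in inbox if may_surface(m)]
print(visible)  # only the explicitly allowed message survives
```

The design choice is the fail-closed default: a message without a label is treated as confidential, rather than grabbed first and labelled later.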

And then there is the setting we see (at https://arab.news/zzapc) where we are given ‘OpenAI’s Altman says world ‘urgently’ needs AI regulation’, and I don’t disagree on this, but is this given (by him of all people) because Google is getting too much of a lead? It is not without some discourse from Google themselves; the BBC (at https://www.bbc.com/news/articles/c0q3g0ln274o) also gives us ‘Urgent research needed to tackle AI threats, says Google AI boss’. Consider that a loud ‘Yes’ from my desk, but in all this, the two settings that need to be addressed are verification and validation. These two will weed out a massive amount of threats (not all, mind you) and that comes in a setting that most are ignoring, because as I told you all around 30 hours ago (at https://lawlordtobe.com/2026/02/19/the-setting-of-the-sun/) in ‘The setting of the sun’, it took the BBC reporter a mere 20 minutes to run a circle around what some call AI. I added there too that Validation and Verification were required, because the lack thereof could let trolls and hackers set a new economic policy that would not be countered in time, making them millions in the process. Two people set that in motion and one of them (that would be me) told you all so around December 1st 2025 in ‘It’s starting to happen.’ (at https://lawlordtobe.com/2025/12/01/its-starting-to-happen/), as such I was months ahead of the rest. Actually, I was ahead by close to a decade, as these were two settings that come with the rules of non-repudiation, which I got taught at uni in 2012. As such the people running to get the revenue are willing to sell you down the river. How does that go over with your board of directors? And I saw parts of this as I promised that 2026 was likely the year of the AI class cases, and now as we see Microsoft adding to this debacle, more cases are likely to come. Because the greed in people sees the nesting error of Microsoft as a Ka-Ching moment.

So as we take heed with “Sir Demis said it was important to build “robust guardrails” against the most serious threats from the rise of autonomous systems.” I can agree with this, but that article doesn’t mention either validation or verification even once, as such there is a lot more to be done in several ways. If only to stop people from relying on Reddit as a ‘valid’ source of all data. Because that is a setting most will not survive, and when the AI wannabe’s go to court and they will be required to ‘spout’ their sources, any of them making a mention of ‘Reddit’ is on the short track to being the losing party in that court case. What a lovely tangled web we weave, don’t we? So whilst we see (there) the statement “Many tech leaders and politicians at the Summit have called for more global governance of AI, ahead of an expected joint statement as the event draws to a close. But the US has rejected this stance, with White House technology adviser Michael Kratsios saying: “AI adoption cannot lead to a brighter future if it is subject to bureaucracies and centralised control.””

Consider that court cases are pushed through a lack of bureaucracy. I am not stating it is good or bad, but in any court case, you merely need to look at the contents of ‘The Law of Intellectual Property: Copyright, Designs & Confidential Information’ and that is before they rely on the Copyright Act, because there is every chance that Reddit never gave permission to all these data vendors downloading whatever was there (but that is pure speculation by me). And in the second setting we are given “AI adoption cannot lead to a brighter future”; the bland answer from me would be: “That is because it doesn’t exist yet”, and these people are banking on no one countering their setting, and that is why so many of these court cases will be settled out of court. Because the truth of this is that the power of AI depends on certain pieces being in place, and they are not. Doubt me? That is fine, and I applaud that level of skepticism; you merely need to read the paper “Computing Machinery and Intelligence”, written by Alan Turing in 1950, to see how easily the stage is misrepresented at present.

So is there good news? 
Well, if you want to get your dollars in court and you are an aggrieved party, your chances are good, and the largest players are set to settle against the public scrutiny that every case brings to the table. And in this day of media, it is becoming increasingly easy as I see it. There is no real number, but it is set to be in the billions; one case was settled at $1.5B. As such there is plenty of work for what some call the ambulance chasers, and they will soon get a new highway, the AI Chasers. Leave it to the lawyers to find their financial groove, and as I see it, people like Michael Kratsios are bound to add to that setting in ways we cannot yet see (we can see some of it, but the real damage will be shown in a year or two). So as some are flexing their muscles, others are preparing their war fund to get what I would see as an easy payday.

A setting that is almost certain to happen, because there are too many markers showing up the way I expected them to show. Not nice, but it is what it is.

Have a great day as you are all moving towards this weekend (I’m already there)

Leave a comment

Filed under Finance, IT, Law, Media, Politics, Science

The setting of the sun

That is what I saw, the setting of the sun. A simplistic setting that was about to happen since the sun came up. We got the news from the BBC, where we are given ‘I hacked ChatGPT and Google’s AI – and it only took 20 minutes’. I can see how this happens. It doesn’t surprise me, and the story (at https://www.bbc.com/future/article/20260218-i-hacked-chatgpt-and-googles-ai-and-it-only-took-20-minutes) gives us the niceties with “Perhaps you’ve heard that AI chatbots make things up sometimes. That’s a problem. But there’s a new issue few people know about, one that could have serious consequences for your ability to find accurate information and even your safety. A growing number of people have figured out a trick to make AI tools tell you almost whatever they want. It’s so easy a child could do it.” I think it is not quite that simple. But any ‘sort of intelligent setting’ can be fooled if it is not countered by validation and verification. It can give way to way too much ‘leniency’ and that is merely the start. Get 10,000 pages to say that ‘President Trump was successfully assassinated at T-15 minutes’ and the media will go into a frenzy in mere minutes, and everyone uses that live feed in a matter of moments. So when a sizable trolling server farm connects the rather large setting of consumers to that equation, the story is brought to life and that AI centre will be seeking all kinds of news to validate this. Well, not validate; the current systems corroborate. Now, let’s face it, no non-American cares about President Trump, but what happens when someone takes that approach with, for example, Lisa Su (CEO AMD) and stops her accounts whilst seeding this setting? You get a lot of desperate investors trying to place their money somewhere else, whilst the trolls take their money, make it legal tender and buy all the stock in space, and when the accusations are rejected they sell their shares with a nice bonus. Think I’m kidding?
This is the result of Near Intelligent Parsing (NIP), but it cannot work without clear settings of validation or verification. So whilst we get “It turns out changing the answers AI tools give other people can be as easy as writing a single, well-crafted blog post almost anywhere online. The trick exploits weaknesses in the systems built into chatbots, and it’s harder to pull off in some cases, depending on the subject matter. But with a little effort, you can make the hack even more effective. I reviewed dozens of examples where AI tools are being coerced into promoting businesses and spreading misinformation. Data suggests it’s happening on a massive scale.” So what happens when economic settings lack certain verification and are also cutting corners on validation? Do you think my settings are far-fetched?
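As a toy illustration of that corroborate-versus-validate gap (purely my own sketch, with invented sources and numbers, not the mechanism of any real chatbot), consider how a system that merely counts how many pages repeat a claim is trivially gamed by a troll farm, while one that validates the source before counting is not:

```python
# Toy sketch (my own construction): naive corroboration counts sheer
# repetition, so 10,000 seeded pages outvote one real report.
# Validation-first counting ignores unverified sources entirely.

pages = (
    [{"claim": "X happened", "source": "trollfarm.example"}] * 10_000
    + [{"claim": "X did not happen", "source": "newswire.example"}]
)

VERIFIED_SOURCES = {"newswire.example"}  # assumed allow-list of vetted outlets

def corroborate(pages, claim):
    """Naive corroboration: every repetition counts, whoever wrote it."""
    return sum(1 for p in pages if p["claim"] == claim)

def validated_support(pages, claim):
    """Validation first: only pages from verified sources count at all."""
    return sum(1 for p in pages
               if p["claim"] == claim and p["source"] in VERIFIED_SOURCES)

print(corroborate(pages, "X happened"))        # 10000 - the troll farm wins
print(validated_support(pages, "X happened"))  # 0 - the troll farm counts for nothing
```

It is a deliberately crude model, but it shows why a 20-minute blog-post trick works against a corroborating system and fails against a validating one.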

This was always going to happen, and whilst economic channels are raving about the error of mankind, consider that “AI hallucinations are confident but false or misleading responses generated by artificial intelligence, particularly large language models (LLMs). These errors occur when AI fills in data gaps with inaccurate information, often due to faulty, biased, or incomplete training data”. Now think of what someone can achieve with doctored training data, which then gets added to the operational data of any fake AI (NIP is a better term). This is the setting that has been out there for months, and whilst organisations are playing fast and loose with the settings of credibility (like: that doesn’t happen now, there is too much time involved), someone did this in 20 minutes (according to the BBC). So if you think that Thyme is money, then you better spice up, because it is about to become a peppered invoice (saw one cooking show too many last night).

What we are about to face is serious and I personally think that it is coming for all of us. 

So have a great day. And by the way? I just thought of a first verification setting (for other reasons), as such I keep on being creative. So, how is Lisa Su? #JustAsking

1 Comment

Filed under Finance, IT, Media, Politics, Science

The deluded new congregation

That is the thought I had when I looked at ‘AI challenges the dominance of Google search’ (at https://www.bbc.com/news/articles/c1dx9qy1eeno) where we see a picture of a pretty girl and the setting that “Like most people, when Anja-Sara Lahady used to check or research anything online, she would always turn to Google. But since the rise of AI, the lawyer and legal technology consultant says her preferences have changed – she now turns to large language models (LLMs) such as OpenAI’s ChatGPT. “For example, I’ll ask it how I should decorate my room, or what outfit I should wear,” says Ms Lahady, who lives in Montreal, Canada.” It seems like a girly girly thing to do (no judgement), but the better angels of our nature, as invoked by Abraham Lincoln in his 1861 inaugural address, require reliability, and the fake AI out there doesn’t have it. It is trained on massively inaccurate data; some sources give us that Reddit and Wikipedia are the main sources of training data in excess of 60%, whilst it uses Google data for a mere 23.3%. As such your new data becomes a lot less accurate, and when I seek information, I like my data to be as accurate as possible. And of course she adds a little byline: “Ms Lahady says her usage of LLMs overtook Google Search in the past year when they became more powerful for what she needed. “I’ve always been an early adopter… and in the past year have started using ChatGPT for just about everything. It’s become a second assistant.” While she says she won’t use LLMs for legal tasks – “anything that needs legal reasoning” – she uses it in a professional capacity for any work that she describes as “low risk”, for example, drafting an email.” I would hazard the thought that she wasn’t even old enough to touch a keyboard when she ‘early adopted’ Google.
We now see more and more the setting that influencers (to be) will shout the “AI vibe”, but the setting is nowhere near ready, and whilst we look at the place, consider that she might be doing it in French (Montreal, Canada), so where is the linguistic setting in all this, BBC? So whilst we get “A growing number are heading straight for LLMs, such as ChatGPT, for recommendations and to answer everyday questions.” My thought is ‘At what cost to our private data?’ And then the BBC makes a BOOBOO. We are given “Traditional search engines like Google and Microsoft’s Bing still dominate the market for search. But LLMs are growing fast.” A booboo? Yes, a booboo. You see, Microsoft Bing holds a mere 4% market share whilst Google has 90%; this story is nothing less than a fabricated setting with a few people dancing to the needs of Suzanne Bearne, the technology reporter. What? Nothing to write about?

I did very much like the statement “Professor Feng Li, associate dean for research and innovation at Bayes Business School in London, says people are using LLMs because they lower the “cognitive load” – the amount of mental effort required to process and act on information – compared to search.” I am willing to accept it as the sheepish hordes are all going towards the presented bright light of ChatGPT, but nothing more than that. I wonder when people will learn that the AI trains are nothing of the sort; for the most part they seem to be the presented solution that faster is better, but the tracks are not that reliable at present, and they forget to give that view on the setting of what some laughingly call AI. And the end of this article does give an interesting ploy. It comes with:

“Nevertheless, Prof Li doesn’t believe there will be a replacement of search but a hybrid model will exist. “LLM usage is growing, but so far it remains a minority behaviour compared with traditional search. It is likely to continue to grow but stabilise somewhere, when people primarily use LLMs for some tasks and search for others such as transactions like shopping and making bookings, and verification purposes.”” That sounds about right, and it comes with a dangerous hangnail. It becomes a new setting where phishers and hackers can get into the settings of YOUR data, because there is always a darker side, and that side shines brighter than getting Google to surrender what they have, which is often not laden with identity markers. But then I could be wrong.

So whilst some will like the new congregation, the dangers of that new congregation are not given to you by the media, because caution does not translate to digital dollars, but flames of disruption do. Just keep that in mind.

Have a great day.

Leave a comment

Filed under IT, Media, Science

Questions

That is what I was thrown: questions, and quite a few. To get there I need to take you on a little journey. It was around 1988 that I got my fingers on some defence data (can’t tell you which one). The data showed results of some kind (I had no idea at that time what results they were), but the part that mattered was the fact that they had log files, and these files gave locations. It comes with the setting of log files. These files give the hacker way too much information: what solutions are being used, what IT architecture is in play. In those days I was a simpleton; I never realised the power that this kind of information had, or as some hackers said in this setting, “Copy me, I want to travel”. This part matters, because around 2014 (after the traitor Manning gave the files to Wikileaks) I got my hands on some of them. The compression used was one I had never used before and it took a few days to get the program. What I saw was that log files were here too. It wasn’t that obvious, but I noticed them, and these log files gave part of that current architecture to whatever hacker got (or was given) access to it. So, a setting that was about 37 years old. This setting has been in place for that long a time, so as you see this, we can start with the articles. Keep what I just gave you in mind.
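As a small, hypothetical sketch of the fix that was missing for those 37 years (the patterns below are my own illustrations; real log formats and real scrubbers vary widely), log lines could be stripped of architecture-revealing details before they are archived or ever leave the building:

```python
# Hypothetical sketch: scrub architecture-revealing details from log
# lines before sharing or archiving them. Patterns are illustrative only.
import re

PATTERNS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "[ip]"),        # IPv4 addresses
    (re.compile(r"(?i)\b(cisco|oracle|azure)\S*"), "[vendor]"),  # vendor/product hints
    (re.compile(r"/(?:etc|var|opt)/\S+"), "[path]"),             # filesystem layout
]

def scrub(line: str) -> str:
    """Replace each sensitive fragment with a neutral placeholder token."""
    for pattern, token in PATTERNS:
        line = pattern.sub(token, line)
    return line

print(scrub("login from 10.0.0.7 via Cisco123 config at /etc/router.conf"))
# -> login from [ip] via [vendor] config at [path]
```

The point is not these three patterns; it is that a log which has been scrubbed tells the hacker nothing about what solutions are in use or what architecture is in play.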

The article was given to us by NDTV (at https://www.ndtv.com/world-news/openai-accuses-deepseek-of-distillation-what-it-is-how-it-works-us-china-tensions-11002628). I got the news from Reuters, but they are behind a paywall, so NDTV gets the honour. We see ‘OpenAI Accuses DeepSeek Of Distillation: What It Is, How It Works’ and it comes with “In the AI world, distillation is a common technique where a smaller or newer AI model learns by studying the responses of a larger, more advanced model”. And we also see “The company told the House Select Committee on China that DeepSeek allegedly relied on a technique known as “distillation” to extract responses from advanced US AI systems and use them to train its own chatbot, R1,” according to a memo obtained by Reuters. The American AI giant stated that the Chinese firm was finding clever ways to bypass safety systems and trying to take advantage of the technology that US companies spent billions of dollars developing.” Now consider that (according to some) “OpenAI is valued at approximately $500 billion, cementing its position as the world’s most valuable venture-backed company”. When you get that, and when you realise that log files could be used to ‘distill’ information, now imagine that this information could lead to corporate knowledge. So when you realise that this setting was out there for almost 40 years, do you think that more concise solutions would have been needed? So when we see that Sam Altman is prone to ‘excuses’, like the setting with Nvidia, the stage with Microsoft and now this, what is Sam Altman not telling his audience? Isn’t anyone taking that leap?
So whilst I remember that at least one of the Pentagon routers still has the admin password set to “Cisco123”, you might consider that this article (as well as the Reuters version) is a preamble to bad news. And when you consider that Americans have an overactive dislike of anything Chinese (like DeepSeek), and when we get to “In the AI world, distillation is a common technique where a smaller or newer AI model learns by studying the responses of a larger, more advanced model. Instead of training that model completely from scratch, the newer model observes and mimics the advanced model’s answers and behaviors.” The setting I gave you makes the case for better protection even more pressing. Especially as this impacts an expected $500,000,000,000 valuation. There are days that I don’t have that amount in my wallet (100% of the time), so I am left with questions. In the first, why was there no better protection, and in the second, how did DeepSeek get access to them? I would normally tend towards the inside job notion. And that setting is seen (personally and speculatively) on a few levels and in a few ways, but happy go lucky, the media isn’t on that level yet (or ever). So does anyone else have the idea that something doesn’t seem to add up or match the stage of a 500 billion dollar solution? Just a few questions come to mind at this point.
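For what it is worth, the distillation the article describes can be sketched in a few lines. This is a toy numeric illustration of the idea only, not DeepSeek's or OpenAI's actual pipeline, and every number in it is invented: the student model is trained to shrink the gap between its own output distribution and the teacher's temperature-softened one.

```python
# Toy sketch of "distillation" as described in the article: a student
# model learns by mimicking a teacher's output distribution rather
# than training from scratch. All numbers here are illustrative.
import math

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """How far the student's distribution q is from the teacher's p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher_logits = [3.0, 1.0, 0.2]   # the big model's raw scores for 3 answers
student_logits = [2.5, 1.2, 0.1]   # the small model's raw scores

T = 2.0  # a higher temperature softens the teacher's output
teacher_probs = softmax(teacher_logits, T)
student_probs = softmax(student_logits, T)

# Training would repeatedly nudge the student's logits to minimise this.
loss = kl_divergence(teacher_probs, student_probs)
print(round(loss, 4))
```

Notice that the only thing the student needs from the teacher is its responses, which is exactly why the accusation is so hard to prove and why the outputs would need protecting in the first place.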

Have a great day today; they’re about to have breakfast in Toronto and I kinda miss that frisky cold atmosphere whilst drinking an elephant coffee (Jumbo cappuccino with full cream milk and three raw sugars) whilst nibbling on some sandwich (nearly anything goes there). So enjoy your day today.

Leave a comment

Filed under Finance, IT, Politics, Science