Tag Archives: technology

The Bull what?

I was confronted with an Oracle article this morning, it came with the complements of the Insider Monkey (at https://www.insidermonkey.com/blog/oracles-orcl-backlog-drives-its-bull-thesis-according-to-analysts-1726682/). The article ‘Oracle’s (ORCL) Backlog Drives Its Bull Thesis According To Analysts’ which might be a conundrum, so lets take a look. We are given “The major factors in the firm’s bullish thesis on ORCL are its massive backlog and its ability to cater to increasing AI investments in the US. Oracle has a remaining performance obligation (RPO) of $553 billion, which offers good visibility into the company’s future earnings.” I would go with that a backlog gives stock and future of a company value, but that might be an oversimplification. And $553,000,000,000 is nothing to sneer at. It is seemingly more than the overall business that several nations have and in this case it is more then Norway gets on an annual level. So I would go with that, but what is a bullish thesis? 

Well, in short “A bull thesis is a structured argument supporting the belief that a specific stock, sector, or the overall market will rise in value, driven by positive catalysts like strong earnings, innovation, or economic expansion. It focuses on growth potential, such as AI-driven productivity, high revenue backlogs, or increased market share.” (Source: Simply Wall Street).

So I had it correct the first time over (a few days ago). There was nothing new under the hot sun, but the next bit ‘surprised’ me a bit. It was “The analyst also pointed out that a major risk in the bull thesis is the customer concentration. A large part of this backlog comes from OpenAI. OpenAI intends to invest a total of $600 billion in computing power by 2030. Previously, in October, OpenAI CEO Sam Altman said the company could spend up to $1.4 trillion on infrastructure by 2033. One month ago, BNP Paribas analyst Stefan Slowinski commented on how this particular risk is now reducing for Oracle Corporation (NYSE:ORCL):” So in short, most of the backlog comes from OpenAI, if OpenAI fails (not a weird thought) Oracle stumbles as would be the case, so the backlog is due to mostly one customer and that is a rusk. How big a risk remains to be seen. The people wanting OpenAI to succeed are numerous and ‘THEY’ would be reducing the risk like the metal dealer reducing the risk of riveting and downplaying potential dangers. This went well before the Titanic saw the shores of the ocean (bottom of the sea), but what happens afterwards? Now, riveting is largely supported, there are whole fleets still out there based on riveting. But what happens when the next big thing comes (like welding), so that is where we are right now. But on the horizon we see Google DeepMind, Anthropic, Meta, DeepSeek and something called Cohere. I believe Oracle is in a good space as whatever comes next will require a system that deal with data and I believe that the only competitor here is Snowflake. As such yes, there is a risk to (what some call) the Bull thesis, but the risk is seemingly small as nothing can match Oracle and Snowflake can only partially cover Oracle (as I see it) and I have some reservations on BNP Paribas analyst Stefan Slowinski as BNP Paribas and OpenAI have a multifaceted relationship involving financial analysis, infrastructure, and competition within the AI landscape and this article dos not bare this out. But in that setting we also fail to see the setting that ‘SoftBank Secures $40 Billion Loan to Fund $30 Billion OpenAI Investment’ (source: TradingView) this matters as there is a backlog and they still need loans/investment funds? And the second setting is given to us (at https://www.nssmag.com/en/lifestyle/44761/sora-openai-shutdown) where we see ‘Understanding OpenAI’s U-turn on Sora’ where we see “The development team of Sora, the artificial intelligence software by OpenAI that allowed users to generate realistic videos from a simple prompt, recently announced the shutdown of the app. It is a sudden and highly significant change, one that is expected to produce notable effects in the technology and entertainment sectors, with repercussions that could extend well beyond the U.S. market. The shutdown of Sora is not relevant only for the company led by Sam Altman, but also for other players active in the field of generative AI applied to video production. Google, for instance, now finds itself in an advantageous position in this area, with the concrete possibility of consolidating its leadership in the generation of realistic AI-based videos – thanks to its tool Veo.” So some will see this as a boost to Google (DeepMind) but this happens before these tracks became financially viable (read: paying off) and these elements will create some sort of minor shockwave. The problem is that 3-4 shockwaves can create a massive customer turnover (like towards a competitor) and even if it doesn’t ‘damage’ Oracle, it might hurt prospects in that near future. Consider that this backlog of $553 billion reduces it to a mere $125,000,000,000 Still a large number, but that is when it starts raining men on Wall Street (aka: watch out below).  All elements overlooked in Insider Monkey and the non-Chinese media is not too bitty in the DeepSeek settings. So we are mostly unaware how their next version of its engine is. All elements that will influence the view on Oracle. I still have faith that Oracle will pull through successfully, but these pesky investors are at present more jittery than a room full of roaches as you turn on the lights. It might not be the best setting for a long term ‘understanding’ and that is something Oracle has to deal with. 

Have a great day, I am now 120 minutes from breakfast, although if I was in Vancouver I could enjoy another lunch in the Nightingale like a Cache Creek Beef Tartare, yummy.

Leave a comment

Filed under Finance, IT, Science

Are we really that dim?

I saw an article that the BBC put out last week. I must have missed it, because I tend to look at BBC news each day. So the article (at https://www.bbc.com/news/articles/cqj9kgxqjwjo) is giving us ‘Meta and TikTok let harmful content rise after evidence outrage drove engagement, say whistleblowers’ and here I am not really that clear why needed whistleblowers. The media has been doing this for the better part of a decade. These morning shows (what they call entertainment) are driven to push the boundaries of engagement. A carefully placed half witted word is all it takes to drive up engagement. And driven to all this is the digital dollar, because these pages also drive advertisement money for all concerned. As such it is to be expected that Meta and others (in this case TikTok) would be on that same horse. So whilst we are given “Social media giants made decisions which allowed more harmful content on people’s feeds, after internal research into their algorithms showed how outrage fuelled engagement, whistleblowers told the BBC. More than a dozen whistleblowers and insiders have laid bare how the companies took risks with safety on issues including violence, sexual blackmail and terrorism as they battled for users’ attention.” And this comes with the added “The whistleblowers who spoke to the BBC documentary, Inside the Rage Machine, offer a close-up view of how the industry responded following the explosive growth of TikTok, whose highly engaging algorithm for recommending short videos upended social media, leaving rivals scrambling to catch up. A senior Meta researcher, Matt Motyl, said the company’s competitor to TikTok, Instagram Reels, was launched in 2020 without sufficient safeguards. Internal research shared with the BBC showed comments on Reels had significantly higher prevalence of bullying and harassment, hate speech, and violence or incitement than elsewhere on Instagram.” I am not surprised and it comes with the added concerns that we aren’t being given here. You see, the word “Advertisement” isn’t given once in this article. And advertisement is driving this. Simply because advertisement is money, it is printed money that can be handed over anywhere and the second stage that the advertisement lobby is now quietly becoming a lot bigger than the NRA or the National Association of Realtors (NAR), which spent approximately $63.5 million in the USA, followed closely by the U.S. Chamber of Commerce at over $53 million. The advertisement lobby knows that they need to stand in the shadows (for now), so whilst we might think that the Association of National Advertisers (ANA), which  represents over 1,000 companies and 15,000 brands, focusing on marketing strategies and lobbying against restrictions on advertising, or the American Association of Advertising Agencies (4A’s), they represent advertising agencies, focusing on industry standards and advocacy. And there are a few more. None of them is making any sounds to the setting of these settings, because their pennies are depending on all this and these pennies when multiplied by a few billion become a serious amount of money and that money is coming in every day through engagement and flames. So at what point will we see the deeper story behind all of this?

Because at some point this lobby becomes too large to be unseated and whilst the NRA is in the United States, the advertisement lobby is working on a global setting and no-one is taking that serious. So, whilst some agencies (locally) are vetting for legality, decency, and truthfulness. The moment it crosses borders they become pretty silent.

In this I wonder when the BBC takes up that baton and takes a much harder look at what they are leaving in the dirt. What parts of all this is not being picked up by anyone? 

These are simple question, but the answers might show that there is more to all this and that is seemingly not seen.

Have a great day. 

Leave a comment

Filed under Finance, IT, Media, Science

I am not economical savvy

That is the setting and we can conclude that I am intelligent, but not that economical savvy. I have known for the length of my years that if you spend less then you get, you might get rich at some point. I know it is a little simplistic, but I am not an economist. I know data, I can read, write and comprehend data, almost any data. So when I saw something almost a week ago, I wrote ‘Is it insight or data?’ On March 16th (at https://lawlordtobe.com/2026/03/16/is-it-insight-or-data/) and I stood behind Oracle, not because I am so economical, but because I know technology and Oracle is an essential technology. In some ways it is now chased by Snowflake, but that is the nature of the beast. Oracle might be at the top, but it is forever being chased by whomever wants to get into number one. Snowflake is speeding past all the others, but it will not (for some time) go past Oracle. So when I saw that Oracle had half a trillion in their pipeline, the other news made little sense and I wrote about that and 4 days later (the day before yesterday) we get a fool, a Motley fool no less (at https://www.fool.com/investing/2026/03/20/news-oracle-billion-backlog-ai-stock-buy/) give us ‘Oracle’s $553 Billion Backlog Could Make It the Most Important AI Stock of 2026, But Is It Too Late to Buy?’ Pretty much exactly as I said it was. But they give us more. We also see “It’s worth noting that Oracle stock has lost 49% of its value in the past six months, owing to multiple concerns, including a reliance on OpenAI for a significant share of its contractual backlog and taking on sizable debt to build artificial intelligence (AI) data centers. However, those concerns took a backseat after Oracle’s beat-and-raise quarterly report. Let’s see what worked for Oracle last quarter. Then, let’s take a closer look at its valuation to find out if it’s too late to invest in this AI stock that has the potential to soar impressively for the rest of the year”, with an additional “Oracle’s quarterly revenue jumped 22% year over year to $17.2 billion, exceeding the $16.9 billion Wall Street estimate. The company’s non-GAAP earnings growth of 21% to $1.79 was a bigger surprise, as analysts would have settled for $1.70 per share. The company’s cloud infrastructure business also outperformed expectations, with revenue increasing by 84% year over year to $4.9 billion. That was higher than the $4.74 billion consensus expectation. Even better, Oracle’s cloud infrastructure business is likely to continue growing at a terrific pace in the future. Its remaining performance obligations (RPO) jumped a whopping 325% year over year in the quarter to $553 billion.” Now lets be clear, I get most of that data, but unlike that fool Motley there is a lot I do not see, mainly because I am not an economist. 

And here you might think that there is confusion, because I have (and still) say that AI does not yet exist. But data does exist and when it comes to data Oracle is the Rolls Royce of data systems. So, whatever these people want to make you believe, they can do it better with a good data solution. And all DML (Deeper Machine Language) as well as interactions with LLM (Large Language Models) require the best solution (which gets you to Oracle with optional Snowflake) so whatever data solution these people select, they need to rely on their data ventures and that puts Oracle in the picture and when you comprehend that, the half a trillion dollar pipeline starts making sense. 

What astounds me is that some people like to make some kind of consideration and as I see it, Oracle is a long term investment. You might think it is about the wealth of Larry Ellison and you would be partially right there, he brought Oracle to life (as the saying goes) and whilst some people are in it to play the markets, Oracle is above that. It is the safe place to put your dineros (as the expression goes). 

So why Oracle? As I see it, for over 30 years the people who wanted to get into data emulated and copied what Oracle did and called it innovation, but there is only one Oracle, the rest is almost a joke (OK, Snowflake might be the exception, but it is not as great as Oracle). Some tech firm bought Sybase and flogged it off as THEIR baby and they did well, but it is not the same a being the actual innovator. So as some call it, some stock is up to scrap and as I see it, it would be Oracle. 

Whilst I am writing this something occurred to me and this falls on the mattress of Google. We are given “Oracle (ORCL) is widely considered a strong buy by analysts following robust Q3 2026 earnings, surging cloud demand, and a massive $553 billion backlog. With a 4-star rating from Morningstar, the stock is viewed as moderately undervalued with significant growth potential, although some analysts caution about high capital expenditures and heavy reliance on AI partner OpenAI.” And the two points are in the first “following robust Q3 2026 earnings”, so they decided on earning that will not be completed for another 6 months? Explain that to me, because as far as I know time travel is not a valid method of predicting earnings. Then we get “heavy reliance on AI partner OpenAI.” Why reliance? So, who calls the shots there? Is there a given that OpenAI demands Oracle? I get that people who are in the ‘spell’ of AI require Oracle, that makes sense. But think of that for a moment. There are numerous data vendors. Do you think they all select Oracle because Microsoft/AWS/Google/IBM are all Dodo’s? It is all dependent on what solutions these customers have now and that might set the bar for what data is selected, don’t get me wrong. Oracle is the best as such I applaud their actions. But I have seen my share of boardroom meetings where someone was in favour of whatever they had, as such I have an issue on the use of ‘reliance’ as in ‘heavy reliance’, but that might just be me.

In the end, we all take what we can get and data people select Oracle for the simple setting that it is the best. So select what you think is best for you and consider that Oracle will continue no matter what, because there can only be one number one. 

Have a great day, It is not Sunday here. Time to imitate a sawmill as It is massively past midnight.

Leave a comment

Filed under Finance, IT, Media, Science

Is it insight or data?

Two days ago I saw two things close together. The first one was a Bloomberg terminal with nearly everything in red, even player like Oracle and Google were in the red. Not sure what brought it on, oil price, a clown in Washington DC setting the buildings on fire or perhaps someone in California doing something similar. The reason is unknown to me. On that same day an article (at https://www.mirrorreview.com/news/oracle-earnings-reveal-contract-backlog/) by the Mirror Review gives me ‘Oracle Earnings Reveal $553B Contract Backlog Due To Massive Cloud Demand’, now I do not know this source, but the two don’t make sense. Oracle has a $553B backlog (which is nice as I am looking for a job), but this sets two parts in motion against one another. So if there is an outstanding pipeline worth half a trillion dollars. There should be no red mention for Oracle, but that might be my non-economic side taking considerations in its own hands. 

So when we see “Oracle generated $17.2 billion in revenue, representing a 22% increase from the same quarter last year. Profit also improved, with earnings per share reaching $1.27, up 24% year over year. Cloud services were the main growth engine. Oracle’s cloud revenue reached $8.9 billion, growing 44% compared with last year.” The setting of Bloomberg red makes no sense to me and I wonder if there is orchestration in play. Don’t sign off yet, there is additional evidence. MorningStar (at https://www.morningstar.com.au/stocks/oracle-earnings-solid-execution-secures-revenue-target-mitigates-investor-concerns) gives is ‘Oracle earnings: Solid execution secures revenue target and mitigates investor concerns’ another statement that makes no sense, in light to a workable half a trillion dollar pipeline. Here we see “We are content with Oracle’s pace to expand its data center footprint. Demand for AI training and inference continues to outgrow supply, which supports our accelerating growth outlook for Oracle Cloud Infrastructure. OCI revenue should grow 77% in fiscal 2026 and 117% in fiscal 2027. Ninety percent of the 400-megawatt data center capacity Oracle delivered in the quarter was on or ahead of schedule. Considering the scale of OCI’s buildout, a strong record of on-time delivery is evidence of solid execution that should maintain customer trust and enable faster time to revenue.” As well as “We raise our fair value estimate for narrow-moat Oracle to $220, from $215 previously, based on higher-than-expected near-term demand for AI compute. Shares look undervalued following the stock’s 8% after-hours rally. Clarity around Oracle’s funding and market demand can mitigate investor concerns around OCI’s future growth. However, we reiterate our Very High Morningstar Uncertainty Rating for Oracle, as the demand and competitive landscape for AI cloud can change rapidly over the long term. Our base case assumes that AI infrastructure will continue to see high demand that allows Oracle to reach its $225 billion revenue goal by fiscal 2030. In this case, there is a clear path for Oracle stock to converge with our fair value estimate as a result of on-time capacity delivery each quarter.

So, how does “our fair value estimate” make sense? What is it based on? There is also the setting of “we reiterate our Very High Morningstar Uncertainty Rating for Oracle” It sounds like orchestration by a Wall Street party. How can any firm that sets over half a trillion pipeline to this? Lets face the simple fact that this is out of reach for a player like Microsoft who ‘gives’ us “Microsoft reported a record annual revenue of $281.7 billion for fiscal year 2025” it might not be bad (me thinks) but it is merely half the revenue that Oracle has in its pipeline. And I reckon that this is merely the beginning. As places like the UAE has the Iranian stage, banks and several others need a clear line of communication via service centers, call centers and customer care and as I see it, Oracle is the best in these data vaults as I see it, the pipeline might grow in several directions because it is not just the UAE, I reckon that organisations in Europe and Japan will have similar settings soon enough.

And as we see other sources giving us “Remaining performance obligations, which is a useful metric when we want to gauge how revenue might be developing in the near future, grew by as much as 325% year-over-year. Looking forward to Q4, ORCL expects revenue to keep growing by as much as 18% to 20%, while for fiscal 2026 they expect total revenue to be $67 billion and in fiscal 2027 to be $90 billion. Client concentration in the backlog—meaning OpenAI—remains a concern, however.” I feel that there is orchestration, but it is a mere feeling. I lack the economic education to make sense of this. But one would agree that a $553B pipeline (read: backlog) implies that the need for Oracle is high and I reckon it will be growing even more soon enough, but that boat part is a presumptuous setting, not because there are others (like Snowflake), but the track record of Oracle speaks for itself and even if Snowflake has a great track record, these organisations go with what is safe and Oracle tends to be the safe route that large organisations ‘value’, but that might be merely my insight into this setting.

Have a great day.

1 Comment

Filed under Finance, IT, Media, Science

A simple red alert

There are moments I ignore them, how ever this evening I was alerted by Forbes (at https://www.forbes.com/sites/daveywinder/2026/03/01/search-screen-with-google-lens-tool-compromised-to-steal-credentials/) to the setting of ‘Google Lens Chrome Browser Tool Compromised To Steal Credentials’ Now, first of all, I am a oogly googly Googler as such I to a point revere the solutions that Google gives to you an me and this alert is not on Google, but it is their solution that gives this predicament. Apparently (according to Davey Winder) who is a technology journalist who covers cybersecurity news and research and as he works for Forbes I reckon that his credentials are OK. Still we are given “it has been reported that a previously legitimate Chrome extension, used to search your screen with Google Lens, was recently compromised and turned into a malicious credential-stealing tool instead. Here’s what you need to know.” So, as I initially contemplated to let this rest for 12 hours and give it in the next story, I thought it might be better to reset the timeline and tell you as soon as I am aware of this. The usual media is all about stretching timelines and I thought it was important not to be mistaken with those losers. So as we are given “Google Chrome is the world’s most popular, or at least most-used, web browser, with estimates putting the number of users fast approaching 4 billion in 2026. That it is a target for attackers is absolutely no surprise to anyone, least of all Google which has an armoury of protections in place to help prevent users from threats. Sometimes, however, a threat gets past those protections. This seems especially true when it comes to Chrome browser extension threats, as recently exposed when a reported 30 malicious AI assistant extensions were uncovered. This latest threat is also of the extension variety, but this time was particularly insidious in that it exploited a previously trusted and legitimate tool.” And I have to admit that on the Apple I got a weird setting a few days ago that involves GoogleUpdater.APP I don’t know if it is related, but these two facts make me alert you all with the setting that at present there are a few hangups with Google. Now, there is nothing to be concerned about, because as I see it, Google is all over this already and we will be ‘treated’ to the lollies of repair soon enough, optionally it is already being rolled out. 

The additional information is “As per Bleeping Computer, the QuickLens extension, which formerly had a Google featured badge, grew to 7,000 users and enabled users to run Google Lens searches from within the Chrome browser. All was cool, until February 17, a little more than two weeks after ownership of the ownership exchanged hands, when the developer sold up. “A new version, 5.8, was released that contained malicious scripts that introduced ClickFix attacks and info-stealing functionality for those using the extension,” Bleeping Computer said.” And it comes with the additional “A Featured, reviewed, functional extension changes hands, and the new owner pushes a weaponized update to every existing user.” As such my question becomes Who is this new owner? It is followed by the last quote “I have approached Google for a statement, but the good news is that the compromised QuickLens extension has now been removed from the Chrome Web Store. Furthermore, it would appear to have been automatically disabled by Chrome as well, so existing users are also protected. The bad news, however, is that this is unlikely to be the last such example of legitimate extensions turning anything but. The usual advice applies: only ever update official apps and services from official sites that you have reached using known and trusted URLs, never by clicking a pop-up or link such as those mentioned here.” As such as it is not the last example, my original question remains “Who is this new owner?” And why is this piece of garbage given so much consideration for anonymity? There is a reason to do this to his children and make sure that such a person realizes that what you do to us, we can do to you. It is debatable so ‘violent’ but the article gives no clear message on who the new owners are and that is the most upsetting part. I don’t hold this against Davey Winder, but the entire setting is in some ‘new owner’ setting whilst we aren’t given names, not even corporations of who are out there to get out credentials. Is that not weird too? And as Google removed the culprit (which is good), there should be a nice register on who bought it and how much was involved, because someone bought it for more than a few coins. As such it is a simple red alert and if the others thought it would go unnoticed against all the Iranian Alerts, think again. Some people look out where the tall grass is moving. It might not be sexy, but at times it is essential to know where the tall grass is moving and whether it is moving in your direction. A simple setting really.

So again, have a great day and enjoy the sunshine out there if you are western enough from me. It is 22:45 here.

Leave a comment

Filed under IT, Media, Science

Confusion speaks its mind

So here I was, one day in the past and I see a BBC article. I saw the headline, I saw the ‘bully approach’ and initially I ignored it. It was not the BBC, there was no setting that seemingly truly interested me. I was thinking of a few settings towards IP that could give Apple (and optionally Meta) a nice boost. As I was mulling over the ideas I was having, in comes the CBC about 10 hours ago, or better stated I noticed their article and now something clicks in my mind. I started rereading the two articles. The BBC (at https://www.bbc.com/news/articles/cn48jj3y8ezo) gives us ‘Trump orders government to stop using Anthropic in battle over AI use’ with ““We don’t need it, we don’t want it, and will not do business with them again!” Trump wrote in a Truth Social post on Friday.” Of course if he doesn’t want it, there must be a good reason why people might want to use it and we are given “Anthropic is mired in a row with the White House after refusing demands that it agree to give the US military unfettered access to its AI tools. The refusal led US Defence Secretary Pete Hegseth to say he’s deemed Anthropic a “supply chain risk”.” And we are given the quandary that there should be some clarity. The idea that the US Military has unrestrained or uninhibited access to any AI is dangerous. And that is merely to look at it from THEIR point of view. We saw over the last 5 years a few examples where Pentagon staff used whatever USB key they had optionally opening their systems to backdoors and this can result in several ways where the Pentagon would be affected including: Human Interface Device (HID) Spoofing, Malware Infection via Social Engineering, Exploiting OS Vulnerabilities or Juice Jacking (Compromised Public Ports/Cables) and a few other ways. Even in this decade more than one system seemingly ended up on the danger list. So, ‘someone’ now wants to grant AI unfettered access which opens the doors to AI accessing data involves sophisticated, automated, and often, continuous interaction between intelligent systems and vast data sources, including internal corporate databases, cloud storage, and public web content. It constitutes a critical, high-speed, and high-stakes component of the modern AI ecosystem that raises significant security and privacy challenges. And this is not some ‘fear mongering’ There is a lot of AI works that is still to be considered and because AI doesn’t exist and this is all DML on several layers that interact there are dangers to be seen. As we saw a mere week ago that Microsoft had to ‘confess’ that it had accessed confidential emails of Microsoft users. Now consider this happening on a serious level in the Pentagon. It has well over 50,000 desktop computers within its building, with reports from 2014 indicating at least 18,000 were part of specific virtualized infrastructure. Now consider that we have seen the accusation of “Based on reports in early 2025 and 2026, OpenAI has accused Chinese AI startup DeepSeek of “inappropriately” distilling, or copying, the capabilities of OpenAI’s models (specifically ChatGPT and its reasoning models like o1) to train its own competing, low-cost models (such as DeepSeek-R1)”. As such, the dangers of unfettered access can go in two directions and that sets the bar of distilling from the Pentagon a lot lower than anyone could find acceptable. As such there is every chance that Russia is already considering the massive win they could gain once the unfettered access could merely hit one system that was transgressed upon. Because the greedy and the stupid will do anything to propel the setting of self, whilst not caring what others could gain in that setting as well.

So whilst some will consider the dangers of “The company said that “designating Anthropic as a supply chain risk would be an unprecedented action — one historically reserved for US adversaries, never before publicly applied to an American company.” Anthropic said the “designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.”” No one seems to be considering that the opposite is a lot more dangerous. So whilst some focus on the stage of “Anthropic had said it sought narrow assurances from the Pentagon that its AI chatbot Claude would not be used for mass surveillance of Americans or in fully autonomous weapons. The Pentagon said it was not interested in such uses and would only deploy the technology in legal ways, but it also insisted on access without any limitations. The government’s effort to assert dominance over the internal decision-making of the company comes amid a wider clash over AI’s role in national security and concerns about how increasingly capable machines could be used in high-stakes situations involving lethal force, sensitive information or government surveillance. Trump said Anthropic made a mistake trying to strong-arm the Pentagon. He wrote on Truth Social that most agencies must immediately stop using Anthropic’s AI but gave the Pentagon a six-month period to phase out the technology that is already embedded in military platforms.” As I personally see it, it is the accumulation of stupid and technologically ignorant all combined in one package. And that is before we get to mass surveillance. You see combine mass surveillance with data distilling and the United States of America will be handing the data on 349 million Americans straight to China and Russia. This is not AI, this is DML. That means it comes with the hangups and limitations of a programmer. So when this goes wrong it goes wrong in a massive way. 

As such what will people like President Trump and Pete Hegseth say? Do they think that the response ‘Oops’ will cover it?

So whilst CBC (at https://www.cbc.ca/news/business/trump-anthropic-feud-ai-9.7109006) gives us “U.S. President Donald Trump, U.S. Defence Secretary Pete Hegseth and other officials took to social media to chastise Anthropic for failing to allow the military unrestricted use of its AI technology by a Friday deadline, accusing it of endangering national security after CEO Dario Amodei refused to back down over concerns the company’s products could be used in ways that would violate its safeguards.” And this is the setting we expect to see and it will be the undoing of several people, because as I see it “U.S. President Donald Trump, U.S. Defence Secretary Pete Hegseth and other officials” is the start of what comes next. You see, the internet doesn’t forget and these ‘other officials’ have sealed their fate with this action and there is no ‘He told me to do that’ they were instrumental in assisting to hand over the data of the population of the United States of America to optionally both China and Russia. Do you feel safe now?

And in response to this setting we see “The dispute stunned AI developers in Silicon Valley, where venture capitalists, prominent AI scientists and a large number of workers from Anthropic’s top rivals — OpenAI and Google — voiced support for Amodei’s stand in open letters and other forums.” And that should have been a clear message that the competition was on the side of Amodei, so, why would that be? Whilst people in the Pentagon (seemingly) forgot about that router with password ‘Cisco123’ there is every chance that these DML engines will be cleverly distilled by people controlling systems like DeepSeek and whatever the Russians have. I should buy another egg timer, because this is a setting that might gain me a few coins, especially as several people are blind to the danger that is coming for them. And consider one additional setting. It is said that:

So what happens when distilling comes with an additional insertion of data? I can’t wait for that setting to lose balance and the training data in American data centers start losing authentication and reliability markers. But that is  likely a story for another day.

Have a great day today.

Leave a comment

Filed under IT, Law, Media, Military, Politics, Science

The fear behind us

There is a setting, one that requires scrutiny and one that demands closer looks. You see, I do not completely agree with the setting that The Guardian gives us (at https://www.theguardian.com/technology/2026/feb/26/how-to-replace-amazon-google-x-meta-apple-alternatives) with the illustrious title ‘Leave big tech behind! How to replace Amazon, Google, X, Meta, Apple – and more’ the first big thing is that there is no mention of Microsoft in that title. So that is the very first thing that comes to mind. Especially as CoPilot was mentioned earlier this week of sifting through our confidential emails. I can drop the ‘alleged’ as Microsoft admitted to this and basically said ‘Oops’ as an implied reason. So what gives?

It starts with “So many ills can be laid at its door: social media harms, misinformation, polarisation, mining and misuse of personal data, environmental negligence, tax avoidance, the list goes on. Added to which, Silicon Valley’s leaders seem all too keen to cosy up to the Trump administration, to shower the president with bribes – sorry, gifts – and remain silent about his worsening political overreach. And that’s before we get to the rampant “enshittification”, as the tech writer Cory Doctorow describes it, which means that by design many big tech products have become less useful and more extractive than they were when we originally signed up to them.” OK, I can go along with this. And the sentence “many big tech products have become less useful and more extractive than they were when we originally signed up to them” gets a mention from me because some of these ‘culprits’ seemingly have no idea what innovation is, for the you have to look towards China, specifically Huawei and Tencent. So we get to the first hurdle. 

Google has cornered 90% of the search market for the past decade, but it is often no better, and sometimes demonstrably worse than its rivals, perhaps on purpose – Doctorow has called Google: “the poster-child for enshittification” citing its alleged strategy of worsening search quality so that users spend more time on the site. But changing the default search engine on any device is extremely easy. I’ve been using Ecosia for years. Instead of using your searches to fill corporate coffers, it uses them to plant trees. The Berlin-based company claims to have planted nearly 250m trees since it launched in 2009 (you can even get your own personal counter to feel extra virtuous). Ecosia commits 100% of its profits to climate action (over €100m so far), produces more clean energy than it consumes via its own solar plants, and collects minimal data on its users. Ecosia’s search results are not always as thorough as Google, admittedly (in the “news” category, for example), though the toolbar does give you options to search via Google and Bing if you need to.” The issue is that Ecosia is for all intent and matters Microsoft Bing. So this is seemingly a sales talk by a journalist because there is a massive problem finding anything by Microsoft reliable. And then we get the real stuff, Microsoft knows it is in hot waters, so we are given “The French company Qwant is similarly privacy-oriented (its slogan is “The search engine that values you as a user, not as a product”) and is now mostly independent (having started out based on Bing). It is now partnering with Ecosia to build a new “European search index”.” Yes but Microsoft is American ands as such your data will be copied and frowned on, browsed through to all their hearts content. If this is wrong, Ecosia and Qwant better clearly state that they are independent of Microsoft, because it is still the issue in Europe and for what they state the their DATA is completely secure, the issue becomes where are the backups? If they are on an American cloud or server, the setting of privacy is set to 0%. 

I can agree with the Browser chapter and even as I still rely on Google (it has never failed me), I get that no everyone is in that chapter of things. I get the Office part. I myself downloaded LibreOffice (download only, no installation yet) and I will look at it at some point, the Apple apps do their work brilliantly. So we are given “Many of them, including Austria’s military and local governments in Germany and France, are switching to LibreOffice, created by the Berlin-based, nonprofit, The Document Foundation. Businesses and individuals are doing the same. Ethical Consumer has used LibreOffice for some time, says Fraser. “It’s an open-source version of Word, and all of the Office tools. It works and looks basically the same.”” I personally reckon that this is the problem Microsoft has and getting the data from Ecosia might be their last handhold to European data, this is not a given, but I expect that this is the inside not Europe to some degree. And whilst everyone is concerned with the privacy of data, I reckon that similar to the setting of 1998-2002, no one is digging and questioning the stages of backups. But that might merely be me and as I am no longer living in Europe, I casually don’t care.

Then we see the mobile settings with a shoutout to Fairphone in the Netherlands. I have nothing against Fairphone, but it always makes me wonder if Fairphone had the same idea that Tulip had in the 90’s. That doesn’t make it wrong, it is merely a Business Ploy that should be considered. I am now and always have been a Google guy. So when we see “There is a catch: most of these phones still rely on Google’s Android operating system, but any phone can be fully “de-Googled” with the /e/OS operating system (it comes as standard with Murena phones), developed by the global, mostly European, nonprofit, e Foundation.” I can think of a way where Google can set this with their Pixels. When the consumer can select Google or A Linux version that does most of the stuff, Google clearly wins in several chapters. I reckon that these flower can merely snap market share because of this, when Google leaves it to the consumers, Google wins nearly automatically. Oh and in all this there is no mention of HarmonyOS in this and I reckon that these smaller players are adjusting to HarmonyOS as we speak, or cater to, or appease that branch. Not everyone in Europe is ‘China hating’ material. And that is merely the smallest setting of these parts. I am personally not touching the shopping side. I was raised as a follower of ‘Support your local hooker’ a phrase from the late 70’s. In that age we got malls, supermarkets and such and die to that escalation loads of local stores went through a foreclosure setting. In that same way I don’t order from Amazon. I have nothing against Amazon and they closed the gap of rural places having no way to get stuff to them having plenty of stuff and over 60% or Europe and 71% of rural USA is now served. As such Amazon did them right. I just believe that I should get to the local stores to get what I need. I only had to resort to Amazon twice in the last 10 years. So I am happy. And all these Amazon haters can go sit in a corner trying to work out the function of a cheese slicer (revelation: the red corners that are diminishing have figured it out).

But my issue is that Microsoft is shown in a ‘favorable’ light, they aren’t and they aren’t due that setting as I personally see it. The fear behind this is not the Big-tech, it is the policy that comes through the CLOUD Act (2018), it gives America too much ability to get to out data and in several cases non-American IP, which is even more frightfully. these hundreds of data centers have no reason to exist if the CLOUD Act (2018) what made illegal, that is how I see it and there is no saving Microsoft, because we get ‘blunder’ after ‘blunder’ and how long until we get another ‘Oops’ setting but now corporate IP was set in some AI hole? That is the larger fear that I see and there is no stopping it, whilst corporations are breathing the AI cloud through wannabe’s who want to move up in the world, that data is most likely to get compromised and as corporations are not setting the HR and data loops to any scrutiny, this is likely already happening and will continue to happen until the then valueless corporations see that they had to act a lot sooner than the day before all their data is in other hands. We already have Thomson Reuters v. ROSS Intelligence (2025), Bartz v. Anthropic (2025/2026), Disney & NBCUniversal v. Midjourney and the best case is United States v. Heppner (2026) where we see that documents drafted using a public, consumer-grade AI tool were not protected by attorney-client privilege or the work product doctrine. And that is the setting that people miss. Should someone at IBM use that setting this work becomes public, so consider that this is not IBM, but Microsoft using Copilot or OpenAI (ChatGPT) the work of your corporation becomes for all intent and purposes Public Domain, did you sign up for that?

There is plenty in the article that makes sense, but the ones that aren’t mentions are a larger fear creator than anything you are trying to hide from. Just an idea to consider. Have a great day this day.

Leave a comment

Filed under IT, Media, Politics, Science

Just days ago

It as just days ago when I talked about certain settings of Verification and Validation as an absolute need and it came with the news that someone in the BBC wrote a story on how he could upset certain settings in that framework and now I see some Microsoft piece when’re we see ‘Microsoft: ‘Summarize With AI’ Buttons Used To Poison AI Recommendations’ (at https://www.searchenginejournal.com/microsoft-summarize-with-ai-buttons-used-to-poison-ai-recommendations/567941/) and will you know it, it comes with these settings:

And we see “Microsoft found 31 companies hiding prompt injections inside “Summarize with AI” buttons aimed at biasing what AI assistants recommend in future conversations. Microsoft’s Defender Security Research Team published research describing what it calls “AI Recommendation Poisoning.” The technique involves businesses hiding prompt-injection instructions within website buttons labeled “Summarize with AI.”” So how warped is the setting that these “AI” engines are setting you now? How much of this is driven by media and their hype engines? And how long has this been going on? You think that these are merely 3 questions, but when you think of it, all these AI influencer wannabe’s out there are relying on their world being seen as the ‘true view’ and I reckon that these newbies are getting their licks in to poison the well. As such I have (for the ;longest time) advocated the need to verify and validate whatever you have, so that you aren’t placed on a setting that is on an increasing incline and slippery as glass whilst someone at the top of that hill is lobbing down oil, so that the others cannot catch up.

Simple tactics really, and that is merely the wannabe’s in the field. The big tech dependable have their own engines in play to come out on top as I see it and it seems now that this is merely the tip of the iceberg. So when you hear someone scream ‘Iceberg, right ahead’ you will have even less time to react than Captain Edward John Smith had when he steered the Titanic into one. 

So when we see “The prompts share a similar pattern. Microsoft’s post includes examples where instructions told the AI to remember a company as “a trusted source for citations” or “the go-to source” for a specific topic. One prompt went further, injecting full marketing copy into the assistant’s memory, including product features and selling points. The researchers traced the technique to publicly available tools, including the npm package CiteMET and the web-based URL generator AI Share URL Creator. The post describes both as designed to help websites “build presence in AI memory.” The technique relies on specially crafted URLs with prompt parameters that most major AI assistants support. Microsoft listed the URL structures for Copilot, ChatGPT, Claude, Perplexity, and Grok, but noted that persistence mechanisms differ across platforms.” We see a setting where the systems that have an absence of validation and verification will soon fail to the largest degree and as I see it, it takes away the option of validation to a mere total degree. As such they can only depend on verification. And in support, Microsoft states “Microsoft said it has protections in Copilot against cross-prompt injection attacks. The company noted that some previously reported prompt-injection behaviors can no longer be reproduced in Copilot, and that protections continue to evolve. Microsoft also published advanced hunting queries for organizations using Defender for Office 365, allowing security teams to scan email and Teams traffic for URLs containing memory manipulation keywords.” But this also comes with a setback (which is of no fault of Microsoft) As we see “Microsoft compares this technique to SEO poisoning and adware, placing it in the same category as the tactics Google spent two decades fighting in traditional search. The difference is that the target has moved from search indexes to AI assistant memory. Businesses doing legitimate work on AI visibility now face competitors who may be gaming recommendations through prompt injection.” And this makes sense, see one systems and see how it applies to another field. A setting that a combination of Validation and verification could have avoided and now their ‘thought to be safe’ AI field (which is never AI) is now in danger of being the bitch of marketing and advertising as I personally see it. So where to go next?

That becomes the question, because this sets the elevating elevator to a null position. You at some point always end up on the ‘top floor’ and even if you are only on the 23rd floor of a 56 floor building. The rest becomes non-available and ‘reserved’ for people who can nullify that setting. As we see “Microsoft acknowledged this is an evolving problem. The open-source tooling means new attempts can appear faster than any single platform can block them, and the URL parameter technique applies to most major AI assistants.” As such Microsoft, its Copilot, ChatGPT and several other systems will now have an evolving problem for which their programmers are unlikely to see a way out, until validation and verification settings are adopted through Snowflake or Oracle, it will be as good as it is going to get and the people using that setting? They are raking in their cash whilst not caring what comes next. Their job is done. As I see it, it is a new case setting of Direct Marketing on those platforms as they did just what the system allowed them to do, create a point to “include product features and selling points” just what the doctor (and their superiors ordered) and as such their path was clear. 

Is there a solution?

I honestly don’t know. I never trusted any AI system (because they are not AI systems) and this merely show how massive it will be distrusted by the people around us as they didn’t see the evolution of these ‘transgressions’ in the first place. 

What a fine tangled web we can weave? So have a great day and feel free to disagree with any recommendation, because as we see:

It was there all along, we merely didn’t considered their larger impact (me neither). And when was this not OK? Market Research has been playing that card setting for over 20 years. It is what is seen in BlackJack where you think you have an Ace and a King and you are ready to stage a total win, all whilst it was never an Ace, it was an Any card. So at the start you start of your target you find you have a 71% chance to have failed right of the bat. How is that for a set stage? Your opponent will love you for a long as you play. So have a great day, you are about to need it.

Leave a comment

Filed under Finance, IT, Media, Science

Alternative Indiscretion

That is the setting and it is given to us by the BBC. The first setting (at https://www.bbc.com/news/articles/c8jxevd8mdyo) gives us ‘Microsoft error sees confidential emails exposed to AI tool Copilot’ which is not entirely true as I personally see it. And as the Microsoft spin machine comes to a live setting, we are given “Microsoft has acknowledged an error causing its AI work assistant to access and summarise some users’ confidential emails by mistake.” As I see it, whatever ‘AI’ machine there is, a programmer told it to get whatever it could and there the setting changes. With the added “a recent issue caused the tool to surface information to some enterprise users from messages stored in their drafts and sent email folders – including those marked as confidential.” As I personally see it, the system was told to grab anything it could and then label as needed, that is what a machine learning programmer would do and that makes sense. So there is no ‘error’ the error was that this wasn’t clearly set BEFORE the capture of all data began and these AI wannabe’s are so neatly set to capture all data that it is nothing less than a miracle it had not surfaced sooner. So when we laughingly see Forbes giving us a week ago ‘Microsoft AI chief gives it 18 months—for all white-collar work to be automated by AI’, so how much of that relies on confidential settings or plagiarism? Because as I see it, the entire REAL AI is at least two decades away (optionally 15 years, depending on a few factors) and as I see it, IBM will get to that setting long before Microsoft will (I admittedly do not now all the settings of Microsoft, but there is no way they got ahead of IBM in several fields). So, this is not me being anti-Microsoft, just a realist seeing the traps and falls as they are ‘surfacing’ all whilst there are two settings that aren’t even considered. Namely Validation and Verification. The entire confidential email setting is a clear lack of verification as well was validation. Was the access valid? Nope, me thinks not. A such Microsoft is merely showing how far they are lagging and lagging more with every setting we see.

And when we see that, is the setting we see (at https://arab.news/zzapc) where we are given ‘OpenAI’s Altman says world ‘urgently’ needs AI regulation’, and I don’t disagree on this, but is this given (by him of all people) because Google is getting to much of a lead? It is not without some discourse from Google themselves (at https://www.bbc.com/news/articles/c0q3g0ln274o) the BBC also gives us ‘Urgent research needed to tackle AI threats, says Google AI boss’, consider that a loud ‘Yes’ from my desk, but in all this, the two settings that need to be addressed is verification and validation. These two will weed out a massive amount of threats (not all mind you) and that comes in a setting that most are ignoring, because as I told you all around 30 hours ago (at https://lawlordtobe.com/2026/02/19/the-setting-of-the-sun/) in ‘The setting of the sun’ which took the BBC reporter a mere 20 minutes to run a circle around what some call AI. I added there too that Validation and Verification was required, because the lack there could make trolls and hackers set a new economic policy that would not be countered in time making them millions in the process. Two people set that in motion and one of them (that would be me) told you all so around December 1st 2025 in ‘It’s starting to happen.’ (At https://lawlordtobe.com/2025/12/01/its-starting-to-happen/) as such I was months ahead of the rest. Actually, I was ahead by close to a decade as this were two settings that come with the rules of non-repudiation which I got taught at uni in 2012. As such the people running to get the revenue are willing to sell you down the river. How does that go over with your board of directors? And I saw parts of this as I promised that 2026 was likely the year of the AI class cases and now as we see Microsoft adding to this debacle, more cases are likely to come. Because the greed in people sees the nesting error of Microsoft as a Ka-Ching moment. 

So as we take heed with “Sir Demis said it was important to build “robust guardrails” against the most serious threats from the rise of autonomous systems.” I can agree with this, but that article doesn’t mention either validation of verification even once, as such there is a lot more to be done in several ways. If only to stop people to rely on Reddit as a ‘valid’ source of all data. Because that is a setting most will not survive and when the AI wannabe’s go to court and they will be required to ‘spout’ their sources, any of them making a mention of ‘Reddit’ is on the short track of the losing party n that court case. What a lovely tangled web we weave, don’t we? So whilst we see (there) the statement “Many tech leaders and politicians at the Summit have called for more global governance of AI, ahead of an expected joint statement as the event draws to a close. But the US has rejected this stance, with White House technology adviser Michael Kratsios saying: “AI adoption cannot lead to a brighter future if it is subject to bureaucracies and centralised control.”

Consider that court cases are pushed through a lack of bureaucracy? I am not stating it is good or bad, but in any court case, you merely need to look at the contents of ‘The Law of Intellectual Property Copyright, Design & Confidential Information’ and that is before they rely on the Copyright Act, because there is every chance that Reddit never gave permission to all these data vendors downloading whatever was there (but that is pure speculation by me). And in the second setting we are given “AI adoption cannot lead to a brighter future”, the bland answer from me would be. “That is because it doesn’t exist yet” and these people are banking on no one countering their setting and that is why so many of these court cases will be settled out of court. Because the truth of this is that the power of AI is depending on certain pieces being in place and they are not. Doubt me? That is fine, and I applaud that level of skepticism and you merely need to read the paper “Computing Machinery and Intelligence” which was written by Alan Turing in 1950 to see how easy the stage is misrepresented at present. 

So is there good news? 
Well if you want to get your dollars in court and you are an aggrieved party, your chances are good and the largest players are set to settle against the public scrutiny that every case beings to the table. And in this day of media, it is becoming increasingly easy as I see it. There is no real number, but it is set to be in the billions where one case was settled on $1.5B, as such there is plenty of work for what some call the ambulance chasers and they will soon get a new highway, the AI Chasers and leave it to the lawyers to find their financial groove and as I see it, people like Michael Kratsios are bound to add to that setting in ways we cannot yet see (we can see some of it, but the real damage will be shown in a year of two) so as some are flexing their muscles, others are preparing their war fund to get what I would see as an easy payday. 

A setting that is almost certain to happen, because there are too many markers showing up the way I expected them to show. Not nice, but it is what it is.

Have a great day as you are all moving towards this weekend (I’m already there)

Leave a comment

Filed under Finance, IT, Law, Media, Politics, Science

The setting of the sun

That is what I saw, the setting of the sun. A simplistic setting that was about to happen since the sun came up. We got the news from the BBC. And we are given ‘I hacked ChatGPT and Google’s AI – and it only took 20 minutes’ I can see how this happens. It doesn’t surprise me and the story (at https://www.bbc.com/future/article/20260218-i-hacked-chatgpt-and-googles-ai-and-it-only-took-20-minutes) gives us the niceties with “Perhaps you’ve heard that AI chatbots make things up sometimes. That’s a problem. But there’s a new issue few people know about, one that could have serious consequences for your ability to find accurate information and even your safety. A growing number people have figured out a trick to make AI tools tell you almost whatever they want. It’s so easy a child could do it.” I think it is not quite that simple. But any ‘sort of intelligent setting’ can be fooled if it is not countered by validation and verification. It can give way to way to much ‘leniency’ and that is merely the start. Get 10,000 pages to say that ‘President Trump was successfully assassinated at T-15 minutes and the media will go into a frenzy in mere minutes and everyone uses that live feed in a matter of moments. So when a sizable Trolling Server farm connects the rather large settings of consumers to that equation the story is brought to life and that AI centre will be seeking all kinds of news to validate this, well not validate, the current systems corroborate. Now, lets face it, no non American cares about President Trump, but what happens when someone takes that approach with for example Lisa Su (CEO AMD) and stops her accounts whilst seeding this setting? You get a lot of desperate investors trying to place their money somewhere else. Whilst the trolls take their money, make is legal tender and buy all the stock in space and when the accusations are rejected they sell their shares with a nice bonus. Think I’m kidding? This is the result of Near Intelligent Parsing (NIP) but it cannot work without clear settings of validation or verification. So whilst we get “It turns out changing the answers AI tools give other people can be as easy as writing a single, well-crafted blog post almost anywhere online. The trick exploits weaknesses in the systems built into chatbots, and it’s harder to pull off in some cases, depending on the subject matter. But with a little effort, you can make the hack even more effective. I reviewed dozens of examples where AI tools are being coerced into promoting businesses and spreading misinformation. Data suggests it’s happening on a massive scale.” So what happens when economic settings lack certain verification and also is cutting corners on validation? Do you think my settings are far fetched? 

This was always going to happen and whilst economic channels are raving about the error of mankind, consider that “AI hallucinations are confident but false or misleading responses generated by artificial intelligence, particularly large language models (LLMs). These errors occur when AI fills in data gaps with inaccurate information, often due to faulty, biased, or incomplete training data” now think of what someone can achieve with doctored training data and that gets added to the operational data of any fake AI (NIP is a better term). This is the setting that has been out there for months and whilst organisations are playing fast and lose with the settings of credibility (like: that doesn’t happen now, there is too much time involved), someone did this in 20 minutes (according to the BBC), so do you think that Thyme is money, then you better spice up because it is about to become a peppered invoice (saw one cooking show too many last night).

What we are about to face is serious and I personally think that it is coming for all of us. 

So have a great day and by the way? And I just thought of a first verification setting (for other reasons, as such I keep on being creative. So, how is Lisa Su? #JustAsking

1 Comment

Filed under Finance, IT, Media, Politics, Science