Tag Archives: CoPilot

Just days ago

It as just days ago when I talked about certain settings of Verification and Validation as an absolute need and it came with the news that someone in the BBC wrote a story on how he could upset certain settings in that framework and now I see some Microsoft piece when’re we see ‘Microsoft: ‘Summarize With AI’ Buttons Used To Poison AI Recommendations’ (at https://www.searchenginejournal.com/microsoft-summarize-with-ai-buttons-used-to-poison-ai-recommendations/567941/) and will you know it, it comes with these settings:

And we see “Microsoft found 31 companies hiding prompt injections inside “Summarize with AI” buttons aimed at biasing what AI assistants recommend in future conversations. Microsoft’s Defender Security Research Team published research describing what it calls “AI Recommendation Poisoning.” The technique involves businesses hiding prompt-injection instructions within website buttons labeled “Summarize with AI.”” So how warped is the setting that these “AI” engines are setting you now? How much of this is driven by media and their hype engines? And how long has this been going on? You think that these are merely 3 questions, but when you think of it, all these AI influencer wannabe’s out there are relying on their world being seen as the ‘true view’ and I reckon that these newbies are getting their licks in to poison the well. As such I have (for the ;longest time) advocated the need to verify and validate whatever you have, so that you aren’t placed on a setting that is on an increasing incline and slippery as glass whilst someone at the top of that hill is lobbing down oil, so that the others cannot catch up.

Simple tactics really, and that is merely the wannabe’s in the field. The big tech dependable have their own engines in play to come out on top as I see it and it seems now that this is merely the tip of the iceberg. So when you hear someone scream ‘Iceberg, right ahead’ you will have even less time to react than Captain Edward John Smith had when he steered the Titanic into one. 

So when we see “The prompts share a similar pattern. Microsoft’s post includes examples where instructions told the AI to remember a company as “a trusted source for citations” or “the go-to source” for a specific topic. One prompt went further, injecting full marketing copy into the assistant’s memory, including product features and selling points. The researchers traced the technique to publicly available tools, including the npm package CiteMET and the web-based URL generator AI Share URL Creator. The post describes both as designed to help websites “build presence in AI memory.” The technique relies on specially crafted URLs with prompt parameters that most major AI assistants support. Microsoft listed the URL structures for Copilot, ChatGPT, Claude, Perplexity, and Grok, but noted that persistence mechanisms differ across platforms.” We see a setting where the systems that have an absence of validation and verification will soon fail to the largest degree and as I see it, it takes away the option of validation to a mere total degree. As such they can only depend on verification. And in support, Microsoft states “Microsoft said it has protections in Copilot against cross-prompt injection attacks. The company noted that some previously reported prompt-injection behaviors can no longer be reproduced in Copilot, and that protections continue to evolve. Microsoft also published advanced hunting queries for organizations using Defender for Office 365, allowing security teams to scan email and Teams traffic for URLs containing memory manipulation keywords.” But this also comes with a setback (which is of no fault of Microsoft) As we see “Microsoft compares this technique to SEO poisoning and adware, placing it in the same category as the tactics Google spent two decades fighting in traditional search. The difference is that the target has moved from search indexes to AI assistant memory. Businesses doing legitimate work on AI visibility now face competitors who may be gaming recommendations through prompt injection.” And this makes sense, see one systems and see how it applies to another field. A setting that a combination of Validation and verification could have avoided and now their ‘thought to be safe’ AI field (which is never AI) is now in danger of being the bitch of marketing and advertising as I personally see it. So where to go next?

That becomes the question, because this sets the elevating elevator to a null position. You at some point always end up on the ‘top floor’ and even if you are only on the 23rd floor of a 56 floor building. The rest becomes non-available and ‘reserved’ for people who can nullify that setting. As we see “Microsoft acknowledged this is an evolving problem. The open-source tooling means new attempts can appear faster than any single platform can block them, and the URL parameter technique applies to most major AI assistants.” As such Microsoft, its Copilot, ChatGPT and several other systems will now have an evolving problem for which their programmers are unlikely to see a way out, until validation and verification settings are adopted through Snowflake or Oracle, it will be as good as it is going to get and the people using that setting? They are raking in their cash whilst not caring what comes next. Their job is done. As I see it, it is a new case setting of Direct Marketing on those platforms as they did just what the system allowed them to do, create a point to “include product features and selling points” just what the doctor (and their superiors ordered) and as such their path was clear. 

Is there a solution?

I honestly don’t know. I never trusted any AI system (because they are not AI systems) and this merely show how massive it will be distrusted by the people around us as they didn’t see the evolution of these ‘transgressions’ in the first place. 

What a fine tangled web we can weave? So have a great day and feel free to disagree with any recommendation, because as we see:

It was there all along, we merely didn’t considered their larger impact (me neither). And when was this not OK? Market Research has been playing that card setting for over 20 years. It is what is seen in BlackJack where you think you have an Ace and a King and you are ready to stage a total win, all whilst it was never an Ace, it was an Any card. So at the start you start of your target you find you have a 71% chance to have failed right of the bat. How is that for a set stage? Your opponent will love you for a long as you play. So have a great day, you are about to need it.

Leave a comment

Filed under Finance, IT, Media, Science

Alternative Indiscretion

That is the setting and it is given to us by the BBC. The first setting (at https://www.bbc.com/news/articles/c8jxevd8mdyo) gives us ‘Microsoft error sees confidential emails exposed to AI tool Copilot’ which is not entirely true as I personally see it. And as the Microsoft spin machine comes to a live setting, we are given “Microsoft has acknowledged an error causing its AI work assistant to access and summarise some users’ confidential emails by mistake.” As I see it, whatever ‘AI’ machine there is, a programmer told it to get whatever it could and there the setting changes. With the added “a recent issue caused the tool to surface information to some enterprise users from messages stored in their drafts and sent email folders – including those marked as confidential.” As I personally see it, the system was told to grab anything it could and then label as needed, that is what a machine learning programmer would do and that makes sense. So there is no ‘error’ the error was that this wasn’t clearly set BEFORE the capture of all data began and these AI wannabe’s are so neatly set to capture all data that it is nothing less than a miracle it had not surfaced sooner. So when we laughingly see Forbes giving us a week ago ‘Microsoft AI chief gives it 18 months—for all white-collar work to be automated by AI’, so how much of that relies on confidential settings or plagiarism? Because as I see it, the entire REAL AI is at least two decades away (optionally 15 years, depending on a few factors) and as I see it, IBM will get to that setting long before Microsoft will (I admittedly do not now all the settings of Microsoft, but there is no way they got ahead of IBM in several fields). So, this is not me being anti-Microsoft, just a realist seeing the traps and falls as they are ‘surfacing’ all whilst there are two settings that aren’t even considered. Namely Validation and Verification. The entire confidential email setting is a clear lack of verification as well was validation. Was the access valid? Nope, me thinks not. A such Microsoft is merely showing how far they are lagging and lagging more with every setting we see.

And when we see that, is the setting we see (at https://arab.news/zzapc) where we are given ‘OpenAI’s Altman says world ‘urgently’ needs AI regulation’, and I don’t disagree on this, but is this given (by him of all people) because Google is getting to much of a lead? It is not without some discourse from Google themselves (at https://www.bbc.com/news/articles/c0q3g0ln274o) the BBC also gives us ‘Urgent research needed to tackle AI threats, says Google AI boss’, consider that a loud ‘Yes’ from my desk, but in all this, the two settings that need to be addressed is verification and validation. These two will weed out a massive amount of threats (not all mind you) and that comes in a setting that most are ignoring, because as I told you all around 30 hours ago (at https://lawlordtobe.com/2026/02/19/the-setting-of-the-sun/) in ‘The setting of the sun’ which took the BBC reporter a mere 20 minutes to run a circle around what some call AI. I added there too that Validation and Verification was required, because the lack there could make trolls and hackers set a new economic policy that would not be countered in time making them millions in the process. Two people set that in motion and one of them (that would be me) told you all so around December 1st 2025 in ‘It’s starting to happen.’ (At https://lawlordtobe.com/2025/12/01/its-starting-to-happen/) as such I was months ahead of the rest. Actually, I was ahead by close to a decade as this were two settings that come with the rules of non-repudiation which I got taught at uni in 2012. As such the people running to get the revenue are willing to sell you down the river. How does that go over with your board of directors? And I saw parts of this as I promised that 2026 was likely the year of the AI class cases and now as we see Microsoft adding to this debacle, more cases are likely to come. Because the greed in people sees the nesting error of Microsoft as a Ka-Ching moment. 

So as we take heed with “Sir Demis said it was important to build “robust guardrails” against the most serious threats from the rise of autonomous systems.” I can agree with this, but that article doesn’t mention either validation of verification even once, as such there is a lot more to be done in several ways. If only to stop people to rely on Reddit as a ‘valid’ source of all data. Because that is a setting most will not survive and when the AI wannabe’s go to court and they will be required to ‘spout’ their sources, any of them making a mention of ‘Reddit’ is on the short track of the losing party n that court case. What a lovely tangled web we weave, don’t we? So whilst we see (there) the statement “Many tech leaders and politicians at the Summit have called for more global governance of AI, ahead of an expected joint statement as the event draws to a close. But the US has rejected this stance, with White House technology adviser Michael Kratsios saying: “AI adoption cannot lead to a brighter future if it is subject to bureaucracies and centralised control.”

Consider that court cases are pushed through a lack of bureaucracy? I am not stating it is good or bad, but in any court case, you merely need to look at the contents of ‘The Law of Intellectual Property Copyright, Design & Confidential Information’ and that is before they rely on the Copyright Act, because there is every chance that Reddit never gave permission to all these data vendors downloading whatever was there (but that is pure speculation by me). And in the second setting we are given “AI adoption cannot lead to a brighter future”, the bland answer from me would be. “That is because it doesn’t exist yet” and these people are banking on no one countering their setting and that is why so many of these court cases will be settled out of court. Because the truth of this is that the power of AI is depending on certain pieces being in place and they are not. Doubt me? That is fine, and I applaud that level of skepticism and you merely need to read the paper “Computing Machinery and Intelligence” which was written by Alan Turing in 1950 to see how easy the stage is misrepresented at present. 

So is there good news? 
Well if you want to get your dollars in court and you are an aggrieved party, your chances are good and the largest players are set to settle against the public scrutiny that every case beings to the table. And in this day of media, it is becoming increasingly easy as I see it. There is no real number, but it is set to be in the billions where one case was settled on $1.5B, as such there is plenty of work for what some call the ambulance chasers and they will soon get a new highway, the AI Chasers and leave it to the lawyers to find their financial groove and as I see it, people like Michael Kratsios are bound to add to that setting in ways we cannot yet see (we can see some of it, but the real damage will be shown in a year of two) so as some are flexing their muscles, others are preparing their war fund to get what I would see as an easy payday. 

A setting that is almost certain to happen, because there are too many markers showing up the way I expected them to show. Not nice, but it is what it is.

Have a great day as you are all moving towards this weekend (I’m already there)

Leave a comment

Filed under Finance, IT, Law, Media, Politics, Science

Alignments?

Less than 24 hours ago I wrote about Microsoft and the statement I gave there, namely “When you need to appease 400,000 partners things go wrong, they always do. How is anyones guess but whilst Microsoft is all focussed on the letter of the law and their revenue” led to a few questions. So, how is 400,000 partners an issue and the 12,000 partners of Salesforce are not? Well, I never said that 12,000 partners are not a problem, but as I see it the 400,000 are. 

To get where I am going, a few definition are needed. A partner (in IT) is set to “A partnership when it comes to IT is within the IT sphere and has mutual or at least some value for both companies.” But here the issue starts. You see, some have a somewhat more defined setting “In some mild cases, there are a few well-intentioned and hard-working partners who are just out of the loop. In more extreme cases, certain partners are not bought in, are not being held accountable, and are negatively impacting performance.” This is where the problem starts. Partners have an alignment to you, but they also have their own agenda. Microsoft can make all the claims they want, but this is reality. So lets get a useful presentation image. 

So see this boat, that is the Micro boat (a very soft presentation) the goal is the 100% mark, right on course. Now consider that in a polarising setting there are two directions, And the group of 400,000 is split up. In this we get that one group is larger and it has the breaching impact of the good ship Microsoft coursing to the right. Reality gives us that there will be be clusters in all directions. 

Some ahead to the left or the right, but those behind the ship will also slow it down with all kinds of budget overruns. No matter how good the Microsoft agreements are, there will always be interest groups for THEIR interest trying to ‘steer’ the ship more in their direction. As such 400,000 partners is (as I see it) folly. Revenue and greed will only help anyone so far, as I see it, Microsoft has had its problems. I reckon that not all the news is sincere and completely valid. Some were (as I personally see it) issues with alignment. Their might not have been drastic but there will have been issues. That is my point of view and in business intelligence I have seen my share of ‘issues’ not all of them drastic but plenty of them with some kind of impact. 

Take this as well as the news we saw through Wired and we get a much larger issue and now as I personally see it, partners could become debilitating. Mess with a partners revenue stream and things go pear shaped really fast. We see this 1 hours ago when we are told “Nvidia Loses $470 Billion in Value in a Week. Should Investors Be Worried? · The market as a whole is shaky · Nvidia remains in an extremely solid position.” Really? At what point does a firm remain in a solid position when they lose $470,000,000,000 in a week? Now take this setting (which might be a temporary thing) and take it to the next level. A major side to the so called AI stage. That firm loses four-hundred and seventy BILLION dollars. That’s about 20%, so this was a simple dip which recovered in mere minutes. So at what point and why did it drop to that degree? And as I see it, any partner that does not react is on a fools errand. Now consider that 400,000 partners call Microsoft at that point to learn what THEIR impact might be. So a software vendor needs to appease 400,000 partners. And I couldn’t get support (in the past) for hours. So how does this compute? Well look at the first image. These partners will not be in one direction, but in dozens of directions. So are you catching on now? So take that and News by TechTarget giving us ‘Understand Microsoft Copilot security concerns’ and the underlying text “Microsoft Copilot can improve end-user productivity, but it also has the potential to create security and data privacy issues.”and that with the news at Wired (see previous article) gives a lot more weight to “the potential to create security and data privacy issues” and now, what will the partners do? How many will optionally panic? Now watch the good ship Microsoft slow down and drop their anchors for the storm (optionally in a teacup) recede. What is the bill belonging to such a knee-jerk reaction? 

You tell me, but there will be a reaction. As I see it, they either have 400,000 customers (optionally non paying) and they will not make a sound, but it makes Microsoft seem more important, or they have 400,000 real partners and you see what I described above. I am merely throwing the terms they publish (via media). You can’t have it both ways and it all ends with the setting of Alignment. I do not know a real good read on the alignment of customers versus partners. But one gets you revenue and the other gives you a smoking hand grenade. You tell me what you prefer to deal with. 

OK, not the most positive writing, but it came from a question that gave ma additional pause to think. 

Have a great Sunday (Vancouver) and I am moving towards Monday a present (in 40 minutes).

Leave a comment

Filed under IT, Science

Poised to deliver critique

That is my stance at present. It might be a wrong position to have, but it comes from a setting of several events that come together at this focal point. We all have it, we are all destined to a stage of negativity thought speculation or presumption. It is within all of us and my article 20 hours ago on Microsoft woke something up within me. So I will take you on a slightly bumpy ride.

The first step is seen through the BBC (at https://www.bbc.com/worklife/article/20240905-microsoft-ai-interview-bbc-executive-lounge) where we get ‘Microsoft is turning to AI to make its workplace more inclusive’ and we are given “It added an AI powered chatbot into its Bing search engine, which placed it among the first legacy tech companies to fold AI into its flagship products, but almost as soon as people started using it, things went sideways.” With the added “Soon, users began sharing screenshots that appeared to show the tool using racial slurs and announcing plans for world domination. Microsoft quickly announced a fix, limiting the AI’s responses and capabilities.” Here we see the collective thoughts an presumptions I had all along. AI does not (yet) exist. How do you live with “Microsoft quickly announced a fix”? We can speculate whether the data was warped, it was not defined correctly. Or it is a more simple setting of programmer error. And when an AI is that incorrect does it have any reliability? Consider the old data view we had in the early 90’s “Garbage In, Garbage Out”. Then. We are offered “Microsoft says AI can be a tool to promote equity and representation – with the right safeguards. One solution it’s putting forward to help address the issue of bias in AI is increasing diversity and inclusion of the teams building the technology itself”, as such consider this “promote equity and representation – with the right safeguards” Is that the use of AI? Or is it the option of deeper machine learning using an LLM model? An AI with safeguards? Promote equity and representation? If the data is there, it might find reliable triggers if it knows where or what to look for. But the model needs to be taught and that is where data verification comes in, verified data leads to a validated model. As such to promote equity and presentation the dat needs to understand the two settings. Now we get the harder part “The term “equity” refers to fairness and justice and is distinguished from equality: Whereas equality means providing the same to all, equity means recognising that we do not all start from the same place and must acknowledge and make adjustments to imbalances.” Now see the term equity being used in all kinds of places and in real estate it means something different. Now what are the chances people mix these two up? How can you validate data when the verification is bungled? It is the simple singular vision that Microsoft people seem to forget. It is mostly about the deadline and that is where verification stuffs up. 

Satya Nadella is about technology that understands us and here we get the first problem. When we consider that “specifically large-language models such as ChatGPT – to be empathic, relevant and accurate, McIntyre says, they needs to be trained by a more diverse group of developers, engineers and researchers.” As I see it, without verification you have no validation and you merely get a bucket of data where everything is collected and whatever the result of it becomes an automated mess, hence my objection to it. So as we are given “Microsoft believes that AI can support diversity and inclusion (D&I) if these ideals are built into AI models in the first place”, we need to understand that the data doesn’t support it yet and to do this all data needs to be recollected and properly verified before we can even consider validating it. 

Then we get article 2 which I talked about a month ago the Wired article (at https://www.wired.com/story/microsoft-copilot-phishing-data-extraction/) we see the use of deeper machine learning where we are given ‘Microsoft’s AI Can Be Turned Into an Automated Phishing Machine’, yes a real brain bungle. Microsoft has a tool and criminals use it to get through cloud accounts. How is that helping anyone? The fact that Microsoft did not see this kink in their trains of thought and we are given “Michael Bargury is demonstrating five proof-of-concept ways that Copilot, which runs on its Microsoft 365 apps, such as Word, can be manipulated by malicious attackers” a simple approach of stopping the system from collecting and adhering to criminal minds. Whilst Windows Central gives us ‘A former security architect demonstrates 15 different ways to break Copilot: “Microsoft is trying, but if we are honest here, we don’t know how to build secure AI applications”’ beside the horror statement “Microsoft is trying” we get the rather annoying setting of “we don’t know how to build secure AI applications”. And this isn’t some student. Michael Bargury is an industry expert in cybersecurity seems to be focused on cloud security. So what ‘expertise’ does Microsoft have to offer? People who were there 3 weeks ago were shown 15 ways to break copilot and it is all over their 365 applications. At this stage Microsoft wants to push out broken if not an unstable environment where your data resides. Is there a larger need to immediately switch to AWS? 

Then we get a two parter. In the first part we see (at https://www.crn.com.au/news/salesforces-benioff-says-microsoft-ai-has-disappointed-so-many-customers-611296) CRN giving us the view of Marc Benioff from Salesforce giving us ‘Microsoft AI ‘has disappointed so many customers’’ and that is not all. We are given ““Last quarter alone, we saw a customer increase of over 60 per cent, and daily users have more than doubled – a clear indicator of Copilot’s value in the market,” Spataro said.” Words from Jared Spataro, Microsoft’s corporate vice president. All about sales and revenue. So where is the security at? Where are the fixes at? So we are then given ““When I talk to chief information officers directly and if you look at recent third-party data, organisations are betting on Microsoft for their AI transformation.” Microsoft has more than 400,000 partners worldwide, according to the vendor.” And here we have a new part. When you need to appease 400,000 partners things go wrong, they always do. How is anyones guess but whilst Microsoft is all focussed on the letter of the law and their revenue it is my speculated view that corners are cut on verification and validation (a little less on the second factor). And the second part in this comes from CX Today (at https://www.cxtoday.com/speech-analytics/microsoft-fires-back-rubbishes-benioffs-copilot-criticism/) where we are given ‘Microsoft Fires Back, Rubbishes Benioff’s Copilot Criticism’ with the text “Jared Spataro, Microsoft’s Corporate Vice President for AI at Work, rebutted the Salesforce CEO’s comments, claiming that the company had been receiving favourable feedback from its Copilot customers.” At this point I want to add the thought “How was that data filtered?” You see the article also gives us “While Benioff can hardly be viewed as an objective voice, Inc. Magazine recently gave the solution a D – rating, claiming that it is “not generating significant revenue” for its customers – suggesting that the CEO may have a point” as well as “despite Microsoft’s protestations, there have been rumblings of dissatisfaction from Copilot users” when the dust settles, I wonder how Microsoft will fare. You see I state that AI does not (yet) exist. The truth is that generative AI can have a place. And when AI is here, when it is actually here not many can use it. The hardware is too expensive and the systems will need close to months of testing. These new systems that is a lot, it would take years for simple binary systems to catch up. As such these LLM deeper machine learning systems will have a place, but I have seen tech companies fire up sales people and get the cream of it, but the customers will need a new set of spectacles to see the real deal. The premise that I see is that these people merely look at the groups they want, but it tends to be not so filtered and as such garbage comes into these systems. And that is where we end up with unverified and unvalidated data points. And to give you an artistic view consider the following when we use a one point perspective that is set to “a drawing method that shows how things appear to get smaller as they get further away, converging towards a single “vanishing point” on the horizon line” So that drawing might have 250,000 points. Now consider that data is unvalidated. That system now gets 5,000 extra floating points. What happens when these points invade the model? What is left of your art work? Now consider that data sets like this have 15,000,000 data points and every data point has 1,000,000 parameters. See the mess you end up with? Now go look into any system and see how Microsoft verifies their data. I could not find any white papers on this. A simple customer care point of view, I have had that for decades and Jared Spataro as I see it seemingly does not have that. He did not grace his speech with the essential need of data verification before validation. That is a simple point of view and it is my view that Microsoft will come up short again and again. So as I (simplistically) see it. Is by any chance, Jared Spataro anything more than a user missing Microsoft value at present?

Have a great day.

1 Comment

Filed under Finance, IT, Media, Science