Tag Archives: Michael Bargury

And there was more

You see three days ago (merely two days and change) I wrote ‘A story in two parts’ (at https://lawlordtobe.com/2025/01/17/a-story-in-two-parts/) where I laird bare a few of the ‘shortcomings’ of Microsoft. However there was more. I had initially chosen the title ‘The color is blue’ yet I decided that the premise is not about Azure, there is more to it all. You see Fierce Network gives us ‘Google Cloud could overtake Microsoft’s No. 2 cloud position this year’, which sounds nice. However there are a few issues with that. We will all love ““Google Cloud is already nearly equal to Microsoft Azure in revenues, and has a higher revenue growth rate than Microsoft Azure,” Gold wrote in a research note. “By the end of the next four years of revenue growth, we project Google Cloud’s revenues will be 55% greater than Azure at current growth rates.”” The research note gives the proper “Based on the Average of Past Two Years Revenue Growth Rate

Assuming Same Growth Rate Going Forward” so that is good, but it does not despair from “By the end of the next 4 years of revenue growth, we project Google Cloud’s revenues will be 55% greater than Azure at current growth rates.” Yet this setting does not account that someone at Microsoft ‘suddenly’ takes an innovative step towards (who knows), the second setting is that the technology premise stays where it is. Huawei with their HarmonyOS is another factor, the Chinese factor. In this I predict that they might use Microsoft down the line and might step away from Google (speculative). We have little insight in what places like the UAE does and they have a large investment in their approach to AI and in this Microsoft has the inner track there. So I love the premise, but I have thoughts of consideration on how the future unfolds. There is a chance that AWS will clear house, but there are reservations on that front too. 

Still, Azure has issues. You see the Register (at https://www.theregister.com/2025/01/13/azure_m365_outage/) gives us ‘Azure, Microsoft 365 MFA outage locks out users across regions’ with the added “Microsoft’s multi-factor authentication (MFA) for Azure and Microsoft 365 (M365) was offline for four hours during Monday’s busy start for European subscribers.” I understand that it comes with “It’s fixed, mostly, after Europeans had a manic Monday” now I wonder why we see the use of ‘mostly’ there are perhaps a few gaps in the solution and that happens, but how many of these events will Microsoft cater to until a user like Coca Cola gets a tap on the shoulder to start looking for alternatives? Do you think that a man like James Quincey keeps his sense of humor when his bottom line is under fire? And that is only the beginning.

Still Microsoft has its own ‘defense’ knee jerk operation, we are informed of that by Techi where we see (at https://www.techi.com/microsoft-files-suit-against-hundreds-abuse-azure-openai-services/) with the headline ‘Microsoft Files Suit Against Hundreds for Abuse of Azure OpenAI Services’, so not only is their OpenAI ‘flawed’, it is open to abuse (apparently). We are given “API Key Theft and Hacking-as-a-Service”where we see “As per Microsoft, the defendants systematically and through their deceitful acts stole API keys, the fundamental means of authentication to its AI services. The hacked accounts were allegedly pivotal in creating an act of “hacking-as-a-service” One main ingredient for that operation would be De3u, a software that enabled one to convert images synthesized by OpenAI’s DALL-E without the necessity of writing an actual code.” I kinda covered that on September 8th 2024 in ‘Poised to give critique’ (at https://lawlordtobe.com/2024/09/08/poised-to-deliver-critique/). Michael Bargury gave us a small example of how bad things can get.  Here the operational setting is given through “A former security architect demonstrates 15 different ways to break Copilot: “Microsoft is trying, but if we are honest here, we don’t know how to build secure AI applications”” and here is the premise now consider what (under Torts) customers will do, for example Coca Cola. Do you think they go after the so called hacker with not enough money to afford his/her own place or Microsoft with access to several bank vaults? Take the fortune 500 clients with claims of transgressions, do you really think there will be even a penny left in those Microsoft vaults when their legal teams are done with them? It might not be fair on Microsoft, but the setting of the use of the term AI opens up a whole new can of worms.

Then the Business Times (at https://www.businesstimes.com.sg/companies-markets/microsoft-openai-partnership-raises-antitrust-concerns-ftc-says) gives us ‘Microsoft-OpenAI partnership raises antitrust concerns, FTC says’ in this I might actually be a bit on the side of Microsoft. They give us “MICROSOFT’S US$13 billion investment in OpenAI raises concerns that the tech giant could extend its dominance in cloud computing into the nascent artificial intelligence (AI) market, the Federal Trade Commission (FTC) said in a report released on Friday (Jan 17).” My issue here is that there is a setting we had in the past and in countries they created their version of the FTC. It was a power for good then, but there is now the setting that LLM’s and Deeper Machine Learning has grown to a scope that the FTC cannot really fathom. This IT solution goes beyond what they know or understand and all the tech companies face this. So either they grow their ‘programming with barricades’ side of it all, giving tech companies the flaws that the law imbued in whatever country it is based. And that for global companies will set a larger flawed premise. It is like parties are limited to what others have. As such all criminals will come to us with BB-guns, because that is what the police have. Does that sound realistic? I don’t think so. But this also falls straight into the premise that Fierce Networks gave us. It works out fine for Google, until Google gets barricaded I reckon. So this is a setting that the tech firms are set to whatever the wannabe’s can do, that is a direct strangling of commerce and innovation and it sets whomever develop the trigital computer system and if you think that these systems are fast now? The next level system develops with a trinary operating system running on that hardware will astound the world. As I see it should diminish the IBM Deep Blue to a simple calculator. The difference will be THAT much, so who will innovate that when the FTC strangles innovation?

And finally we get the CIO (at https://www.cio.com/article/3802745/microsoft-commits-to-ai-integration-but-delivers-no-particulars-to-differentiate-from-rivals.html) who gives us ‘Microsoft commits to AI integration, but delivers no particulars to differentiate from rivals’ and as I see it, it was already lagging too much against AWS, and now apparently Google is coming up fast and under these settings we get this headline? And the part that matters is given with “Analysts, however, agreed that the statement reflected no meaningful changes to Microsoft’s AI strategy. The bluntest assessment came from Ryan Brunet, a principal research director at the Info-Tech Research Group: “This is classic Microsoft. It’s very much the same old garbage.”” It reminded my towards an old premise from the late 80’s when the PC was exciting and new ‘Garbage in, Garbage out’ in the age when everyone considered themselves a Market Research executive and these wannabe’s had not even mastered the basic needs of data quality. It was a Gender versus Shoe size and they thought that the solution was add the Lambda test (I think it was Lambda). And I get it, Satya Nadella talks his own street side, the problem is that there are too many unknowns at present and he hopes to get all the others onboard before they have thoroughly selected their options and in light of the selected abuses, that setting is not a given, especially as Google seemingly doesn’t have these flaws (as far as I know neither does IBM or whatever AWS wields). 

A setting that was more and could set a lot of people in the liable column of choices. And some of this has been known for at least a quarter. When you add this with part one, you see why I predicted the downfall of Microsoft three years ago. And as I see it Microsoft walked to dotted line in a near perfect manner, too bad they never read the byline ‘this way to the crevice you will not avoid when getting too close’.

It is as some say ‘the way the cookie crumbles’. Darn still 4 hours until breakfast. Time to find a new story. Have a great Monday and if you cannot get into Azure today, feel free to investigate alternatives.

Leave a comment

Filed under Finance, IT, Law, Media, Science

Poised to deliver critique

That is my stance at present. It might be a wrong position to have, but it comes from a setting of several events that come together at this focal point. We all have it, we are all destined to a stage of negativity thought speculation or presumption. It is within all of us and my article 20 hours ago on Microsoft woke something up within me. So I will take you on a slightly bumpy ride.

The first step is seen through the BBC (at https://www.bbc.com/worklife/article/20240905-microsoft-ai-interview-bbc-executive-lounge) where we get ‘Microsoft is turning to AI to make its workplace more inclusive’ and we are given “It added an AI powered chatbot into its Bing search engine, which placed it among the first legacy tech companies to fold AI into its flagship products, but almost as soon as people started using it, things went sideways.” With the added “Soon, users began sharing screenshots that appeared to show the tool using racial slurs and announcing plans for world domination. Microsoft quickly announced a fix, limiting the AI’s responses and capabilities.” Here we see the collective thoughts an presumptions I had all along. AI does not (yet) exist. How do you live with “Microsoft quickly announced a fix”? We can speculate whether the data was warped, it was not defined correctly. Or it is a more simple setting of programmer error. And when an AI is that incorrect does it have any reliability? Consider the old data view we had in the early 90’s “Garbage In, Garbage Out”. Then. We are offered “Microsoft says AI can be a tool to promote equity and representation – with the right safeguards. One solution it’s putting forward to help address the issue of bias in AI is increasing diversity and inclusion of the teams building the technology itself”, as such consider this “promote equity and representation – with the right safeguards” Is that the use of AI? Or is it the option of deeper machine learning using an LLM model? An AI with safeguards? Promote equity and representation? If the data is there, it might find reliable triggers if it knows where or what to look for. But the model needs to be taught and that is where data verification comes in, verified data leads to a validated model. As such to promote equity and presentation the dat needs to understand the two settings. Now we get the harder part “The term “equity” refers to fairness and justice and is distinguished from equality: Whereas equality means providing the same to all, equity means recognising that we do not all start from the same place and must acknowledge and make adjustments to imbalances.” Now see the term equity being used in all kinds of places and in real estate it means something different. Now what are the chances people mix these two up? How can you validate data when the verification is bungled? It is the simple singular vision that Microsoft people seem to forget. It is mostly about the deadline and that is where verification stuffs up. 

Satya Nadella is about technology that understands us and here we get the first problem. When we consider that “specifically large-language models such as ChatGPT – to be empathic, relevant and accurate, McIntyre says, they needs to be trained by a more diverse group of developers, engineers and researchers.” As I see it, without verification you have no validation and you merely get a bucket of data where everything is collected and whatever the result of it becomes an automated mess, hence my objection to it. So as we are given “Microsoft believes that AI can support diversity and inclusion (D&I) if these ideals are built into AI models in the first place”, we need to understand that the data doesn’t support it yet and to do this all data needs to be recollected and properly verified before we can even consider validating it. 

Then we get article 2 which I talked about a month ago the Wired article (at https://www.wired.com/story/microsoft-copilot-phishing-data-extraction/) we see the use of deeper machine learning where we are given ‘Microsoft’s AI Can Be Turned Into an Automated Phishing Machine’, yes a real brain bungle. Microsoft has a tool and criminals use it to get through cloud accounts. How is that helping anyone? The fact that Microsoft did not see this kink in their trains of thought and we are given “Michael Bargury is demonstrating five proof-of-concept ways that Copilot, which runs on its Microsoft 365 apps, such as Word, can be manipulated by malicious attackers” a simple approach of stopping the system from collecting and adhering to criminal minds. Whilst Windows Central gives us ‘A former security architect demonstrates 15 different ways to break Copilot: “Microsoft is trying, but if we are honest here, we don’t know how to build secure AI applications”’ beside the horror statement “Microsoft is trying” we get the rather annoying setting of “we don’t know how to build secure AI applications”. And this isn’t some student. Michael Bargury is an industry expert in cybersecurity seems to be focused on cloud security. So what ‘expertise’ does Microsoft have to offer? People who were there 3 weeks ago were shown 15 ways to break copilot and it is all over their 365 applications. At this stage Microsoft wants to push out broken if not an unstable environment where your data resides. Is there a larger need to immediately switch to AWS? 

Then we get a two parter. In the first part we see (at https://www.crn.com.au/news/salesforces-benioff-says-microsoft-ai-has-disappointed-so-many-customers-611296) CRN giving us the view of Marc Benioff from Salesforce giving us ‘Microsoft AI ‘has disappointed so many customers’’ and that is not all. We are given ““Last quarter alone, we saw a customer increase of over 60 per cent, and daily users have more than doubled – a clear indicator of Copilot’s value in the market,” Spataro said.” Words from Jared Spataro, Microsoft’s corporate vice president. All about sales and revenue. So where is the security at? Where are the fixes at? So we are then given ““When I talk to chief information officers directly and if you look at recent third-party data, organisations are betting on Microsoft for their AI transformation.” Microsoft has more than 400,000 partners worldwide, according to the vendor.” And here we have a new part. When you need to appease 400,000 partners things go wrong, they always do. How is anyones guess but whilst Microsoft is all focussed on the letter of the law and their revenue it is my speculated view that corners are cut on verification and validation (a little less on the second factor). And the second part in this comes from CX Today (at https://www.cxtoday.com/speech-analytics/microsoft-fires-back-rubbishes-benioffs-copilot-criticism/) where we are given ‘Microsoft Fires Back, Rubbishes Benioff’s Copilot Criticism’ with the text “Jared Spataro, Microsoft’s Corporate Vice President for AI at Work, rebutted the Salesforce CEO’s comments, claiming that the company had been receiving favourable feedback from its Copilot customers.” At this point I want to add the thought “How was that data filtered?” You see the article also gives us “While Benioff can hardly be viewed as an objective voice, Inc. Magazine recently gave the solution a D – rating, claiming that it is “not generating significant revenue” for its customers – suggesting that the CEO may have a point” as well as “despite Microsoft’s protestations, there have been rumblings of dissatisfaction from Copilot users” when the dust settles, I wonder how Microsoft will fare. You see I state that AI does not (yet) exist. The truth is that generative AI can have a place. And when AI is here, when it is actually here not many can use it. The hardware is too expensive and the systems will need close to months of testing. These new systems that is a lot, it would take years for simple binary systems to catch up. As such these LLM deeper machine learning systems will have a place, but I have seen tech companies fire up sales people and get the cream of it, but the customers will need a new set of spectacles to see the real deal. The premise that I see is that these people merely look at the groups they want, but it tends to be not so filtered and as such garbage comes into these systems. And that is where we end up with unverified and unvalidated data points. And to give you an artistic view consider the following when we use a one point perspective that is set to “a drawing method that shows how things appear to get smaller as they get further away, converging towards a single “vanishing point” on the horizon line” So that drawing might have 250,000 points. Now consider that data is unvalidated. That system now gets 5,000 extra floating points. What happens when these points invade the model? What is left of your art work? Now consider that data sets like this have 15,000,000 data points and every data point has 1,000,000 parameters. See the mess you end up with? Now go look into any system and see how Microsoft verifies their data. I could not find any white papers on this. A simple customer care point of view, I have had that for decades and Jared Spataro as I see it seemingly does not have that. He did not grace his speech with the essential need of data verification before validation. That is a simple point of view and it is my view that Microsoft will come up short again and again. So as I (simplistically) see it. Is by any chance, Jared Spataro anything more than a user missing Microsoft value at present?

Have a great day.

1 Comment

Filed under Finance, IT, Media, Science

Setting of the day

On a good day
The Khaleej Times Jost informed me on how a good day comes to pass. Here (at https://www.khaleejtimes.com/uae/meet-the-uae-police-officer-who-uncovered-183-money-laundering-cases-in-15-years) we are introduced to Major Saad Ahmed Al Marzooqi. 

The headline ‘Meet the UAE police officer who uncovered 183 money laundering cases in 15 years’. We are also given “He was recently appointed as the first Emirati member of the Financial Action Task Force’s (FATF) International Cooperation Review Team” and we can be mesmerised, or brag about his abilities, but the numbers imply that he slightly uncovered more than one case a month. There are plenty of police forces all over the world where half of these numbers would imply a stellar career. As we gawk over “exposed 183 money laundering cases that are related to drugs and financial embezzlement. He had also created a database of incidents, which contributed to an increase in convictions from a monthly average of 3 to 14” we need to realise that the increase of 3 to 14 implies that this one person achieved more than any average police station in Europe. 

This is the kind of man the world needs and that will be explained in the next article, because the universe relies on balance and the imbalance we are about to see takes the cake and changes an optional day to night.

On a bad day
Yes like any hero that needs a antagonist to make things interesting, we have Microsoft in two mentions. Now this isn’t directly involving anyone at Microsoft, but the follies are a setting that makes things a lot worse.

First we get Wired (at https://www.wired.com/story/microsoft-copilot-phishing-data-extraction/) who gives us ‘Microsoft’s AI Can Be Turned Into an Automated Phishing Machine’ we get to see “Attacks on Microsoft’s Copilot AI allow for answers to be manipulated, data extracted, and security protections bypassed, new research shows” which is not good, but anything positive can me mauled into a criminal jester for organised crime. The additional “Microsoft raced to put generative AI at the heart of its systems. Ask a question about an upcoming meeting and the company’s Copilot AI system can pull answers from your emails, Teams chats, and files—a potential productivity boon. But these exact processes can also be abused by hackers.

Today at the Black Hat security conference in Las Vegas, researcher Michael Bargury is demonstrating five proof-of-concept ways that Copilot, which runs on its Microsoft 365 apps, such as Word, can be manipulated by malicious attackers, including using it to provide false references to files, exfiltrate some private data, and dodge Microsoft’s security protections.” Now, I haven’t seen this, but Wired has a solid enough level of credibility to not ignore this. And that isn’t all. Bargury gives the world “the ability to turn the AI into an automatic spear-phishing machine. Dubbed LOLCopilot, the red-teaming code Bargury created can—crucially, once a hacker has access to someone’s work email” as I speculatively see it a mediocrity solution to turn the Internet of Things into a machine serving organised crime, optionally the NSA too, well done Microsoft. As I see it, the workload of Major Al Marzooqi would increase fivefold when this hits the open world, actually it already has if I understood the words from Michael Bargury correctly. In this, we optionally an even bigger problem, or at least a lot of corporations will.

You see there is a second message, in this case from Cyber Security News (at https://cybersecuritynews.com/microsoft-entra-id-vulnerability/). They give us ‘Microsoft Entra ID (Azure AD) Vulnerability Let Attackers Gain Global Admin Access’ with the subtext “Security researchers have uncovered vulnerabilities in Microsoft’s Entra ID (formerly Azure Active Directory) dubbed “UnOAuthorized” which could allow unauthorised actions beyond expected controls” Now take these two parts together and the phishing expedition could hit every R&D system on the planet using Azure. I am certain that Microsoft will have some patch coming soon, but in the meantime the bulk of R&D (under Azure) will be vulnerable and approachable by many hacker and especially organised crime, because selling secrets to competitors tends to be a lucrative setting and most corporations aren’t that finicky in acquiring something that raises (and assures) the bonuses of the members of their boardroom. OK, this is speculative on my side, but wonder what some will do to get the upper hand in business, especially if there is a bonus raise involved. 

I wish I had a solution, but my personal feeling is that Microsoft has too many holes, loops and a whole rage of other issues and switching to either AWS, IBM cloud or Google Cloud tends to be an essential first step coming to my mind. Now, if there are sceptics who think that I am anti-Microsoft here, they are probably right. Therefor the Links to the two articles were added letting you look at the stories yourself. In the meantime I remember a story in April and it should be my ‘duty’ to inform SAMI that ‘BAE Systems and Microsoft join forces to equip defence programmes with innovative cloud technology’ had a nice article and with the two articles mentioned, SAMI could lay its hands on a truckload of BAE IP. Not sure how far they will get, but free IP is the way to go I say. So when you realise that a large corporation like British Aerospace with all the civilian and military hardware can be accessed, what chances do you think that Novo Nordisk (Denmark), LVMH (France), ASML (Netherlands), SAP (Germany), Hermez (France), L’Oreal (France) have? I do not know if any uses Azure, but it is a good moment for them to select one of the other companies. They could after the event sue Microsoft for damages, but Delta Airlines is already suing CrowdStrike and I am not sure how that will go. In the end it is my personal opinion that this could potentially bite Microsoft hard and it is one of the reasons I do not let them near my IP.

As I personally see it, the companies racing the be the first to launch their (fake) AI will now have a much larger impact. There were already fake data issues, but now the phishing options that are mentioned and when that gets linked to what Cyber Security News calls “UnOAuthorized” the entire IT game changes dramatically and I have no idea how that will play out. 

As my Sunday is almost over and Vancouver only just started there’s a chance we postulate that the next 72 hours will be an interesting one. Have a lovely day (when you are not on Azure).

1 Comment

Filed under Finance, IT, Law, Military, Science