Tag Archives: CX Today

Poised to deliver critique

That is my stance at present. It might be a wrong position to have, but it comes from a setting of several events that come together at this focal point. We all have it, we are all destined to a stage of negativity thought speculation or presumption. It is within all of us and my article 20 hours ago on Microsoft woke something up within me. So I will take you on a slightly bumpy ride.

The first step is seen through the BBC (at https://www.bbc.com/worklife/article/20240905-microsoft-ai-interview-bbc-executive-lounge) where we get ‘Microsoft is turning to AI to make its workplace more inclusive’ and we are given “It added an AI powered chatbot into its Bing search engine, which placed it among the first legacy tech companies to fold AI into its flagship products, but almost as soon as people started using it, things went sideways.” With the added “Soon, users began sharing screenshots that appeared to show the tool using racial slurs and announcing plans for world domination. Microsoft quickly announced a fix, limiting the AI’s responses and capabilities.” Here we see the collective thoughts an presumptions I had all along. AI does not (yet) exist. How do you live with “Microsoft quickly announced a fix”? We can speculate whether the data was warped, it was not defined correctly. Or it is a more simple setting of programmer error. And when an AI is that incorrect does it have any reliability? Consider the old data view we had in the early 90’s “Garbage In, Garbage Out”. Then. We are offered “Microsoft says AI can be a tool to promote equity and representation – with the right safeguards. One solution it’s putting forward to help address the issue of bias in AI is increasing diversity and inclusion of the teams building the technology itself”, as such consider this “promote equity and representation – with the right safeguards” Is that the use of AI? Or is it the option of deeper machine learning using an LLM model? An AI with safeguards? Promote equity and representation? If the data is there, it might find reliable triggers if it knows where or what to look for. But the model needs to be taught and that is where data verification comes in, verified data leads to a validated model. As such to promote equity and presentation the dat needs to understand the two settings. Now we get the harder part “The term “equity” refers to fairness and justice and is distinguished from equality: Whereas equality means providing the same to all, equity means recognising that we do not all start from the same place and must acknowledge and make adjustments to imbalances.” Now see the term equity being used in all kinds of places and in real estate it means something different. Now what are the chances people mix these two up? How can you validate data when the verification is bungled? It is the simple singular vision that Microsoft people seem to forget. It is mostly about the deadline and that is where verification stuffs up. 

Satya Nadella is about technology that understands us and here we get the first problem. When we consider that “specifically large-language models such as ChatGPT – to be empathic, relevant and accurate, McIntyre says, they needs to be trained by a more diverse group of developers, engineers and researchers.” As I see it, without verification you have no validation and you merely get a bucket of data where everything is collected and whatever the result of it becomes an automated mess, hence my objection to it. So as we are given “Microsoft believes that AI can support diversity and inclusion (D&I) if these ideals are built into AI models in the first place”, we need to understand that the data doesn’t support it yet and to do this all data needs to be recollected and properly verified before we can even consider validating it. 

Then we get article 2 which I talked about a month ago the Wired article (at https://www.wired.com/story/microsoft-copilot-phishing-data-extraction/) we see the use of deeper machine learning where we are given ‘Microsoft’s AI Can Be Turned Into an Automated Phishing Machine’, yes a real brain bungle. Microsoft has a tool and criminals use it to get through cloud accounts. How is that helping anyone? The fact that Microsoft did not see this kink in their trains of thought and we are given “Michael Bargury is demonstrating five proof-of-concept ways that Copilot, which runs on its Microsoft 365 apps, such as Word, can be manipulated by malicious attackers” a simple approach of stopping the system from collecting and adhering to criminal minds. Whilst Windows Central gives us ‘A former security architect demonstrates 15 different ways to break Copilot: “Microsoft is trying, but if we are honest here, we don’t know how to build secure AI applications”’ beside the horror statement “Microsoft is trying” we get the rather annoying setting of “we don’t know how to build secure AI applications”. And this isn’t some student. Michael Bargury is an industry expert in cybersecurity seems to be focused on cloud security. So what ‘expertise’ does Microsoft have to offer? People who were there 3 weeks ago were shown 15 ways to break copilot and it is all over their 365 applications. At this stage Microsoft wants to push out broken if not an unstable environment where your data resides. Is there a larger need to immediately switch to AWS? 

Then we get a two parter. In the first part we see (at https://www.crn.com.au/news/salesforces-benioff-says-microsoft-ai-has-disappointed-so-many-customers-611296) CRN giving us the view of Marc Benioff from Salesforce giving us ‘Microsoft AI ‘has disappointed so many customers’’ and that is not all. We are given ““Last quarter alone, we saw a customer increase of over 60 per cent, and daily users have more than doubled – a clear indicator of Copilot’s value in the market,” Spataro said.” Words from Jared Spataro, Microsoft’s corporate vice president. All about sales and revenue. So where is the security at? Where are the fixes at? So we are then given ““When I talk to chief information officers directly and if you look at recent third-party data, organisations are betting on Microsoft for their AI transformation.” Microsoft has more than 400,000 partners worldwide, according to the vendor.” And here we have a new part. When you need to appease 400,000 partners things go wrong, they always do. How is anyones guess but whilst Microsoft is all focussed on the letter of the law and their revenue it is my speculated view that corners are cut on verification and validation (a little less on the second factor). And the second part in this comes from CX Today (at https://www.cxtoday.com/speech-analytics/microsoft-fires-back-rubbishes-benioffs-copilot-criticism/) where we are given ‘Microsoft Fires Back, Rubbishes Benioff’s Copilot Criticism’ with the text “Jared Spataro, Microsoft’s Corporate Vice President for AI at Work, rebutted the Salesforce CEO’s comments, claiming that the company had been receiving favourable feedback from its Copilot customers.” At this point I want to add the thought “How was that data filtered?” You see the article also gives us “While Benioff can hardly be viewed as an objective voice, Inc. Magazine recently gave the solution a D – rating, claiming that it is “not generating significant revenue” for its customers – suggesting that the CEO may have a point” as well as “despite Microsoft’s protestations, there have been rumblings of dissatisfaction from Copilot users” when the dust settles, I wonder how Microsoft will fare. You see I state that AI does not (yet) exist. The truth is that generative AI can have a place. And when AI is here, when it is actually here not many can use it. The hardware is too expensive and the systems will need close to months of testing. These new systems that is a lot, it would take years for simple binary systems to catch up. As such these LLM deeper machine learning systems will have a place, but I have seen tech companies fire up sales people and get the cream of it, but the customers will need a new set of spectacles to see the real deal. The premise that I see is that these people merely look at the groups they want, but it tends to be not so filtered and as such garbage comes into these systems. And that is where we end up with unverified and unvalidated data points. And to give you an artistic view consider the following when we use a one point perspective that is set to “a drawing method that shows how things appear to get smaller as they get further away, converging towards a single “vanishing point” on the horizon line” So that drawing might have 250,000 points. Now consider that data is unvalidated. That system now gets 5,000 extra floating points. What happens when these points invade the model? What is left of your art work? Now consider that data sets like this have 15,000,000 data points and every data point has 1,000,000 parameters. See the mess you end up with? Now go look into any system and see how Microsoft verifies their data. I could not find any white papers on this. A simple customer care point of view, I have had that for decades and Jared Spataro as I see it seemingly does not have that. He did not grace his speech with the essential need of data verification before validation. That is a simple point of view and it is my view that Microsoft will come up short again and again. So as I (simplistically) see it. Is by any chance, Jared Spataro anything more than a user missing Microsoft value at present?

Have a great day.

1 Comment

Filed under Finance, IT, Media, Science