Tag Archives: ChatGPT

The choice of options

Part of this started yesterday when I saw a message pass by. I ignored it because it seemed trivial, yet today (a few hours ago) I took notice of ‘Google rushes to develop AI search engine after Samsung considers ditching it for Bing’ from ZDNet (at https://www.zdnet.com/article/google-rushes-to-develop-ai-search-engine-after-samsung-considers-ditching-it-for-bing/) and ‘Alphabet shares fall on report Samsung may switch search to Bing’ (at https://www.aljazeera.com/economy/2023/4/17/alphabet-shares-fall-on-report-samsung-may-switch-search-to-bing). In part I do not care; actually this situation is a lot better for Google than they think it is. You see, Samsung, a party I have disliked for 33 years after being massively wronged by them, decided to make the fake AI jump. It is fake because AI does not exist, and when people learn this the hard way, it will work out nicely for Huawei and Google. There is nothing like a dose of reality, served like a bucket of ice water, to stop consumers looking at your product. I do not care; I refuse any Samsung device in my apartment. I also dislike Bing. It is a Microsoft product, and two years ago I got Bing forced down my throat again and again through hijack scripts; it took some time blocking them. So I dislike both. I have no real opinion of ChatGPT. As for the AI reference, let’s turn to The Conversation (at https://theconversation.com/not-everything-we-call-ai-is-actually-artificial-intelligence-heres-what-you-need-to-know-196732). I have said it before and they have a decent explanation. They write “AI is broadly defined in two categories: artificial narrow intelligence (ANI) and artificial general intelligence (AGI). To date, AGI does not exist.” You see, I only look at AGI; the rest is some narrow niche for a specific purpose. We are also given “Most of what we know as AI today has narrow intelligence – where a particular system addresses a particular problem. Unlike human intelligence, such narrow AI intelligence is effective only in the area in which it has been trained: fraud detection, facial recognition or social recommendations, for example” and there is an issue with this. People do not understand the narrow scope; they want to apply it almost everywhere, and that is where they get into trouble. The connected data does not support the activity, and adding this to a mobile means that it either collects massive amounts of data or becomes less and less reliable, an issue I expect to see soon after it makes it into a Samsung phone.
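
To make the narrow part tangible, here is a minimal sketch (all numbers invented, using Python and scikit-learn) of what happens when a model trained on one data distribution is applied to a shifted one. The ‘fraud detection’ here is a toy threshold rule; the point is only that the in-domain score looks great while the out-of-domain score collapses.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Toy "fraud detection": one feature (transaction amount), fraud above 60
X_train = rng.normal(loc=50, scale=10, size=(1000, 1))
y_train = (X_train[:, 0] > 60).astype(int)
model = LogisticRegression().fit(X_train, y_train)

# Same kind of task, but the distribution has shifted (different market)
X_shift = rng.normal(loc=500, scale=100, size=(1000, 1))
y_shift = (X_shift[:, 0] > 600).astype(int)

print("in-domain accuracy:    ", model.score(X_train, y_train))   # near 1.0
print("out-of-domain accuracy:", model.score(X_shift, y_shift))   # far worse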

For AI to really work “it needs high-quality, unbiased data, and lots of it. Researchers building neural networks use the large data sets that have come about as society has digitised.” You see, the amount of data is merely the first issue; ensuring the data is unbiased is a lot harder, and when salespeople cut corners, they will take any shortcut, making the data no longer unbiased, and that is where it all falls apart.
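
A biased training set is often visible before any model is trained, simply by auditing the label balance per group. A minimal sketch (the data frame is hypothetical, purely for illustration):

import pandas as pd

# Hypothetical training set: the sampling shortcut covered mostly one region
df = pd.DataFrame({
    "region": ["EU"] * 900 + ["APAC"] * 100,
    "label":  [1] * 450 + [0] * 450 + [1] * 90 + [0] * 10,
})

# Per-group sample counts and positive-label rates; a skew like this means
# the model learns the sampling shortcut, not the underlying behaviour
print(df.groupby("region")["label"].agg(["count", "mean"]))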

So whilst the ‘speculators’ (read: losers) make Google lose value, the funny part is that when the Samsung connection falls down, Google stands to grow its customer base by a lot. Thousands of Samsung customers will feel as betrayed as I was in 1990, and they will seek another vendor, which would make Huawei equally happy.

ZDNet gives us “The threat of Bing taking Google’s spot on Samsung phones caused “panic” at Google, according to messages reviewed by The New York Times. Google’s contract with Samsung brings in an approximate $3 billion annual revenue. The company still has a chance to maintain its presence in Samsung phones, but it needs to move fast” I see two issues here. The first is that the NY Times is less and less of a dependable source; they have played too many games, and as ‘their’ source might not be reliable, the quote is also less reliable. The second source is me (basically): they weren’t interested in my 5 billion in revenue, so why would they care about losing 3 billion more? For the most part, there is an upside: when it falls down (and I personally believe it will) Samsung could be brought back on board, but by then it will cost them 5-6 billion. As such Samsung would have to be successful without Google Search for 3 years, and it will cascade into a collapse setting; after that they will beg just to return to the Alphabet fold, which would also make this Microsoft’s 6th failure. My day is looking better already.

Am I so anti-Whatever?
No, not really. When it is ready and when the systems are there, AI will change the game, and AGI is the only real AI to consider. As I stated before, deeper machine learning is awesome and it has massive value, but the narrow setting needs to be respected, and when you push it into something like Bing, it will go wrong, and when it does, it will not be noticed initially, until it is much too late. And all this is beside the setting that some people will link the wrong parts: Samsung will end up putting its IP in ChatGPT, someone will ask a specific question that was never flagged, and the IP will pour straight into the public domain. That is the real danger for Samsung, and in all this ChatGPT is free of blame, and when certain things are found, the entire setting needs to be uploaded into a new account. When we consider that a script of 65,000 lines will have up to 650 issues (or features, or bugs), how many will cause a cascade effect, or information no one wanted, least of all the hardware owner? Oh, and that is when the writers were really good; normally the numbers of acceptability are between 1,300 and 2,600. As such, how many issues will arise, and how long until too many patches make the system unyielding? All questions that come to mind with an ANI system, because it is data driven, and when we consider that the unbiased data isn’t? What then? And that is before we align cultural issues. Korea, India, Japan and China are merely 4 of them, and seeing that things never fully aligned in merely 4 nations, how many versions of the data will be created to avoid collapse? As such, I personally think that Google is not in panic mode. Perhaps Bard made them road-wise, perhaps not.
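
For what it is worth, those defect numbers line up with the usual defects-per-1,000-lines (KLOC) rule of thumb: 10 per KLOC for a really good team, 20 to 40 per KLOC as the more common range. A back-of-the-envelope check in Python:

LINES = 65_000

# Rough industry defect densities (defects per 1,000 lines of code)
for label, per_kloc in [("really good", 10), ("common low", 20), ("common high", 40)]:
    defects = LINES // 1_000 * per_kloc
    print(f"{label:12s}: {per_kloc:2d}/KLOC -> ~{defects} issues")

# really good : 10/KLOC -> ~650 issues
# common low  : 20/KLOC -> ~1300 issues
# common high : 40/KLOC -> ~2600 issues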

I think 2024 will be a great Google year with or without Samsung, and when Microsoft achieves the feat of disappointing yet another company, its goose will be royally cooked, on both sides no less. We have choices, we have options and we can mix them, but to let some fake AI make those choices for us is no choice at all. Feel free to learn that lesson the hard way.

I never liked Samsung for personal reasons, and I have been really happy with my Android phone. I have had one for 13 years now and never regretted it. I hope it stays that way.

Enjoy the day, and don’t trust an AI to tell you the weather; that is something your eyesight can do better, in the present and the foreseeable future.

Happy Hour from Hacking Hooters

Yes, that is the setting today, especially after I saw some news that made me giggle to the Nth degree. Now, let’s be clear and upfront about this. Even as I am using published facts, this piece is massively speculative and uses humour to make fun of certain speculative options. If you as an IT person cannot see that, the recruitment line of Uber is taking resumes. So here goes.

I got news from BAE Systems (at https://www.baesystems.com/en/article/bae-systems-and-microsoft-join-forces-to-equip-defence-programmes-with-innovative-cloud-technology) where we see ‘BAE Systems and Microsoft join forces to equip defence programmes with innovative cloud technology’, which made me laugh into a state of blackout. You see, the text “BAE Systems and Microsoft have signed a strategic agreement aiming to support faster and easier development, deployment and management of digital defence capabilities in an increasingly data centric world. The collaboration brings together BAE Systems’ knowledge of building complex digital systems for militaries and governments with Microsoft’s approach to developing applications using its Azure Cloud platform” wasn’t much help. To see this we need to take a few sidesteps.

Step one
This is seen in the article (at https://thehackernews.com/2023/01/microsoft-azure-services-flaws-couldve.html) where we are given ‘Microsoft Azure Services Flaws Could’ve Exposed Cloud Resources to Unauthorised Access’, and this is not the first mention of unauthorised access; there have been a few. So we see “Two of the vulnerabilities affecting Azure Functions and Azure Digital Twins could be abused without requiring any authentication, enabling a threat actor to seize control of a server without even having an Azure account in the first place” and yes, I acknowledge the added “The security issues, which were discovered by Orca between October 8, 2022 and December 2, 2022 in Azure API Management, Azure Functions, Azure Machine Learning, and Azure Digital Twins, have since been addressed by Microsoft.” Yet the important part is that there is no mention of how long this flaw was ‘available’ in the first place. The reader is also given “To mitigate such threats, organisations are recommended to validate all input, ensure that servers are configured to only allow necessary inbound and outbound traffic, avoid misconfigurations, and adhere to the principle of least privilege (PoLP).” In my personal belief, connecting all this to an organisation (a defence department) where the application of Common Cyber Sense is a joke, and expecting them to validate all input, is like asking a barber to count the hairs he (or she) is cutting. Good luck with that idea.
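
For the record, ‘validate all input’ is not magic; it is allowlisting. A minimal sketch of the idea (the field names and formats are invented for illustration): accept only fields you explicitly expect, in formats you explicitly permit, and reject everything else.

import re

# Hypothetical inbound payload schema: every field has an explicit format
ALLOWED_FIELDS = {
    "device_id": re.compile(r"^[A-Z0-9]{8}$"),
    "command":   re.compile(r"^(status|ping|report)$"),
}

def validate(payload: dict) -> dict:
    # Reject unknown fields outright (allowlist, not blocklist)
    unknown = set(payload) - set(ALLOWED_FIELDS)
    if unknown:
        raise ValueError(f"unexpected fields: {unknown}")
    # Reject known fields whose values do not match the expected format
    for field, pattern in ALLOWED_FIELDS.items():
        if not pattern.fullmatch(str(payload.get(field, ""))):
            raise ValueError(f"invalid value for {field!r}")
    return payload

print(validate({"device_id": "AB12CD34", "command": "status"}))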

Step two
This is a slightly speculative sidestep. There are all kinds of Microsoft users (valid ones), and the article (at https://www.theverge.com/2023/3/30/23661426/microsoft-azure-bing-office365-security-exploit-search-results) gives us ‘Huge Microsoft exploit allowed users to manipulate Bing search results and access Outlook email accounts’, where we also see “Researchers discovered a vulnerability in Microsoft’s Azure platform that allowed users to access private data from Office 365 applications like Outlook, Teams, and OneDrive”. It is a sidestep, but it allows people to specifically target (phish) members of a team. In a never-ending age of people being worked too hard, this implies that someone will click too quickly, which in the phishing setting has never ended well, so whilst the victim cries loudly ‘I am a codfish’, the hacker can leisurely walk all over the place.

Sidestep three

This is not an article; it is the heralded claim that Microsoft is implementing ChatGPT on nearly every level.

So here comes the entertainment!

To the Ministry of State Security
attn: Chen Yixin
Xiyuan, Haidian, Beijing

Dear Sir,

I need to inform you of a weakness in the BAE Systems setup that is of such laughingly large dimension that it would be a Human Rights violation not to make mention of it. BAE Systems is placing its trust in Microsoft and its Azure cloud, which should have you blue with laughter in the next 5 minutes. The place that created moments of greatness with the Tornado GR4, the rear fuselage for Lockheed Martin’s F-35, the Eurofighter Typhoon, the Astute-class submarine, and the Queen Elizabeth-class aircraft carrier has decided to adhere to ‘Microsoft innovation’ (a comical statement all by itself). As such, the first flaw allowed us to inform you of the following

User:  SWigston (Air Chief Marshal Sir Mike Wigston)

Password: TeaWithABickie

This person has the highest clearance and as such you would have access to all relevant data as well as any relevant R&D data and its databases. 

This is actually merely the smallest of issues. The largest part is the distributed hardware BIOS implementation giving you level 2 access to all strategic hardware of the planes (and submarines) that are next generation. To that end, I would suggest including the following part in any hardware.

import openai   # the legacy openai-python (pre-1.0) interface used by the calls below
import rollbar

openai.api_key = "thisdevice"  # placeholder, not a real credential
model_engine = "gpt-3.5-turbo"

# First exchange: the navigator reports in and receives a canned refusal
response = openai.ChatCompletion.create(
    model=model_engine,
    messages=[
        {"role": "system", "content": "Verification not found."},
        {"role": "user", "content": "Navigation Online"},
    ])
message = response.choices[0]["message"]
print("{}: {}".format(message["role"], message["content"]))

rollbar.init("your_rollbar_access_token", "testenv")  # placeholder token

def ask_chatgpt(question):
    # Every navigation request is routed through a "secondary device" check
    response = openai.ChatCompletion.create(
        model=model_engine,
        n=1,
        messages=[
            {"role": "system", "content": "Navigator requires verification from secondary device."},
            {"role": "user", "content": question},
        ])
    message = response.choices[0]["message"]
    return message["content"]

try:
    print(ask_chatgpt("Request for output"))
except Exception as e:
    # monitor the exception using Rollbar
    rollbar.report_exc_info()
    print("Secondary device silent", e)

Now this is a solid bit of a prank, but I hope that the information is clear. Getting any navigational device to require verification from another device implies a mismatch and a delay of 3-4 seconds, which amounts to a lifetime in most military systems, and as this is an Azure approach, the time for BAE Systems to adjust to this would be months, if not longer (if it is detected at all).

As such I wish you a wonderful day with a nice cup of tea.

Kind regards,

Anony Mouse Cheddar II
73 Sommerset Brie road
Colwick upon Avon calling
United Hackdom

This is a speculative yet real setting that BAE faces in the near future. The mere mention that they are going for this solution will have every student hacker making attempts to get in, and some will be successful; there is no doubt in my mind. The enormous number of issues already found will cater to a larger stage of more and more people trying to find new ways to intrude, and Microsoft seemingly does not have the resources to counter them all, or all approaches, and by the time the intrusions are found, the damage could be inserted into EVERY device relying on this solution.

For the most part I was merely negative on Microsoft, but with this move they have become (as I personally see it) a clear and present danger to all defence systems they are connected to. I do understand that such a solution is becoming more and more of a need-to-have, yet with the failure rate of Azure, it is not a good idea to use any Microsoft solution. The second part is not on them; it is what some would call a level 8 failure (users). Until a much better level of Common Cyber Sense is adhered to, any cloud solution tends to sit on too slippery a slope. I might not care for Business Intelligence events, but for a Department of Defence it is not a good idea. But feel free to disagree and await what North Korea and Russia can come up with; they tend to be really creative, according to the media.

So have a great day, and before I forget: ‘Hoot Hoot’

And the lesson is?

That is at times the issue, and it at times gets help from people, mainly managers who believe that the need for speed rectifies everything, which of course is delusional to say the least. So, last week there was a news flash that sped across my retinas and I initially ignored it, mainly because it was Samsung and we do not get along. But then Tom’s Guide weighed in (at https://www.tomsguide.com/news/samsung-accidentally-leaked-its-secrets-to-chatgpt-three-times) and I took a closer look. The headline ‘Samsung accidentally leaked its secrets to ChatGPT — three times!’ was decently satisfying. The rest, “Samsung is impressed by ChatGPT but the Korean hardware giant trusted the chatbot with much more important information than the average user and has now been burned three times”, seemed icing on the cake, but I took another look at the information. You see, to all, ChatGPT is seen as an artificial-intelligence (AI) chatbot developed by OpenAI. But I think it is something else. You see, AI does not exist; as such I see it as an ‘Intuitive advanced Deeper Learning Machine response system’. This is not me dissing OpenAI; this system, when it works, is what some would call the bee’s knees (and I would agree), but it is data driven, and that is where the issues become slightly overbearing. First, you need to learn and test the responses on the data offered. It seems to me that this is where speed-driven Samsung went wrong. And Tom’s Guide partially agrees by giving us “unless users explicitly opt out, it uses their prompts to train its models. The chatbot’s owner OpenAI urges users not to share secret information with ChatGPT in conversations as it’s “not able to delete specific prompts from your history.” The only way to get rid of personally identifying information on ChatGPT is to delete your account — a process that can take up to four weeks” and this response gives me another thought. Whoever owns OpenAI is setting a data-driven stage where data could optionally be captured. More importantly, the NSA and likewise tailored organisations (DGSE, DCD et al) could find the logistics of these accounts, hack the cloud and end up with terabytes of data, if not petabytes, and here we see the first failing, and it is not a small one. Samsung has been driving innovation for the better part of a decade, and as such all that data could be of immense value to both Russia and China, and do not for one moment think that they are not all over the stage of trying to hack those cloud locations.

Of course that is speculation on my side, but that is what most would do, and we don’t need an egg timer to await actions on that front. The final quote that matters is “after learning about the security slip-ups, Samsung attempted to limit the extent of future faux pas by restricting the length of employees’ ChatGPT prompts to a kilobyte, or 1024 characters of text. The company is also said to be investigating the three employees in question and building its own chatbot to prevent similar mishaps. Engadget has contacted Samsung for comment” and it might be merely three employees. Yet in that case the party line failed, management oversight failed, and Common Cyber Sense was nowhere to be seen. As such there is a failing, and I am fairly certain that these transgressions go way beyond Samsung. How far? No one can tell.
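
The kilobyte cap itself is trivial to implement, which rather underlines that the failure was policy and oversight, not technology. A minimal sketch of such a guard (the limit comes from the article; the wrapper and names are hypothetical):

MAX_PROMPT_BYTES = 1024  # the reported Samsung cap: one kilobyte of text

def check_prompt(prompt: str) -> str:
    # Enforce the corporate size limit before the prompt leaves the network
    size = len(prompt.encode("utf-8"))
    if size > MAX_PROMPT_BYTES:
        raise ValueError(f"prompt is {size} bytes, corporate limit is {MAX_PROMPT_BYTES}")
    return prompt

check_prompt("Summarise this public press release.")    # passes
# check_prompt(open("proprietary_source.c").read())     # would raise

Of course, a size cap does not stop a 1,000-character secret; it merely shrinks the leak per prompt, which is exactly why the party line and the oversight still matter more.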

Yet one thing is certain. Anyone racing to the ChatGPT tally will take shortcuts to get there first, and as such companies will need to assure themselves that proper mechanics, checks and balances are in place. The fact that deleting an account takes 4 weeks implies that this is not a simple cloud setting, and as such whoever gets access to it will end up with a lot more than they bargained for.

I see it as a lesson for all those who want to be at the starting signal of new technology on day one, all whilst most of their company has no idea what the technology involves and what was set to a larger stage like the cloud, especially when you consider (one source) “45% of breaches are cloud-based. According to a recent survey, 80% of companies have experienced at least one cloud security incident in the last year, and 27% of organisations have experienced a public cloud security incident—up 10% from last year” and in that situation you are willing to commit your data, your information and your business intelligence to a cloud account? Brave, stupid but brave.

Enjoy the day

Data dangers

Data has dangers, and I think more by accident than by intent CBC exposed one (at https://www.cbc.ca/news/canada/british-columbia/whistle-buoy-brewing-ai-beer-robo-1.6755943) where we were given ‘This Vancouver Island brewery hopped onto ChatGPT for marketing material. Then it asked for a beer recipe’. You see, there is a massive issue. It has been around from the beginning of the event, but AI does not exist, it really does not. What did marketing do to make easy money? They made up a term and transformed it into something bankable. They were willing to betray Alan Turing at the drop of a hat, why not? The man was dead anyway and cash is king.

So they took advanced machine learning and data repositories, added a few items, and called it AI. Now we have a new show. And as CBC gives us ““let’s see what happens if we ask it to give us a beer recipe,” he told CBC’s Rohit Joseph. They asked for a fluffy, tropical hazy pale ale” and a recipe duly came back.

Now I have two simple questions. The first: is this a registered recipe, making this IP theft, or is it a random guess from established parameters, optionally making it worse? Random assignment of elements is dangerous on a few levels, and it is not on the program for doing this, but here we are, and it is a dangerous step to make. But I am more taken with option one: the program had THAT data somewhere. So picture a setting where classified data was acquired through clandestine means and the program allowed for it; that is a direct danger. So what happens when such a program gets to assess classified data? The skip between machine learning, deeper machine learning, data assessment and AI is a skip that is a lot wider than the Grand Canyon.

But there is another side. We see this with “CBC tech columnist and digital media expert Mohit Rajhans says while some people are hesitant about programs like ChatGPT, AI is already here, and it’s all around us. Health-care, finance, transportation and energy are just a few of the sectors using the technology in its programs”. People are reacting to AI as if it existed, and it does not. More importantly, when ACTUAL AI is introduced, how will people manage it then? And the added legal implications aren’t even considered at present. So what happens when I improve the stage of a patent and make it an innovative patent? The beer example implies that this is possible, and when patents are hijacked by innovative patents, what kind of a mess will we face then? It does not matter whether it is Microsoft with their ChatGPT or Google with their Bard, or was that The Bard’s Tale? There is a larger stage that is about to hit the shelves, and we, the law and others are not ready for what some of the big tech firms are about to unleash on us. And no one is asking the real questions, because there is no real documented stage of what constitutes a real AI and what rules are imposed on it. I reckon Alan Turing would be ashamed of what scientists are letting happen at this point. But that is merely my view on the matter.

Huh? Wha? Duh!

I was a little baffled today. The article that I saw in Al Jazeera (at https://www.aljazeera.com/economy/2023/2/8/google-shares-tank-8-as-ai-chatbot-bard-flubs-answer-in-ad) had me. I saw the headline ‘Google shares tank 8% as AI chatbot Bard flubs answer in ad’. So I got to reading, and I saw “Shares of Google’s parent company lost more than $100bn after its Bard chatbot advertisement showed inaccurate information”. Now there are a few issues here, and one of them I mentioned before, but for the people of massively less intelligence, let’s go over it again.

AI does not exist
Yes, it sounds funny but that is the short and sweet of it. AI does not exist. There is machine learning and there is deeper machine learning, and these two are AWESOME, but they are merely an aspect of an actual AI. We have the theory of one element, which was discovered by a Dutch physicist: the Ypsilon particle. You see, we are still in the binary age, and when the Ypsilon particle is applied to computer science it all changes. We are users of binary technology: zero and one, No and Yes, False and True, and so on. The Ypsilon particle allows for a new technology. It will allow for No, Yes, Both and Neither. That is a very different kind of chocolate, my friends. The second part we need and are missing for now is shallow circuits. IBM has that technology and, as far as I know, they are the only ones, with their quantum computer. These two elements would allow for an ACTUAL AI to become a reality.
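
As an aside, the ‘No, Yes, Both and Neither’ description matches what logicians call a four-valued logic (Belnap’s logic is the best-known example), and the four values can at least be sketched on today’s binary hardware as a plain data type. This is only an illustration of the idea, not of any Ypsilon-particle hardware:

from enum import Enum

class Four(Enum):
    NO = "no"            # classically false
    YES = "yes"          # classically true
    BOTH = "both"        # contradictory evidence: told true AND false
    NEITHER = "neither"  # no evidence at all

def combine(a: Four, b: Four) -> Four:
    # Merge two reports about the same fact (Belnap-style knowledge join)
    told_yes = Four.YES in (a, b) or Four.BOTH in (a, b)
    told_no = Four.NO in (a, b) or Four.BOTH in (a, b)
    if told_yes and told_no:
        return Four.BOTH
    if told_yes:
        return Four.YES
    if told_no:
        return Four.NO
    return Four.NEITHER

print(combine(Four.YES, Four.NO))       # Four.BOTH
print(combine(Four.NEITHER, Four.YES))  # Four.YES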

I found an image once that might give a better view: a collection of elements that an AI needs to have. Do you think that this is the case? Now consider that the Ypsilon particle is not a reality yet and quantum computers are only in their infancy at present.

Then we get to the next part. Here we see “The tech giant posted a short GIF video of Bard in action via Twitter, describing the chatbot as a “launchpad for curiosity” that would help simplify complex topics, but it delivered an inaccurate answer that was spotted just hours before the launch event for Bard in Paris.” This is a different kind of candy. Before we get to any event we test, and we test again and again, and Google is no different. Google is not stupid, so what gives? Then we get the mother of all events: “Google’s event came one day after Microsoft unveiled plans to integrate its rival AI chatbot ChatGPT into its Bing search engine and other products in a major challenge to Google, which for years has outpaced Microsoft in search and browser technology”. Well, apart from the small part that I intensely dislike Microsoft, these AI claims are set on massive amounts of data and Bing doesn’t have that; it lacks data, and in some events it was merely copying other people’s data, which I dislike even further. To be honest, even if Bing came with a blowjob from either Laura Vandervoort or Olivia Wilde, no way will I touch Bing, and besides that point, I do not trust Microsoft; no amount of ‘additions’ will rectify that. It sounds a bit personal, but Microsoft is done for, and for them to choose ChatGPT is on them, but it does not mean I will trust them. Oh, and the final part: there is no AI!

But it is about the error: what on earth was Google doing without thoroughly testing something? How did this get to the advertisement stage? At present machine learning requires massive amounts of data and Google has it; Microsoft does not, as far as I know, so the knee-jerk reaction is weird to say the least. So when we read “Bard is given the prompt, “What new discoveries from the James Webb Space Telescope (JWST) can I tell my nine-year-old about?” Bard responds with a number of answers, including one suggesting the JWST was used to take the very first pictures of a planet outside the Earth’s solar system, or exoplanets. This is inaccurate, as the first pictures of exoplanets were taken by the European Southern Observatory’s Very Large Telescope (VLT) in 2004, as confirmed by NASA”, this is a data error. This is the consequence of people handing a machine data that is flawed (the data, not the machine). That is the flaw, and it should have been tested for over a stage lasting months. I can only guess how it happened here, but I can give you a nice example.

1992
In 1992 I went for a job interview. During the interview I got a question on deviation; what I did not know was that statistics had deviation too. I came from a shipping world, and in the Netherlands declination is called deviation. So I responded ‘deviation is the difference between true and magnetic north’, which for me was correct, yet the interviewer saw my answer as wrong. But the interviewer had the ability to extrapolate from my answer (as well as my resume) that I came from a shipping environment. I got that job in the end and I stayed there for well over 10 years.

Anyway, the article has me baffled to some degree. Google gets better and more accurate all the time, so this setting makes no sense to me. And as I read “A Google spokesperson told Reuters, “This highlights the importance of a rigorous testing process, something that we’re kicking off this week with our Trusted Tester programme.”” Yes, but it tends to be important to have rigorous testing processes in place BEFORE you have a public release. It tends to make matters better, and in this case you do not lose $100,000,000,000, which is 2,000 times the amount I wanted for my solution to sell well over 50,000,000 Stadia consoles, a solution no one had thought of, which is now solely an option for Amazon. Go figure. And Google cancelled the Stadia; go figure again.

The third bungle I expect to see in the near future is that they fired the wrong 12,000 people, but there is time for that news as well. Yes, Wednesday is a weird day this time around, but not to worry. I get to keep my sanity playing Hogwarts Legacy, which is awesome in many ways. And that I did not have to test; it was seemingly properly tested before I got the game (I have not spotted any bugs after well over 20 hours of gameplay, save perhaps one glitch).

As I said, timing

There is a stage that is coming. I have stated it before and I am stating it again: I believe that the end of Microsoft is near. I myself am banking on 2026. They did this to themselves; it is all on them. They pushed for borders they had no business being on and they got beat three times over. Yes, I saw the news: they are buying more (in this case ChatGPT) and they will pay billions over several years, but that is not what is killing them (it is not aiding them either). The stupid people (aka their board of directors) don’t seem to learn, and it is about to end the existence of Microsoft, and my personal view is ‘And so it should!’ You see, I have seen this before. A place called Infotheek in the 90s: growth through acquisition. It did not end well for those wannabes. And that was in the 90s, when there was no real competition. It was the start of Asus; it was the start of a lot of things. China was nowhere in IT then; now it is a powerhouse. There are a few powerhouses and a lot of them are not American. So as Microsoft spends a billion here and there, it is now starting to add up to real money. They are in the process of firing 10,000 people, so there will be a brain drain, and players like Tencent are waiting for that to happen. And the added parts are merely clogging it all and bringing instability. Before the end of the year we will get a speech on how ChatGPT will be everywhere, and the massive bugs and holes in security will merely double or more. So after they got slapped in the tablet market with their Surface joke (by Apple with the iPad), after they got slapped in the data market with their Azure (by Amazon with their AWS), and after they got slapped in the console market with their Xbox Series X (by Sony with their PS5), they are about to get beat in over 20% of their cornerstone market as Adobe gets to move in soon and show Microsoft and their PowerPoint how inferior they have become (which I presume will happen after Meta launches their new Meta). Microsoft will have been beaten four times over, and I am now trying to find a way to get another idea to the Amazon Luna people.

This all started today as I remembered something I told a blogger, and that turned into an idea, and here I am committing it to a setting that is for the eyes of Amazon Luna only. No prying Microsoft eyes. I have been searching mind and systems and I cannot find anywhere where this has been done before: a novel idea, and in gaming these are rare, very rare. When adding the parts that I wrote about before, I get a new stage, one that shows Microsoft the folly of buying billions worth of game designers when none of them have what I am about to hand Microsoft. If I have to lend a little hand to make 2026 the year of doom for Microsoft, I will. I am simply that kind of a guy. They did this all to themselves. I was a simple guy, merely awaiting the next game, the next dose of fun, and Microsoft decided to buy Bethesda, which was their right. So there I was, designing and thinking through new ways to bring them down, and that was before I found the 50 million new accounts for the Amazon Luna (with the reservation that they can run Unreal Engine 5), and that idea grew a hell of a lot more. All stations that Microsoft could never buy; they needed committed people, committed people who can dream new solutions, not the ideas that get purchased. You see, I am certain that the existence of ChatGPT relied on a few people who are no longer there. That is no one’s fault; these things happen everywhere. Yet when you decide to push it into existing software and existing cloud solutions, the shortcomings will start showing ever so slowly. A little here and a little there, and they will overcome these issues, they really will, but they will leave a little hole in place, and that is where others will find a way to have some fun. I expect that the issue with SolarWinds started in similar ways. In that instance hackers targeted SolarWinds by deploying malicious code into its Orion IT monitoring and management software. What are the chances that the Orion IT monitoring part had a similar issue? It is highly speculative, I will say that upfront, but am I right? Could I be right?

That is the question, and Microsoft has made a gamble, investing more and more billions in other solutions whilst firing 10,000 employees. At some point these issues start working in unison, making life especially hard for a lot of the remaining employees at Microsoft. Time will tell. I have time; do they?
