Tag Archives: ChatGPT

Where should we look?

That is at times the issue. I would add “especially when we consider corporations the size of Microsoft”, but this is nothing directly on Microsoft (I emphasise this as I have been dead set against some ‘issues’ Microsoft dealt us in the past). This is different, and I have two articles that overlap to some degree, but they are not the same and the overlap should be seen subjectively.

The first one is from the BBC (at https://www.bbc.com/news/articles/c4gdnz1nlgyo) where we see ‘Microsoft servers hacked by Chinese groups, says tech giant’. The first thought that overwhelmed me was “Didn’t you get Azure support arranged through China?”, but that stays in the back of my mind. We are given “Chinese “threat actors” have hacked some Microsoft SharePoint servers and targeted the data of the businesses using them, the firm has said. China state-backed Linen Typhoon and Violet Typhoon as well as China-based Storm-2603 were said to have “exploited vulnerabilities” in on-premises SharePoint servers, the kind used by firms, but not in its cloud-based service.” I am wondering about the phrase “not in its cloud-based service”. I have questions, but I am not doubting the quote; to doubt it, one needs in-depth knowledge and to be deeply versed in Azure, and I am not one of those people. As I personally see it, if one is transgressed upon, the opportunity arises to ‘infect’ both, but that might be my wrong take on this. So we are given ““China firmly opposes and combats all forms of cyber attacks and cyber crime,” China’s US embassy spokesman said in a statement. “At the same time, we also firmly oppose smearing others without solid evidence,” continued Liu Pengyu in the statement posted on X. Microsoft said it had “high confidence” the hackers would continue to target systems which have not installed its security updates.” This makes me think about the UN/USA attack on Saudi Arabia regarding that columnist no one cares about, giving us the ‘high confidence’ from the CIA. It sounds like the start of a smear campaign. If you have evidence, present the evidence. If not, be quiet (to some extent).

We then get someone who knows what he is talking about: “Charles Carmakal, chief technology officer at Mandiant Consulting firm, a division of Google Cloud, told BBC News it was “aware of several victims in several different sectors across a number of global geographies”. Carmakal said it appeared that governments and businesses that use SharePoint on their sites were the primary target.” This is where I got to thinking: what is the problem with SharePoint? And then consider the quote “Microsoft said Linen Typhoon had “focused on stealing intellectual property, primarily targeting organizations related to government, defence, strategic planning, and human rights” for 13 years. It added that Violet Typhoon had been “dedicated to espionage”, primarily targeting former government and military staff, non-governmental organizations, think tanks, higher education, the media, the financial sector and the health sector in the US, Europe, and East Asia.”

It sounds ‘nice’, but it flows towards thoughts like ““related to government, defence, strategic planning, and human rights” for 13 years”. So where was the diligence in preventing issues with SharePoint and in cyber crime prevention? Consider that we are given “SharePoint hosts OneDrive for Business, which allows storage and synchronization of an individual’s personal work documents, as well as public/private file sharing of those documents.” That quote alone should have driven the need for much more rigorous cyber checks. And perhaps they were done, but as I see it, the result has been unsuccessful. It made me (perhaps incorrectly) think that with so many programs covering desktops, laptops, tablets and mobiles over different systems, a lot more cyber requirements should have been in place. Perhaps they are, but it is not working, and as this solution has been in place for close to two decades, with 13 years of attempted transgression, the solution does not seem to be safe.

And the end quote: “Meanwhile, Storm-2603 was “assessed with medium confidence to be a China-based threat actor””. As such, we stepped away from ‘high confidence’, making this setting a larger issue. And my largest issue is that when you search for “Linen Typhoon” you get loads of links, most of them no older than 5 days. If they have been active for 13 years, I should have found a collection of articles close to a decade old, but I never found them. Not in over a dozen pages of links. Weird, isn’t it?

The next part comes from TechCrunch (at https://techcrunch.com/2025/07/22/google-microsoft-say-chinese-hackers-are-exploiting-sharepoint-zero-day/) where we are given ‘Google, Microsoft say Chinese hackers are exploiting SharePoint zero-day’, and this matters because it is a zero-day, which means “The term “zero-day” originally referred to the number of days since a new piece of software was released to the public, so “zero-day software” was obtained by hacking into a developer’s computer before release. Eventually the term was applied to the vulnerabilities that allowed this hacking, and to the number of days that the vendor has had to fix them.” With SharePoint dating back to 2001, this implies the underlying code has been in circulation for some 23 years. And this implies a much larger issue, as the software solution is set over iOS, Android and Windows Server. Microsoft was eager to divulge that this solution was ‘available’ to over 200 million users as of December 2020. As I see it, the danger and damage might be spread over a much larger population.

Part of the issue is that there is no clear path for the vulnerability. Consider the image below (based on a few speculations on how the interactions go).

I get at least 5 danger points, and if there are multiple servers involved, there will be more. And as we are given “According to Microsoft, the three hacking groups were observed exploiting the zero-day vulnerability to break into vulnerable SharePoint servers as far back as July 7. Charles Carmakal, the chief technology officer at Google’s incident response unit Mandiant, told TechCrunch in an email that “at least one of the actors responsible” was a China-nexus hacking group, but noted that “multiple actors are now actively exploiting this vulnerability.”” I am left with questions. You see, when was this zero-day exploit introduced? If it was ‘seen’ as per July 7, how long had the danger been in this system solution? There is also a lack in the BBC article when it comes to properly informing people. You cannot hit Microsoft with a limited information setting when the stakes are this high. Then there is the setting of what makes the linen sheets (Linen Typhoon) and the purple storm (Violet Typhoon) guilty as charged (charged might be the wrong word), and what makes the March 26th heavy weather (Storm-2603) guilty?

I am not saying they cannot be guilty; I am seeing a lack of evidence. I am not saying that the people investigating should ‘divulge’ all, but more details might not be the worst idea. And I am not blaming Microsoft here. I get that there is (a lot) more than meets the eye (making Microsoft a Constructicon), but the lack of information makes the setting one of misinformation, and that needs to be said. This optional zero-day bug is one that is riddled with missing information.

So then we get to the second article, which also comes from the BBC (at https://www.bbc.com/news/articles/czdv68gejm7o), giving us ‘OpenAI and UK sign deal to use AI in public services’ where we get “OpenAI, the firm behind ChatGPT, has signed a deal to use artificial intelligence (AI) to increase productivity in the UK’s public services, the government has announced. The agreement signed by the firm and the science department could give OpenAI access to government data and see its software used in education, defence, security, and the justice system.” Microsoft put billions into this, so this is a connected setting. How long until the personal data of millions of people is out in the open for all kinds of settings?

So we are given “But digital privacy campaigners said the partnership showed “this government’s credulous approach to big tech’s increasingly dodgy sales pitch”. The agreement says the UK and OpenAI may develop an “information sharing programme” and will “develop safeguards that protect the public and uphold democratic values”.” So, data sharing? Why not get another server setting, with the software solution also set to a government server? When a sales person tells you that there will be ‘additional safeties installed’, know that you are being bullshitted. Microsoft made similar promises in 2001 (Code Red) and even today the systems are still getting trespassed on, and those are merely the hackers. The NSA and other American government agencies get near clean access to all of it, and that is a problem with American-based servers. And even here, there is only so much that the GDPR (General Data Protection Regulation) allows for, and I reckon that there are loopholes for training data. As such I reckon that the people in the UK will have to set a name-and-shame setting with mandatory prosecution for anyone involved with this caper, going all the way up to Prime Minister Keir Starmer. So when you see mentions like ““treasure trove of public data” the government holds “would be of enormous commercial value to OpenAI in helping to train the next incarnation of ChatGPT””, I would be mindful about handing over or giving access to this data; do not let it out of your hands.

The link between the two is now clear. Data and transgressions have been going on since before 2001, and when these two settings meet and data gets ‘trained’, we are likely to see more issues. And when Prime Minister Keir Starmer goes “we’re sorry”, you better believe that the time has come to close the tap and throw Microsoft out of the windows in every governmental building in the Commonwealth. I doubt this will be done, as some sales person will heel over like a little bitch and your personal data will become the data of everyone who is mentionable. They will then select the population that has value for commercial corporations, and the rest? The rest will become redundant by natural selection according to the value base of corporations.

I get that you think this is now becoming a ‘conspiracy based’ setting and you resent it. I get that, I honestly do. But do you really trust UK Labour after they wasted 23 billion pounds on an NHS system that went awry (several years ago)? I have a lot of problems showing trust in any of this. I do not blame Microsoft, but the overlap is concerning, because at some point it will involve servers and transfers of data. And it is clear there are conflicting settings. When someone learns to aggregate data and connect it to a mobile number, your value will be determined. And as these systems interconnect more and more, you will find that you face not a handful of identity threats over a lifetime, but identity theft and value assessment once per X amount of days, and as X decreases, you can pretty much rely on the fact that your value becomes debatable. I reckon this setting shows the larger danger, where one side sees your data as a treasure trove and the other claims to “deliver prosperity for all”. That, and the diminished setting of “really be done transparently and ethically, with minimal data drawn from the public”, is the foundation of nightmares, mainly as “minimal data drawn from the public” tends to have a larger stage: it is set to whatever is needed to aggregate to other sources, which lacks protection at the larger end. And when we consider that any actor could get these two connected (and sell on), it should be considered a new kind of national security risk. America (and the UK) are already facing this as these people left for the Emirates with their billions. Do you really think that this was the setting? It will get worse as America needs to hang on to any capital leaving America; do you think that this is different for the UK? Now, you need to consider what makes a person wealthy. This is not a simple question, as it is not the bank balance; it is an overlap of factors.
Consider that you have 2000 people who enjoy life and 2000 who are health nuts. Who do you think is set at a higher value? The insurance person states the health nut (insurance without claims); the retailer states the people who spend and live life. And the (so called) AI system has to filter it down to 3000 people. So, who gets disregarded from the equation? That cannot be done until you have more data, and that is the issue. And the equation is never this simple; it will be set to thousands of elements, and these firms should not have access. As such I fear for the data making it beyond UK borders.
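If you want to see how subjective that ‘value’ really is, here is a small sketch. All names, weights and numbers are mine and purely hypothetical; it merely shows two different scorers filtering the same 4000 people down to 3000:

```python
import random

random.seed(7)

# Hypothetical profiles: 2000 "enjoy life" spenders and 2000 health nuts.
# Spenders spend more and claim more; health nuts the reverse.
profiles = (
    [{"type": "spender", "spend": random.uniform(800, 2000),
      "claims": random.randint(1, 5)} for _ in range(2000)]
    + [{"type": "health_nut", "spend": random.uniform(100, 600),
        "claims": random.randint(0, 1)} for _ in range(2000)]
)

def retailer_value(p):
    # A retailer prizes monthly spend.
    return p["spend"]

def insurer_value(p):
    # An insurer prizes the absence of claims.
    return 10.0 / (1 + p["claims"])

def top_n(people, score, n=3000):
    # Keep only the n "most valuable" profiles under the given equation.
    return sorted(people, key=score, reverse=True)[:n]

retail_pick = top_n(profiles, retailer_value)
insurer_pick = top_n(profiles, insurer_value)

retail_spenders = sum(p["type"] == "spender" for p in retail_pick)
insurer_spenders = sum(p["type"] == "spender" for p in insurer_pick)
print(retail_spenders, insurer_spenders)
```

Same people, same data: the retailer’s equation keeps all 2000 spenders, the insurer’s equation keeps all 2000 health nuts, and the 1000 who get cut depend entirely on whose equation runs that day.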

A setting coming from overlaps, and none of this is the fault of Microsoft, but they will be connected to (and optionally blamed for) all this. As I personally see it, the two elements that matter in this case are “Digital rights campaign group Foxglove called the agreement “hopelessly vague”” and “Co-executive Director Martha Dark said the “treasure trove of public data” the government holds” will be a significant danger to public data, because greed-driven people tend to lose their heads over words like ‘treasure trove’ and that is where ‘errors are made’. I reckon it will not take long before the BBC or another media outlet trips over the settings, making the optional claim that ‘glitches were found in the current system’ and no one was to blame. Yet that will not be the whole truth, will it?

So have a great day and consider the porky pies you are told and who is telling them to you, even should you consider that it is me. Make sure that you realise that I am merely telling you what is out in the open and what you need to consider.

Leave a comment

Filed under Finance, IT, Law, Media, Politics, Science

All Is Not Over

Yup, that is the setting and it is a conundrum to say the least. Before I go into the explaining, I might need to refresh a few minds. There is no AI; artificial intelligence doesn’t exist (yet). As I see it, three components are missing and that is fine. We are making headway in this, and some have one element in place; the other two are missing. So I have been speaking out against those AI ‘losers’ and it seems that no one else is listening. That’s fair. Why would you believe me over dozens of greed-driven sales people? Then this morning (way too early) I saw something pass by on LinkedIn. It was brilliant, I never thought of this (I do miss parts at times) and the image below

Gives you the goods. Consider the ‘constraints’ of an actual AI. Consider the constraints of 5 AI’s. Now, I take the assumption that this was all on the up and up. It is a leap, I know this. For all I know, the poster was yanking all our chains, so you can test this yourself. Take a room with 5 strangers, ask them the simple question to pick a random number between 1 and 50 and write it on a piece of paper. Then have them all show their papers at the same time. Now, if they are different people (I am referring to the old joke that all teenage boys will come up with 69, dude, and this was averted with the range 1-50), but seriously: take 5 random people and optionally 2 might have the same answer, but for all 5 ‘proclaimed’ AI systems to give the same number is utterly impossible.

Is it?
Well, that is the question. If they are founded on the same algorithm, there are optional gaps; then there is the setting that the data is founded on exactly the same amounts, and as such I say impossible. A computer (any computer) has logic, hardware, algorithms and data. If they were all identical (which does not seem the case) the answers should be the same. But to get 5 identical answers is a drastic setting, after all, big tech is shedding jobs due to AI. If the image was true, the larger truth is that companies need to shed jobs as they foresee a much larger economic clash. I was already on that page, but now more can get there too. Consider that this so-called AI is being pushed onto support and customer care. Now consider that they all have the same flaws and errors. How many support and customer care jobs will set companies to collapse? It is an honest question. Where do you go when the company you are giving your money to is merely walking the beat towards average? A place where populism (aka the statistically most viable answer) is given?
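For those who want the numbers behind my ‘utterly impossible’, a quick sketch, assuming independent and uniform picks from 1 to 50 (the generous assumption), gives the odds:

```python
from fractions import Fraction

# Chance that all 5 independent uniform picks from 1..50 coincide:
# the first pick is free, the other four must match it.
p_all_same = Fraction(1, 50) ** 4   # 1/6250000

# Chance that at least two of the five collide (the "2 might have the
# same answer" intuition), via the birthday-problem complement.
p_no_collision = Fraction(50 * 49 * 48 * 47 * 46, 50 ** 5)
p_any_pair = 1 - p_no_collision

print(p_all_same)
print(float(p_any_pair))   # roughly 0.186
```

So two of five matching is not strange at all (close to a 1 in 5 chance), but all five matching is a one in 6.25 million event, per round.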

A setting where we are merely the crunched number and not given the excellent quality support we are entitled to? I am not kidding, this is the setting all the big tech companies are going for. All to look good on paper, and that is what I see evolving. You can bitch all you like about Microsoft and Builder.ai, where seemingly the AI work was done by 700 engineers. Microsoft backed the solution, all whilst there was nothing to be seen on how those 700 engineers were supported by hardware and software. Then we get to all systems with verifications, and these elements should reveal that if AI was the real deal, these systems could never have all given the same random number.

So, all is not over. For the simple reason that if this happens, these companies need to find 61,000 people, and this gives me the setting that dataconomy.com gives with “Microsoft’s Chief Commercial Officer, Judson Althoff, stated this week that the company saved over $500 million in its call center last year through the use of AI tools. This announcement follows a series of internal remarks concerning productivity gains across sales, customer service, and software engineering, as reported by Bloomberg during a recent presentation.” When these tools start bungling their job as the data becomes an issue (I see the 5 random numbers as ‘evidence’), you see, you cannot have one and not the other. I am mentioning Microsoft in this case as the quote was there, but as I see it, IBM, Amazon and Google will all have the same issue soon enough. And the first one that realizes this will get the first grab at the 61,000 people, and the last one gets the least impressive people of the bunch. And at what point will someone figure out what the price tag is on the $500 million in savings?
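A quick back-of-envelope, taking the quoted $500 million and the 61,000 headcount above at face value (neither figure verified by me):

```python
# Claimed annual saving versus the people who would need to be found
# again if the tools bungle the job. Both figures as quoted, unverified.
savings = 500_000_000
headcount = 61_000
per_head = savings / headcount
print(round(per_head))   # roughly 8200 dollars per role, per year
```

Roughly $8,200 of claimed saving per head. A support salary costs a multiple of that, so if the people must be found again, the rehiring bill dwarfs the saving.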

It is a setting without any good end. And in the end, if the setting was faked, my conclusions are equally debatable. I will disagree, as I came to this point through different means and this example was merely the icing on the cake. And I love it, because I never thought of this setting. We all miss things and I am no different. So I laughed when I saw the article, as the example given was nothing short of ‘quite excellent’. As such I start the day with a smile, as I enjoy being pointed at overlooking an element. That’s the person I am.

So, you all have a great day as I am starting fish day today (from a young age I was told Friday is fish day). Did the AI you are all embracing give you that translation and the reason why? It is a mere jab at the setting, as this reinforces the verification of data. A setting I saw to be the Achilles heel of that Stargate project. It is a mere $500,000,000,000 project, but that will not stop me from illustrating the situation. And whilst others say that I don’t have the power to do anything, I merely counter that these centers are unlikely to have the power to keep going; you see, power is more than an element, it becomes the biggest evil of the lot.

As stated, have a great day.

Leave a comment

Filed under Finance, IT, Science

The call for investors

That is at present the larger setting: everyone wants investors and they all tend to promise the calf with the golden horns. As I see it, investing in gold mining, oil extraction and a few others are near dead certain returns on investment. The larger group will seemingly want to invest in AI, the new hype word. Still, considering that Builder.ai went from a billion plus to zilch is a nice example of what Microsoft-backed solutions tend to give. You see, the larger picture that everyone is ignoring is that it was backed by Microsoft. Now, this might be OK, because Microsoft is a tech company. But consider that Builder.ai (previously known as Engineer.ai) was supposed to be all ‘good’, yet the media now reports ‘Builder.ai Collapsed After Finding Sales ‘Inflated By 300 Percent’’. This leads me to believe that there was a larger problem with this DML/LLM solution. Another source gives us ‘Builder.ai’s Collapse Exposes Deceptive AI Claims, Shocking Major Investors’ and another gives us ‘Builder.ai collapse exposes dangers of ‘FOMO investing’ in AI’, yet that is nothing compared to what I said on November 16th 2024 in ‘Is it a public service’ (at https://lawlordtobe.com/2024/11/16/is-it-a-public-service/) where I stated “a US strategy to prevent a Chinese military tech grab in the Gulf region” and “it is my insight that this is a ticking clock. One tick, one tock leading to one mishap and Microsoft pretty much gives the store to China. And with that Aramco laughingly watches from the sidelines. There is no if in question. This becomes a mere shifting timeline and with every day that timeline becomes a lot more worrying.” With the added “But several sources state “There are several reasons why General AI is not yet a reality. However, there are various theories as to why: The required processing power doesn’t exist yet. As soon as we have more powerful machines (or quantum computing), our current algorithms will help us create a General AI””, or to some extent.
Marketing the spin of AI does not make it so. You see, the entire DML/LLM is not AI, as we can see from the builder.ai setting (a little presumptuous of me), but there is the setting that we got inflated sales, and then the Register ended their article with “The fact that it wasn’t able to convince enough customers to pay it enough money to stay solvent should give pause to those who see generative AI as a replacement for junior developers. As the experience of the unfortunate Microsoft staffers having to deal with the GitHub Copilot Agent shows, the technology still has some way to go. One day it might surpass a mediocre intern able to work a search engine, but that day is not today.” Perhaps “the technology still has some way to go” is astute and to the point, but it is not the larger problem. It reminded me of the old market research setting: take a bucket of data and let MANOVA sort it out. The idea that a layman can sort it out is hilarious. Over the last half a century I have met fewer than a dozen people who knew what they were doing. These people are extremely rare. So whenever I heard a student tell me that they had a good solution with MANOVA, my eyes were tearing with howls of derisive laughter. And now we see a similar setting. But the larger setting is not merely the coded setting of DML and LLM. It is the stage where data is either not verified, or verified in the most shallow of situations. And now consider that stage with a 500 billion solution. Data is everything there, and verification is one part of that key, a key too many are setting aside because it is not sexy enough.
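The ‘bucket of data’ danger does not even need MANOVA to show itself. A simple sketch (my own toy setup, pure noise, no real data) shows how unguided comparisons ‘find’ results:

```python
import random
import statistics

random.seed(42)

def z_compare(n=50):
    """Compare two samples drawn from the SAME distribution.

    Any 'difference' found here is noise by construction.
    """
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Under the null hypothesis this z statistic is standard normal.
    z = (statistics.mean(a) - statistics.mean(b)) / ((2 / n) ** 0.5)
    return abs(z) > 1.96   # the usual 5% "significance" threshold

trials = 2000
false_hits = sum(z_compare() for _ in range(trials))
print(false_hits / trials)   # hovers around 0.05
```

Roughly one in twenty comparisons on pure noise crosses the usual threshold. Hand a layman a bucket of data and enough tests, and ‘significant’ findings are guaranteed; that is the shallow verification I am on about.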

And now we get to the investors who are in a “Fear Of Missing Out”; for them I have a consolation prize. You see, RigZone gave me (at https://www.rigzone.com/news/adnoc_suppliers_pledge_817mm_investment_for_uae_manufacturing-27-may-2025-180646-article/) hours ago ‘ADNOC Suppliers Pledge $817MM Investment for UAE Manufacturing’, and as I see it oil is a near certainty for achieving ROI. As everyone is chasing the AI dream (which of course does not exist yet), those greedy, hungry money people are looking away from the certainty piggybank (as I personally see it), and that kind of investment in manufacturing will bring products, sellable products, and in the petrochemical industry that is like butter with the fish. A near certainty on investment. I prefer the expression ‘near certainty’ as there is always some risk, yet as I see it, ARAMCO and ADNOC are setting the bar of achievement high enough to get that done. And as I see it, “ADNOC said the facilities are situated throughout the Industrial City of Abu Dhabi (ICAD), Khalifa Economic Zones Abu Dhabi (KEZAD), Dubai Industrial Park, Jebel Ali Free Zone (JAFZA), Sharjah Airport International Free Zone (SAIF Zone), and Umm Al Quwain. They will generate over 3,500 high-skilled jobs in the private sector and produce a diverse array of industrial goods such as pressure vessels, pipe coatings, and fasteners.” As such, the only danger is that ADNOC will not be able to fill the positions, and that is at present the easiest score to settle.

So as we see the call for investors coming from the sound of a dozen bugles, remember the old premise that getting the call from a setting that works beats the golden horns that some promise, and the investors will need another setting (or so I figure). And in the end, the larger question is why builder.ai was backed in the first place. Microsoft has a setting with OpenAI, and as one source gives me, “Microsoft and OpenAI have a significant partnership, where Microsoft is a major investor and supports OpenAI’s advancements, and OpenAI provides access to powerful language models through Microsoft’s Azure platform. This partnership enables Azure OpenAI Service, which provides access to OpenAI’s models for businesses, and it also includes a revenue-sharing agreement.” I cannot vouch for the source, but the idea is: when this is going on, why go to it with builder.ai? And was builder.ai vetted? The entire setting is raising more questions than I normally would have (sellers have their own agenda, and including Microsoft in this is ‘to them’ a normal setting). I do not oppose that, but when we see this interaction, I wonder how dangerous that Stargate will be, and $500,000,000,000 ain’t hay.

And going back to ADNOC, we see “ADNOC’s commercial agreements under the In-Country Value (ICV) program have enabled facilities that allow businesses to benefit from diverse commercial opportunities, the company said. The ICV program aims to manufacture AED90 billion ($24.5 billion) worth of products locally in its procurement pipeline by 2030.” More impressive is the quote “ADNOC’s ICV program has contributed AED242 billion ($65.8 billion) to the UAE economy and created 17,000 jobs for UAE nationals since 2018, according to the company.” You see, such a move makes sense, as the UAE produces 3.22 million barrels per day, a level achieved from 2024 onward, and some say that they exceeded their quota (by how much is unknown to me). That makes sense as an investment; the entire fictive AI setting does not, and ever since the builder.ai setting it makes a lot less sense, if not for the simple reason that no one can clearly state where that billion plus went. Oh, and how many investments collapsed, and who were those investors? Simple questions really.

Have a great day and try not to chase too many Edsels with your investment portfolio.

Leave a comment

Filed under Finance, IT, Media, Science

New short term thinking

The news hit me somewhere yesterday. I got it by means of a LinkedIn mention, and it gave me reason to pause. Here is one version of that news (at https://techwireasia.com/2025/04/microsoft-pauses-key-builds-in-indonesia-us-and-uk-amid-infrastructure-review/) with the mention ‘Microsoft pauses data centre investment in Indonesia, US, and UK’, and here we see the byline “Microsoft pauses or delays data centre projects in the UK, US, and Indonesia.” It is my view that they cannot afford this setting. You might have heard the American expression “Go big or go home”, and I think that Microsoft is about to go home. You see, I have forever held the clear opinion that there is no AI. I call it NIP (Near Intelligent Parsing). If too many start accepting the setting that I was always right (which comes from the clear setting that there is one AI foundation and it was given to us by Alan Turing), the people will realise that there is no AI and it comes down to programming and a programmer. That setting puts Microsoft in hot water, with a lot of heavy water to be poured over their heads. And let’s be clear, this is a side you can confirm with mere logical thinking. A data centre is a long-term setting. No matter what you put in the White House (by some called the village idiot), whatever this administration is, it is short term, a data centre is long term, and that so-called hype around their AI should never waver. You see, this short-term action (read: knee-jerk reaction) implies short-term planning, and that is where they all get into hot water. Why did you think that I made mention that Google needs to put a data centre in Iceland and consolidate their thinking into geothermal reactors? (Reactors might not be the right word.)
A setting where ceramic tiles (or cylinders) surround new constructions, not unlike a nuclear reactor, but the reactor is all around them; no uranium rods, the lava (or magma) is the power source, and as it is merely bleeding off heat, the fuel never dissipates and never-ending energy is theirs. All these parties looking at creating data centers (as far as I can see, around 50 in total globally) will require energy, and as one data centre takes energy close to the amount a small city does, we will get energy issues a lot sooner than we think.
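To put a rough number on that, using assumed round figures (a hyperscale data centre is often put at around 100 MW, and I count roughly 50 planned centres; both are my assumptions, not reported values):

```python
# Rough power arithmetic with assumed round figures.
centre_mw = 100          # assumed draw of one hyperscale data centre
centres = 50             # rough global count of planned centres
total_gw = centre_mw * centres / 1000
print(total_gw)          # gigawatts needed just to keep them running
```

Around 5 GW, on the order of several large power stations, just to keep the lot running.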

Did Microsoft think this through? Pretty sure they did, and their conclusion is that they cannot spend billions on data centers. So at the same time as we are given “Rivals Oracle and OpenAI ramp up investments”, I come to the conclusion that Microsoft can no longer afford the bills their egos committed them to. Feel free to disagree, but they set out this AI ‘vibe’ and hold a 49% stake in OpenAI’s profits, so why pause their data centers whilst they ‘own’ part of one of the ramp-up partners? They are figuring out that they are too deeply committed. And as the world realizes that NIP is not the same as actual AI, they fear what is coming next.

So you decide what to make of the stage of “Microsoft has acknowledged changing its strategy but declined to provide details about specific projects. “We plan our data centre capacity needs years in advance to ensure we have sufficient infrastructure in the right places,” a Microsoft spokesperson said. “As AI demand continues to grow, and our data centre presence continues to expand, the changes we have made demonstrates the flexibility of our strategy.”” As I see it, it is an answer, but not one that touches on this. I come with questions such as ‘What growth?’ All this sets the need for some lowered activity, not pausing, unless you know what comes next. And there is a larger setting with Oracle, Tencent and Huawei (I know there is a Swedish centre as well, but I forgot the name). All these are ramping up, but Microsoft is pausing? That makes no sense unless there is another reason, and my thought of “they can no longer afford it” takes another gander when we consider that they paused “North Dakota, Illinois, Wisconsin, the UK midlands and Jakarta, Indonesia.” That implies something is going on, and when we combine this with “Microsoft cuts data centre plans and hikes prices in push to make users carry AI costs” (source: The Conversation, March 3rd 2025), these elements together imply (imply, not prove) that there is a funding issue for Microsoft. Combine that with the lovely voiced fact of “OpenAI brought in US$3.7 billion in revenue – but spent almost US$9 billion, for a net loss of around US$5 billion.” (source: The Conversation) and we see another failed setting, and that failure gets to be bigger. As Amazon, Google, Oracle, Tencent and Huawei steam ahead, getting larger data centers ready long before Microsoft is there, that means less revenue for Microsoft. I did say that they could go big or go home. I reckon that Microsoft already lost 6 times on front settings: they lost to Amazon, Apple (twice), Sony, Adobe, Google, and IBM.
I should add Huawei to that list but they already bungled that setting before Huawei became an actual competitor. A simple deduction from little stupid old me. 

So whatever you do, you might look into the trust you gave Microsoft and make sure you are not left with an empty shell. Oh, and to prove that I am not anti-Microsoft, you need to know that they did corner the spreadsheet market (Excel) and the Flight Simulator market. Microsoft did some things well, but when it comes to the spin setting of vibes, they need to reassess their situation.

Have a great day, it’s midweek now. I am happily in the next day.

Leave a comment

Filed under Finance, IT, Media, Science

When words become data

There is an uneasy setting. I get that. You see, AI does not exist, and whilst we all see the AI settings develop and some will be betting (read: gambling) 500 billion dollars on that topic, we now see that META is banking on 200 billion on the stage. But what is this stage? We can turn to Reuters, which gives us ‘Meta in talks for $200 billion AI data center project, The Information reports’ (at https://www.reuters.com/technology/meta-talks-200-billion-ai-data-center-project-information-reports-2025-02-26/) where we are given “A Meta spokesperson denied the report, saying its data center plans and capital expenditures have already been disclosed and that anything beyond that is “pure speculation”” However, when we set the stage on a different shoe, we see another development. You see, when we think of this in non-AI terms, we get that a data centre generally ranges from $10 million to $200 million, with a typical commercial data centre costing around $10-12 million per megawatt of power capacity; smaller data centres can cost as little as $200,000 to build. So consider that the upper range of a data centre is $200 million. What kind of data centre gives the need to be a thousand times bigger? Now, consider that there are enough people clarifying that AI does not exist. I see AI as what some people call True AI, and that springs from the mind of Alan Turing. He set the premise of AI some three-quarters of a century ago. And whilst some of the essential hardware is ready, there are still parts missing. Yet what some now call AI is merely Deeper Machine Learning, and it gets help from an LLM. This setting requires huge amounts of data, and that data comes from a data centre. So what on earth is META up to? Why would it need a data centre a thousand times bigger? The only size that makes sense for 200 billion is a data centre that could gobble up whatever Microsoft has, as well as Google’s data centres, in one fell swoop, and that is merely the beginning.
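To put the arithmetic in one place, here is a quick back-of-the-envelope sketch using only the cost estimates quoted above; the figures are taken from the text, not from any META disclosure:

```python
# Back-of-the-envelope: how far does a $200 billion budget go,
# using the per-centre and per-megawatt estimates quoted above?

upper_cost_per_centre = 200_000_000    # upper range of a commercial data centre, USD
cost_per_megawatt = 11_000_000         # midpoint of the $10-12M per MW estimate, USD
reported_budget = 200_000_000_000      # the META project figure reported by Reuters, USD

centres = reported_budget / upper_cost_per_centre
megawatts = reported_budget / cost_per_megawatt

print(f"{centres:.0f} top-end data centres")           # 1000 top-end data centres
print(f"{megawatts:.0f} MW of conventional capacity")  # 18182 MW of conventional capacity
```

The thousand-times factor in the text is exactly this first division: $200 billion buys a thousand data centres at the very top of the conventional price range.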

Speculation
The next part is speculation, I openly admit that. So when (not if) America defaults on their loans, we get an implosion of current wealth, and the new wealth will be data. Data will in the near future be the currency that all other parties accept. As such, is META preparing for a new currency? As I see it, the simplest setting is that whoever has the most data will be the richest person on the planet, and that would make sense; it explains Trump’s 500 billion for a data centre, and now META is following suit. You see, Zuckerberg is really intelligent. I saw that setting 5 years before Facebook existed, but my boss told me that my idea was ludicrous, that it would never work. Now we see my initial idea spread all over the planet, with every marketing organisation on the planet champing at the bit to get their slice of the pie. So Zuckerberg does have the cojones and the drive to proceed. When data is currency, they will be one of the few players in the new economy. And when you take my speculation (possibly even insightful presumption), these data centres make sense, and being able to set predictive data learned from active and historical data makes sense in a very real way. Predictive data will be the wave of the future. It still is not AI, but it is in very real ways the next step in data needs. Predictive analytics set the path of this wave 1-2 decades ago. And now we see more data transformations, and when the main roads are dealt with, the niche markets can be predicted and seen in very real ways.

And the stage is more real than you can see. When people like Zuckerberg are cashing out to get their data centers up and running, there is a real drive to be first to cash in. As I see it, my next step would be to score a job with a data centre doing mere maintenance and support work. You see, as all these big players evolve their needs, their manpower will need to come from infrastructures that these data centers require. So support and power will have the greatest staffing needs in the next decade. Just my thoughts on the matter.

Have a lovely day today

Leave a comment

Filed under Finance, IT, Media, Science

And the bubble said ‘Bang’

This is what we usually see, or at times hear as well. Now, I am not an AI expert, not even a journeyman in the ways of AI, but the father of AI, namely Alan Turing, set the stage for AI. He was that good: he set the foundation of AI in the ’50s, half a century before we were able to get a handle on this. Oh, and in case you forget what he looks like, he has been immortalised on the £50 note.

And as such I feel certain that there is no AI (at present), and now this bubble comes banging on the doors of big tech as they just lost a trillion dollars in market value. Are you interested in seeing what that looks like? Well, see below and scratch the back of your heads.

We start with Business Insider (at https://markets.businessinsider.com/news/stocks/tech-stock-sell-off-deepseek-ai-chatgpt-china-nvidia-chips-2025-1) where we are given ‘DeepSeek tech wipeout erases more than $1 trillion in market cap as AI panic grips Wall Street’ and I find it slightly hilarious as we see “AI panic”; you see, bubbles have that effect on markets. This takes me back to 2012, when the Australian Telstra had no recourse at that point to let the waves of 4G work for them (they had 3.5G at best), so what did they do? They called the product 4G, problem solved. I think they took some damage over time, but they prevented others taking the lead as they were lagging to some extent. Here, in this case, we are given “US stocks plummeted on Monday as traders fled the tech sector and erased more than $1 trillion in market cap amid panic over a new artificial intelligence app from a Chinese startup.” Now let me be clear, there is no AI. Not in America and not in China. What both do have is Deeper Machine Learning and LLMs, and these parts would in the end be part of a real AI. Just not the primary part (see my earlier works). What has happened (me being speculative) is that China had an innovative idea for Deeper Machine Learning and packaged it innovatively with LLM modules so that the end result is a much more efficient system.
The Economic Times (at https://economictimes.indiatimes.com/markets/stocks/news/worlds-richest-people-lose-108-billion-after-deepseek-selloff/articleshow/117615451.cms) gives us ‘World’s richest people lose $108 billion after DeepSeek selloff’. What is more pertinent is “DeepSeek’s dark-horse entry into the AI race, which it says cost just $5.6 million to develop, is a challenge to Silicon Valley’s narrative that massive capital spending is essential to developing the strongest models.” So all these ‘vendors’, and especially President Trump, face the setting of “Emergence of cheaper Chinese rival has wiped $1tn off the value of leading US tech companies” (source: the Guardian). And with the Stargate investment on the mark for about 500 billion dollars, it comes as a lightning strike. I wonder what the world makes of this. In all honesty, I do not know what to believe, but with the setting of DeepSeek, the game will change. First, there are dozens of programmers who need to figure out how the cost cutting was possible. Then there is the setting of what DeepSeek can actually do, and here is the kicker: DeepSeek is free, so there will be a lot of people digging into that. What I wonder is what data is being collected by the Chinese artificial intelligence company Hangzhou DeepSeek Artificial Intelligence Co., Ltd. It would be my take on the matter. When something is too cheap to be true, you better believe that there is a snag in the road making you look precisely in the wrong direction. I admit it is the cynic in me speaking, but in the stage where they made a solution for 6 million (not Lee Majors) against ChatGPT coming in at 100 million, the difference is just too big and I don’t like the difference. I know I might be all wrong here, but that is my initial take on the matter.

If it all works out, there is a massive change in the so-called AI field. A Chinese party basically sunk the American opposition. In other news, there is possibly reason to giggle here. You see, Microsoft invested nearly $14 billion in OpenAI mere months ago, and now we see that someone else did it at 43% of the investment. After all the hassles they had (Xbox), they shouldn’t be spending recklessly; I get it, they merely all had that price picture, and now we see another Chinese firm playing the super innovator. It is making me giggle. In opposition to this, we see all kinds of players (Google, IBM, Meta, Oracle, Palantir) playing a similar game of what some call AI, and they have set the bar really high. As such, I wonder how they will continue the game if it turns out that DeepSeek really is the ‘bomb’ of Deeper Machine Learning. I reckon there will be a few interesting weeks coming up.

Have fun, I need to lie still for 6 hours until breakfast (my life sucks).

3 Comments

Filed under Finance, IT, Media, Politics, Science

Out of the blue

That is what happened. I had a stream of ideas out of the blue. I do not know what fuelled this. Was it reading about the failures of Ubisoft? Was it another setting? My mind went racing and I went back to 1995 and Tia Carrere. In that year she was part of The Daedalus Encounter. It was a fun game and I had fun playing it. But then a thought came to me. That game in 2024 could open other doors. Doors opened through machine learning and deeper machine learning (AI does not yet exist). The track my mind went down was interesting. You see, the movie world made rules for (what they call) AI. But this setting might not completely apply to games.

Now consider the first stage of creating these kinds of games using the technology complemented with Unreal Engine 5. We can make new versions of Rama and the Infocom games, but now not as text games; more like Zork Nemesis, with actors and actresses. Infocom created more than 20 games, and they could now entice a much larger following. As the games develop, new technology would also develop in creating games. The larger fun of this is that many more developers will get a handle on this form of game development.

That brought me to the next level. In 1984, The Dallas Quest was developed. As such, Datasoft created “one of the best games out on the CBM 64” and it held sway over pretty much the entire gaming community, even those who didn’t follow Dallas (example: me). We now have the technology for streaming systems to hold the sway of all who love this level of games. That wasn’t the only setting. You see, players like Netflix could optionally create a new level of games using these technologies. The setting of these new options could set in motion a new form of gaming. Consider what was. And now take another direction: creation of these kinds of games using TV series. Grimm, Babylon 5, Charmed, Buffy, Dollhouse and several other series that have been discontinued. Now consider the implementation of ChatGPT, with a library for every character of that series. Now we get a new technology: a game where the player can be any character in that series, and the interactions will shape the ‘episode’ of that game. That trend could be pushed forward. Now consider another venue for these games. Egyptian Musalsalat: A Social Construction of Reality has strength all over the Arabic world. Now take these elements and build the new template: an interactive game where the player decides on the route of the episode. In The Dallas Quest we needed to make choices, like finding the football tickets in the lobby; if not, you got stuck in the game. Now machine learning will be able to avoid getting stuck. And the game can evolve even further. Consider the setting that Grimm has; millions of fans still love this series. Now they can continue their TV fling in this new direction. Consider the streaming solution, and consider that I gave the option of 200 million consoles with the directions before I came up with this. Now it could become a whole new dimension of gaming.
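The idea above, a library per character whose interactions shape the ‘episode’, could be sketched roughly as follows. This is entirely hypothetical: `CharacterModel` is a stub standing in for a real LLM backend, and the names and personas are invented for illustration:

```python
# Hypothetical sketch: one dialogue "library" per TV-series character,
# where the player's prompts and the characters' replies build the episode.
# CharacterModel.respond is a stub; a real system would call an LLM here.

class CharacterModel:
    def __init__(self, name, persona):
        self.name = name
        self.persona = persona  # style notes a real model would be tuned on

    def respond(self, prompt):
        # Stub reply so the control flow is visible without a model.
        return f"[{self.name}] ({self.persona}) reacts to: {prompt}"

class Episode:
    def __init__(self, characters):
        self.characters = {c.name: c for c in characters}
        self.log = []  # the accumulated log becomes the evolving 'episode'

    def play_turn(self, speaker, prompt):
        reply = self.characters[speaker].respond(prompt)
        self.log.append(reply)
        return reply

cast = [CharacterModel("Nick", "reluctant Grimm"),
        CharacterModel("Monroe", "reformed Blutbad")]
ep = Episode(cast)
print(ep.play_turn("Monroe", "a strange Wesen was seen downtown"))
```

The point of the sketch is the structure, not the dialogue: the player picks any character in the cast, and every turn appends to a shared episode log that the next turn can build on.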

Oh, and whilst you contemplate how Ubisoft blew game after game and delay after delay I came up with this new idea (within two hours). Don’t get me wrong, this will be a complex undertaking and the idea to use the Infocom and the Dallas Quest first enables this technology to grow and to adapt to some sandbox approach. I believe this could entice millions more to the gaming population and it has options over time. There is even the idea that former adventures could be evolved into new versions on a new template in a new shape with new possibilities. What a difference a few hours make.

Have a great day.

Leave a comment

Filed under Gaming, IT, movies

Poised to deliver critique

That is my stance at present. It might be a wrong position to have, but it comes from a setting of several events that come together at this focal point. We all have it; we are all destined for a stage of negativity through speculation or presumption. It is within all of us, and my article 20 hours ago on Microsoft woke something up within me. So I will take you on a slightly bumpy ride.

The first step is seen through the BBC (at https://www.bbc.com/worklife/article/20240905-microsoft-ai-interview-bbc-executive-lounge) where we get ‘Microsoft is turning to AI to make its workplace more inclusive’ and we are given “It added an AI powered chatbot into its Bing search engine, which placed it among the first legacy tech companies to fold AI into its flagship products, but almost as soon as people started using it, things went sideways.” With the added “Soon, users began sharing screenshots that appeared to show the tool using racial slurs and announcing plans for world domination. Microsoft quickly announced a fix, limiting the AI’s responses and capabilities.” Here we see the collective thoughts and presumptions I had all along. AI does not (yet) exist. How do you live with “Microsoft quickly announced a fix”? We can speculate whether the data was warped or not defined correctly, or whether it is a simpler setting of programmer error. And when an AI is that incorrect, does it have any reliability? Consider the old data view we had in the early 90’s: “Garbage In, Garbage Out”. Then we are offered “Microsoft says AI can be a tool to promote equity and representation – with the right safeguards. One solution it’s putting forward to help address the issue of bias in AI is increasing diversity and inclusion of the teams building the technology itself”; as such, consider “promote equity and representation – with the right safeguards”. Is that the use of AI? Or is it the option of deeper machine learning using an LLM model? An AI with safeguards? Promote equity and representation? If the data is there, it might find reliable triggers if it knows where or what to look for. But the model needs to be taught, and that is where data verification comes in: verified data leads to a validated model. As such, to promote equity and representation, the data needs to understand the two settings.
Now we get the harder part: “The term “equity” refers to fairness and justice and is distinguished from equality: Whereas equality means providing the same to all, equity means recognising that we do not all start from the same place and must acknowledge and make adjustments to imbalances.” Now, the term equity is used in all kinds of places, and in real estate it means something different. So what are the chances people mix these two up? How can you validate data when the verification is bungled? It is the simple singular vision that Microsoft people seem to forget: it is mostly about the deadline, and that is where verification stuffs up.

Satya Nadella is about technology that understands us, and here we get the first problem. Consider that “specifically large-language models such as ChatGPT – to be empathic, relevant and accurate, McIntyre says, they needs to be trained by a more diverse group of developers, engineers and researchers.” As I see it, without verification you have no validation; you merely get a bucket of data where everything is collected, and whatever comes out of it becomes an automated mess, hence my objection to it. So as we are given “Microsoft believes that AI can support diversity and inclusion (D&I) if these ideals are built into AI models in the first place”, we need to understand that the data doesn’t support it yet, and to do this, all data needs to be recollected and properly verified before we can even consider validating it.
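The verify-then-validate order I keep insisting on can be shown with a toy filter. The field names and rules here are my own illustration, not anything Microsoft publishes: the point is simply that records failing basic checks never reach the training set at all.

```python
# Toy illustration of 'verification before validation': only records
# that pass basic integrity checks ever reach the model's training set.

def verify(record):
    """Basic integrity checks on a single data record (illustrative rules)."""
    return (isinstance(record.get("text"), str)
            and record["text"].strip() != ""
            and record.get("source") is not None)

def build_training_set(records):
    verified = [r for r in records if verify(r)]
    rejected = len(records) - len(verified)
    return verified, rejected

raw = [
    {"text": "a clean, sourced sentence", "source": "corpus-A"},
    {"text": "", "source": "corpus-A"},            # empty: garbage in...
    {"text": "unsourced scrape", "source": None},  # no provenance at all
]
clean, dropped = build_training_set(raw)
print(len(clean), dropped)  # 1 2
```

Validation (does the resulting model behave as intended?) only makes sense once this gate exists; skip it and "Garbage In, Garbage Out" applies in full.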

Then we get article two, which I talked about a month ago: the Wired article (at https://www.wired.com/story/microsoft-copilot-phishing-data-extraction/) where we see the use of deeper machine learning and are given ‘Microsoft’s AI Can Be Turned Into an Automated Phishing Machine’, yes, a real brain bungle. Microsoft has a tool and criminals use it to get through cloud accounts. How is that helping anyone? Microsoft did not see this kink in their train of thought, and we are given “Michael Bargury is demonstrating five proof-of-concept ways that Copilot, which runs on its Microsoft 365 apps, such as Word, can be manipulated by malicious attackers”; there was no simple approach to stop the system from collecting for, and adhering to, criminal minds. Whilst Windows Central gives us ‘A former security architect demonstrates 15 different ways to break Copilot: “Microsoft is trying, but if we are honest here, we don’t know how to build secure AI applications”’, beside the horror statement “Microsoft is trying”, we get the rather annoying setting of “we don’t know how to build secure AI applications”. And this isn’t some student. Michael Bargury is an industry expert in cybersecurity, seemingly focused on cloud security. So what ‘expertise’ does Microsoft have to offer? People who were there 3 weeks ago were shown 15 ways to break Copilot, and it is all over their 365 applications. At this stage, Microsoft wants to push out a broken, if not unstable, environment where your data resides. Is there a larger need to immediately switch to AWS?

Then we get a two-parter. In the first part we see (at https://www.crn.com.au/news/salesforces-benioff-says-microsoft-ai-has-disappointed-so-many-customers-611296) CRN giving us the view of Marc Benioff from Salesforce: ‘Microsoft AI ‘has disappointed so many customers’’, and that is not all. We are given ““Last quarter alone, we saw a customer increase of over 60 per cent, and daily users have more than doubled – a clear indicator of Copilot’s value in the market,” Spataro said.” Words from Jared Spataro, Microsoft’s corporate vice president. All about sales and revenue. So where is the security? Where are the fixes? So we are then given ““When I talk to chief information officers directly and if you look at recent third-party data, organisations are betting on Microsoft for their AI transformation.” Microsoft has more than 400,000 partners worldwide, according to the vendor.” And here we have a new part. When you need to appease 400,000 partners, things go wrong; they always do. How is anyone’s guess, but whilst Microsoft is all focussed on the letter of the law and their revenue, it is my speculated view that corners are cut on verification and validation (a little less on the second factor). And the second part in this comes from CX Today (at https://www.cxtoday.com/speech-analytics/microsoft-fires-back-rubbishes-benioffs-copilot-criticism/) where we are given ‘Microsoft Fires Back, Rubbishes Benioff’s Copilot Criticism’ with the text “Jared Spataro, Microsoft’s Corporate Vice President for AI at Work, rebutted the Salesforce CEO’s comments, claiming that the company had been receiving favourable feedback from its Copilot customers.” At this point I want to add the thought: “How was that data filtered?” You see, the article also gives us “While Benioff can hardly be viewed as an objective voice, Inc. Magazine recently gave the solution a D-rating, claiming that it is “not generating significant revenue” for its customers – suggesting that the CEO may have a point” as well as “despite Microsoft’s protestations, there have been rumblings of dissatisfaction from Copilot users”. When the dust settles, I wonder how Microsoft will fare. You see, I state that AI does not (yet) exist. The truth is that generative AI can have a place. And when AI is actually here, not many will be able to use it. The hardware is too expensive, the systems will need close to months of testing, and it would take years for simple binary systems to catch up. As such, these LLM deeper machine learning systems will have a place, but I have seen tech companies fire up sales people and skim the cream of it, whilst the customers will need a new set of spectacles to see the real deal. The premise that I see is that these people merely look at the groups they want, but it tends to be not so filtered, and as such garbage comes into these systems. And that is where we end up with unverified and unvalidated data points. And to give you an artistic view, consider the following. A one-point perspective is “a drawing method that shows how things appear to get smaller as they get further away, converging towards a single “vanishing point” on the horizon line”. So that drawing might have 250,000 points. Now consider that the data is unvalidated: that system now gets 5,000 extra floating points. What happens when these points invade the model? What is left of your artwork? Now consider that data sets like this have 15,000,000 data points and every data point has 1,000,000 parameters. See the mess you end up with? Now go look into any system and see how Microsoft verifies their data; I could not find any white papers on this. A simple customer-care point of view, I have had that for decades, and Jared Spataro, as I see it, seemingly does not. He did not grace his speech with the essential need for data verification before validation. That is a simple point of view, and it is my view that Microsoft will come up short again and again. So, as I (simplistically) see it: is Jared Spataro, by any chance, anything more than a user missing Microsoft’s value at present?
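The contamination arithmetic in the perspective example scales badly. A quick sketch with the figures from the text (250,000 points plus 5,000 strays, then the same fraction applied to the larger data set described above):

```python
# How a small fraction of unvalidated points scales with data-set size,
# using the figures from the one-point-perspective example above.

base_points = 250_000   # points in the 'drawing'
stray_points = 5_000    # extra unvalidated floating points
fraction = stray_points / (base_points + stray_points)
print(f"{fraction:.1%} of the drawing is noise")  # 2.0% of the drawing is noise

# The same noise fraction applied to the larger data set in the text:
big_points = 15_000_000
params_per_point = 1_000_000
noisy_params = int(big_points * fraction) * params_per_point
print(f"~{noisy_params:,} parameters touched by unvalidated data")
```

Two per cent sounds harmless on a drawing; on fifteen million data points with a million parameters each, the same fraction touches hundreds of billions of parameters, which is the mess the paragraph describes.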

Have a great day.

1 Comment

Filed under Finance, IT, Media, Science

Not changing sides

It was a setting I found myself in. You see, there is nothing wrong with bashing Microsoft. The question at times is how long until the bashing is no longer a civic duty, but personal pleasure. As such, I started reading the article (at https://www.cbc.ca/news/business/new-york-times-openai-lawsuit-copyright-1.70697010) where we see ‘New York Times sues OpenAI, Microsoft for copyright infringement’, and it is there where we are given a few parts. The first that caught my eye was ““Defendants seek to free-ride on the Times’s massive investment in its journalism by using it to build substitutive products without permission or payment,” according to the complaint filed Wednesday in Manhattan Federal Court.” To see why I am (to some extent) siding with Microsoft on this: a newspaper only has value until it is printed. At that point, it becomes public domain. Now, the paper has a case when you consider the situation where someone is copying THEIR result for personal gain. Yet, this is not the case here. They are teaching a machine learning model to create new work. Consider that this is not an easy part. First, the machine needs to learn ALL the articles that a certain writer has written. So not all the articles of the New York Times, but separately the articles from every writer. Now we could (operative word) get to a setting where something alike is created on new properties, events that are happening now. So that is no longer a copy; that is an original, created article in the style of a certain writer.

As such, we see the delusional statement from the New York Times giving us “The Times is not seeking a specific amount of damages, but said it believes OpenAI and Microsoft have caused “billions of dollars” in damages by illegally copying and using its works.” Delusional for valuing itself at billions of dollars whilst their revenue was a lot less than a billion dollars. Then there is the other setting. Is learning from the public domain a crime? Even if it includes the articles of tomorrow, is it a crime then? You see, the law is not ready for machine learning algorithms. It isn’t even ready for the concept of machine learning at present.

Now, this doesn’t apply to everything. Newspapers are the vocalisations of fact (or at least used to be). The issues on skating towards design patents is a whole other mess. 

As such, OpenAI and Microsoft are facing an uphill battle, yet in the case of the New York Times, and perhaps the Washington Post and the Guardian, I am not so sure. You see, as I see it, it hangs on one simple setting: is a published newspaper to be regarded as public domain? The paper is owned, and as such these articles cannot be resold, but there is the grinding cog: it was never used as such. It was a learning model to create new original work, and that is a setting newspapers were never ready for. None of these media laws will give coverage on that setting. This is probably why the NY Times is crying foul by the billions.

The law in these settings is complex, but overall, as a learning model, I do not believe the NY Times has a case, and I could be wrong. My setting is that articles published become public domain to some degree. At worst, OpenAI (Microsoft too) would need to own one copy of every newspaper used, but that is as far as I can go.

The danger here is not merely that this is done; it is that material is “often taken from the internet”, and this becomes an exercise in ‘trust but verify’. There is so much fake and edited material on the internet. One slip-up and the machine learning routines fail. So we see not merely the writer. We see writer, publication, time of release, path of release, connected issues, connected articles; all these elements can hurt the machine learning algorithm. One slip-up and it is back to the drawing board, often teaching the system from scratch.

And all that is before we consider that editors also change stories and adjust for length; as such, it is a slightly bigger mess than you would consider from the start. To see that, we need to return to June this year, when we were given “The FTC is demanding documents from Open AI, ChatGPT’s creator, about data security and whether its chatbot generates false information.” If we consider the impact, we need to realise that the chatbot does not generate false information; it was handed wrong and false information from the start, and the model merely did what it was given. That is the danger: operators and programmers not properly vetting information.

Almost the end of the year, enjoy.

Leave a comment

Filed under IT, Law, Media, Science

How stupid could stupid become?

Yup, that was the question and it all started with an article by the CBC. I had to read it twice because I could not believe my eyes. But yes, I did not read it wrong, and that is where the howling began. Let’s start at the beginning. It all started with ‘Want a job? You’ll have to convince our AI bot first’; the story (at https://www.cbc.ca/news/business/recruitment-ai-tools-risk-bias-hidden-workers-keywords-1.6718151) gives us “Ever carefully crafted a job application for a role you’re certain that you’re perfect for, only to never hear back? There’s a good chance no one ever saw your application — even if you took the internet’s advice to copy-paste all of the skills from the job description”. This gives us a problem on several factors, but the two I am focussing on are IT and recruiters. IT is the first. AI does not exist, not yet at least. What you see are all kinds of data-driven tools, primarily set to Machine Learning and Deeper Machine Learning. First off, these tools are awesome. In their proper setting they can reduce workloads and automate CERTAIN processes.

But these machines cannot build, they cannot construct and they cannot deconstruct. To see whether a resume and a position match, you need the second tier: the recruiter (or your own HR department). There are skills involved, and at times this skill is more of an art. Seeing how much alike a person is to the position is an art. You can test via a resume whether minimum skills are available. Yes, at times it takes a certain Excel level, it might take SQL skills or perhaps a good telephone voice. A good HR person (or recruiter) can see this. Machine Learning will not ever get it right. It might get close.

So whilst we laugh at these experts, the story is less nice; the dangers are decently severe. You see, this is a form of cost reduction, all whilst too many recruiters have no clue what they are doing; I have met a boatload of them. They will brush it off with “This is what the client wants”, but it is already too late: they were clueless from the start and it is getting worse. The article also gives us a nice handle: “They found more than 90 per cent of companies were using tools like ATS to initially filter and rank candidates. But they often weren’t using it well. Sometimes, candidates were scored against bloated job descriptions filled with unnecessary and inflexible criteria, which left some qualified candidates “hidden” below others the software deemed a more perfect fit.” It is the “they often weren’t using it well”; you see, any machine learning is based on a precise setting, and if the setting does not fit, the presented solution is close to useless. And it goes from bad to worse. You see it with “even when the AI claims to be “bias-free.”” You see, EVERY machine learning solution is biased. Bias through data conversion (the programmer), bias through miscommunication (HR, executive and programmer misalignment), and that list goes on. If the data is not presented correctly, it goes wrong and there is no turning back. As such, we could speculate that well over 50% of firms using ATS are not getting the best applicant; they are optionally leaving them to real recruiters, and as such handing them to their competitors. Wouldn’t that be fun?
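The "hidden candidate" effect described above is easy to reproduce with a toy keyword scorer. The skills and the job description here are invented for illustration; the mechanism (rigid must-haves plus keyword overlap) is the kind of filtering the CBC article describes:

```python
# Toy ATS-style filter: candidates are scored by keyword overlap with a
# job description. One inflexible 'must-have' in a bloated description
# hides a well-qualified candidate below a weaker one.

def score(resume_skills, required, must_have):
    if not must_have <= resume_skills:
        return 0.0  # one missing 'must-have' and the candidate vanishes
    return len(resume_skills & required) / len(required)

required = {"sql", "excel", "reporting", "python", "tableau", "sap"}
must_have = {"sap"}  # the inflexible criterion in the bloated spec

strong = {"sql", "excel", "reporting", "python", "tableau"}  # no SAP
weak = {"sap", "excel"}

print(score(strong, required, must_have))  # 0.0 — qualified candidate hidden
print(score(weak, required, must_have))    # weaker candidate still ranks above
```

Five out of six required skills scores zero, while two out of six scores positive, purely because of where the must-have landed; that is the precise-setting failure mode in miniature.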

So when we get to “So for now, it’s up to employers and their hiring teams to understand how their AI software works — and any potential downsides”, which is a certain way to piss your pants laughing. It is a more personal view, but hiring teams tend to be decently clueless on Machine Learning (what they call AI). That is not their fault; they were never trained for this. Yet consider what they are losing out on. Consider a person who never had military training; you now push them into a war zone with a rifle. So how long will this person stay alive? And when this person was a scribe, how will he wield his weapon? Consider that the man was a trumpeter and the fun starts.

The data mismatches and keeps this person alive by stating he is not a good soldier, lucky bastard. 

The foundation is data, and filling jobs is the need of an HR department. Yes, machine learning could optionally reduce the time spent going through resumes. Yet bias sets in at age; ageism is real in Australia, and they cannot find people? How quaint, especially in an ageing population. Now consider what an executive knows about a job (mostly any job) and what HR knows, and consider how most jobs are lost in translation in any machine learning environment.

Oh, and I haven’t even considered some of these ‘tests’ that recruiters have. Utterly hilarious, and we are told this is up to what they call AI? Oh, the tears are rolling down my cheeks; what fun today is, Christmas Day no less. I haven’t had this much fun since my father’s funeral.

So if you wonder how stupid stupid can get, see how recruiters are destroying a market all by themselves. They had to change gears and approach at least 3 years ago. The only thing I see is more and more clueless recruiters, and they are ALL trying to fill the same position. And the CBC article also gives us this gem: “it’s also important to question who built the AI and whose data it was trained on, pointing to the example of Amazon, which in 2018 scrapped its internal recruiting AI tool after discovering it was biased against female job applicants.” So this is a flaw of the lowest level, merely gender. Now consider that recruiters are telling people to copy LinkedIn texts for their resume. How much more bias and how many wrong filters will pop up? Because that is the result of a recruiter too: they want their bonus and will get it any way they can. So how many wrong hires have firms made in the last year alone? Amazon might be the visible one, but that list is a lot larger than you think, and it goes to the global corporate top.

So consider what you are facing, consider what these people face, and laugh; it’s Christmas.

Enjoy today.

Leave a comment

Filed under Finance, IT, Science