Tag Archives: ChatGPT

When Grok gets it wrong

This is a real setting, because the people out there are already screaming ‘failed’ AI, but AI doesn’t exist yet; it will take at least 15 years before we get to that setting. At present NIP (Near Intelligent Processing) is all there is, and the setting of DML/LLM is powerful and a lot can be done, but it is not AI. It is what the programmer trains it for, and that is a static setting. So, whilst everyone is looking at the deepfakes of (for example) Emma Watson and is judging an algorithm, they neglect to interrogate the programmer who created this, and none of them want that to happen, because OpenAI, Google, AWS and xAI are all dependent on these rodeo cowboys (my WWW reference to the situation). So where does it end? Well, we can debate long and hard on this, but the best thing to do is give an example. Yesterday’s column ‘The ulterior money maker’ was ‘handed’ to Grok and this came out of it.

It is mostly correct; there are a few little things, but I am not the critic to pummel those. The setting is mostly right, but when we get to the ‘expert’ level, when things start showing up, that one gives:

Grok just joined two separate stories into one mesh. In addition, consider “However, the post itself appears to be a placeholder or draft at this stage — dated February 14, 2026, with the title “The ulterior money maker”, but it has no substantial body content” and this ‘expert mode’, which happened after Fast mode (the purple section). So as I see it, there is plenty wrong with that so-called ‘expert’ mode, the place where Grok thinks harder. So when you think that these systems are ‘A-OK’, consider that the programmer might be cutting corners, demolishing validations and checks into a new mesh, one you and (optionally) your company never signed up for. Especially as these two articles are founded on very different sources: ‘The ulterior money maker’ has links to SBS and Forbes, and ‘As the world grows smaller’ (written the day before) has merely one internal link to another article on the subject. As such there is a level of validation and verification that is skipped on a few levels. And that is your upcoming handle on data integrity?

When I see these posing wannabes on LinkedIn, I have to laugh at their setting of being fully dependent on AI (it’s fun, as AI does not exist at present).

So when you consider the setting, there is another setting given by Google Gemini (also failing to some degree). They give us a mere sliver of what was given, as such not much to go on, failing to a certain degree and also slightly inferior to Grok Fast (as I personally see it).

As such there is plenty wrong with the current settings of Deeper Machine Learning in combination with LLM. I hope that this shows you what you are in for, and whilst we saw only 9 hours ago ‘Microsoft breaks with OpenAI — and the AI war just escalated’, I gather there is plenty more fun to be had, because Microsoft has a massive investment in OpenAI and that might be the write-off that Sam Altman needs to give rise to more ‘investors’. And in all this, what will happen to the investments Oracle has put up? All interesting questions, and I reckon not too many forthcoming answers, because too many people have capital on ‘FakeAI’ and they don’t wanna be the last dodo out of the pool.

Have a great day.


Filed under IT, Media, Science

The ulterior money maker

That is the setting, but what is true and what is ‘planned’ is another matter. We have several settings, but let me start by giving you two parts before I start ‘presuming’ stuff, so you will be able to keep up. The first one was the one I got last, but it matters. SBS (at https://www.sbs.com.au/news/article/trumps-america-wants-access-to-australian-biometric-data/ftomgcy5j) gives us ‘Australians’ personal data could soon be accessible by US agencies. Here’s why’ and we are given “Now, reports are emerging that the Australian government may be compelled to share Australians’ biometric data and other information with the US and its agencies, including ICE, as part of a compliance measure to vet travelers entering the country under its Visa Waiver Program (VWP). The Australian government, via the Department of Home Affairs, has so far declined to confirm whether it is currently complying with the demands or has plans to negotiate a data-sharing agreement. That’s despite the US setting a deadline of 31 December for finalising agreements with countries participating in its visa-free travel arrangement, including Australia.” This was nothing new to me, but as it is ‘now’ officially recognised, it adheres to a different field as well. We are further given “The proposed changes to the US’ vetting processes would primarily affect Australians eligible for the ESTA visa waiver program, which allows travelers from 42 countries to visit the US for up to 90 days visa-free, provided they first obtain an electronic travel authorisation.” I personally do not think it will end there, but it is the start that the United States desires, because if the first hurdle is passed, the rest becomes easy, and it connects to the second article, even though you might not think that it does.
The second article comes from Forbes (at https://www.forbes.com/sites/kateoflahertyuk/2026/02/09/the-new-chatgpt-caricature-trend-comes-with-a-privacy-warning/) with the setting of ‘The New ChatGPT Caricature Trend Comes With A Privacy Warning’ where we see “The ChatGPT caricatures are created by entering a seemingly benign prompt into the AI tool: “Create a caricature of me and my job based on everything you know about me.” The AI caricatures are pretty cool, so it’s easy to see why people are jumping on this viral trend. But to create the caricatures, ChatGPT needs a lot of data about you.” With the added “It means you are handing over a bunch of potentially sensitive data to ChatGPT — all to jump on a viral trend that will soon be forgotten. But that data could potentially be out there forever, at least on the social media platforms you post it on.” 

Source: Forbes

Now consider the new setting, and this becomes laughably easy with the 700 platforms being added this year (source: Cleanview). They told us “the United States leads global data center growth with 577+ operating data centers and over 660+ planned or under-construction projects”. That is the setting, and I have warned people about this setting for over 30 years. Matching and adding data has been possible since the ’80s, but for the longest time we just never had the data technology (like massive hard drives). Now we get suppliers like Kioxia with 245TB drives, with 1 petabyte expected in a few years. But for now you could use 4 of those bad boys and you are already there. Now to the larger setting. Do you think that the USA needs that much data in data centres to regulate the weather?
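The drive arithmetic here is easy to check with a back-of-envelope sketch. The 245TB figure and the 8,000,000,000 people are from this article; the per-person split is purely my illustration of scale, not a real dataset:

```python
# Back-of-envelope check of the storage claim above
# (decimal units, as drive vendors use them).
TB = 10**12
PB = 10**15

drive = 245 * TB
four_drives = 4 * drive
print(four_drives / PB)   # → 0.98, four drives sit just under 1 PB

# Purely illustrative scale: 1 PB spread over 8,000,000,000 people.
per_person = PB / 8_000_000_000
print(per_person)         # → 125000.0 bytes, roughly 125 KB each
```

So four drives get you to roughly a petabyte, and a single petabyte already leaves six figures of bytes per human being on the planet.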

It comes to the stage where the Dutch journalist Luc Sala is proven correct. We are headed towards a setting of the “haves” and the “have nots” (1988/1989); the market is already there now, the rest is trying to catch up. So we get a world that separates the enablers from the consumers, and when we get that, we merely need to define the cut-off point of the consumers. This is the world where those who do not consume enough become a liability to that system. He predicted it, and now we see the execution towards that point. Weirdly enough, you are all helping the United States complete that setting: on one hand the government enabling the biometrics collection, and on the other hand the people trying to appease their ‘fanbase’ by handing over whatever they need to ChatGPT to look cool. And no one considered that these two parts could be combined? This was relatively simple in 1992; now, with an evolved Oracle and Snowflake, it becomes mere child’s play, and the data centres to capture the essence of 8,000,000,000 people are already out there. So where will you end up getting selected under? Because in this setting you do not get to have a choice. It is what governments and their spreadsheets and revenue-driving numbers say you are to be. It is basically that simple.

So whilst you think you are doing the ‘fool’ thing, others can salvage a lot more data out of that setting than places like ChatGPT can vouch for. And remember the Cloud Act 2018, of which we are told it exists “to improve procedures for both foreign and US investigators to obtain access to vital electronic information held by service providers.” In this case, anything that helps the US investigators is valid for capture, and what that is is not precisely defined. Whilst we think we are safe, we really are not, and every ‘cool’ AI (merely NIP) is based on getting as much data as it can whilst giving you the option to look cool, and there is nothing uncool about a caricature of yourself. The fact that hundreds of these are floating around LinkedIn is reason enough to see that, and then the second stage starts (basically American companies selectively poaching). That is when governments finally realise that they all fell for the trap that was there next to phishing and data transfers, and they let it all happen.

So when you see the SBS article, fear the setting that they give: “As well as extensive biometric data, including DNA, the proposal requests that inbound travelers to the US provide five years of social media history, five years of personal and work contact details, extensive personal information on family members, and even the IP address and metadata of any photos uploaded as part of their application. So far, the United Kingdom has signed onto the agreement, and the European Union is in negotiations.” Do you really think that this is needed to keep the United States safe, or is there more in play? The fact that the UK signed it is, as I see it, stupid beyond belief, and this comes from the nation that seemingly holds ‘freedom of speech’ in such high regard.

Have a great day today, because as I see it, some governments are selling you out as we speak.


Filed under Finance, IT, Law, Media, Politics, Science

Cracks in the armour

That is at times the stage we see. It is not a stage where we are concerned about the armour that is in play. It is like any soldier wanting the direct replacement of body armour when it stops a bullet. There is no logic in this. It is like the expectation that a bullet strikes perfectly on the first impact. You might be more likely to get a winning lottery ticket. So when I saw the Financial Times headline (the article is behind a paywall) we would have seen

The headline is ‘alarming’ as the banks seek out new buyers for data centre loans. But as I see it, Oracle has been in the thick of things for over 40 years, and the current boss of Oracle is currently worth 250,000 million dollars. He is basically worth more than most boards of directors of any bank in the United States. So the setting doesn’t make sense to me. This only seems to happen should Larry Ellison (father of David Ellison, big boss, actor, producer, chairman and CEO of Paramount Skydance) take an equally disastrous dive. You may think that this is ‘boasting’, but the setting that we see here gives us that banks are in a downward spin and the Ellison family is well insulated from the impending downward spiral. So here we go to the next article, and we get ‘Oracle issues public clarification amid reports linking AI push to job cuts’ (at https://sea.peoplemattersglobal.com/news/strategic-hr/oracle-issues-public-clarification-amid-reports-linking-ai-push-to-job-cuts-48277) where we see “In a statement posted on its official X account, Oracle said a widely discussed Nvidia–OpenAI investment proposal had “zero impact” on its financial relationship with OpenAI and insisted it remained “highly confident” in OpenAI’s ability to raise capital and meet its commitments. The clarification followed mounting speculation that Oracle could slash as many as 30,000 jobs to help fund its AI expansion.” I am not taking sides here, but as I see it, at least 5,000 employees could find a job by opening two cloud centres: one in Saudi Arabia and one in the UAE. Techies, trainers, consultants, and that could be an influx of revenue out of those two countries. So when we see “The statement came after a turbulent weekend for companies tied to OpenAI. The Wall Street Journal reported that a proposed $100 billion Nvidia investment in OpenAI had stalled and was never finalised.
Nvidia chief executive Jensen Huang later confirmed that the arrangement discussed last year was non-binding and did not proceed. Despite Oracle’s attempt to reassure investors, markets reacted negatively. The company’s shares fell 2.79% to $160.06 shortly after the statement was published, highlighting ongoing concern about the scale of Oracle’s financial exposure to the AI build-out.” I have a speculative, arbitrary, subjective view of Sam Altman (OpenAI): that he is nothing more than a lousy second-hand car dealer with too big an ego. And the setting where they are ‘closing down’ the 100 billion dollar deal sounds alarming; it seems like Oracle is left with the mess of something that is in a downward spin and continues falling until it splatters with a sickening thump. And then we get to “Oracle’s debt burden has expanded rapidly. The company has added about $58 billion in debt in recent months, largely to finance new data centre campuses in the US, pushing total debt above $100 billion, according to analysts. Since peaking in September 2025, Oracle’s market capitalisation has fallen sharply, erasing hundreds of billions of dollars in value.” All whilst OpenAI couldn’t exist without the Oracle framework. We are given all kinds of complications, but there are two settings no one seems to care about. There are plenty of reasons to have a data centre, but AI doesn’t exist yet. Deeper Machine Learning (DML) and Large Language Models (LLM) do exist and they are close to magnificent; the issue is that everyone is going with the AI setting, and this ‘AI’ just cannot do what AI needs to be able to do. Whilst we see some excellent ideas, as I see it this doesn’t justify the structural setting of an additional 770 data centres in the making, and the resources that are required are rising to the spotlight and people are unhappy with it all.
All this is making OpenAI (Sam Altman) rather uneasy. Some are shutting down $100 billion deals whilst shouting that the processors aren’t good enough, Google Gemini is outperforming whatever OpenAI has, and now the banks are getting jittery and the pressure gets onto the house of Oracle. I can call it that because the Pythia of Delphi gave me permission herself. So now that the bottom of the well is showing, the banks go medieval on whatever they can and try to get out from under their arrangement. Sounds like the setting banks had in 2008, doesn’t it?

But to feed an excellent software firm to the wolves to keep themselves safe is not a good setting. As I see it, Oracle will come up from all this, whilst they will stop working with certain banks (as I see it). And those banks will cry like little bitches, stating that it was just business (a speculative view I am holding). And all whilst I wasn’t stating anything new; this was out in the open for over 2 years. As such the banks and the media have a few things to explain to the people, and the people aren’t in the mood for what some will call BS.

Have a great day today, don’t forget to have some iced coffee if you are in a 30-degrees-plus environment (like me), and feel free to ask the media all kinds of nasty questions.


Filed under Finance, IT, Media, Science

Excuse towards failure

It is an old expression and I didn’t expect to hear this again, but there you have it. To give reference: in the ’90s, sales teams were all about the ‘pipeline’ and making ‘quota’, but at times the bosses of these sales teams didn’t have the right glasses on and they would overcompensate in many ways, making life close to impossible for the sales teams. Now we get CEOs and other ‘things’ needing to do the same thing towards shareholders, and that is where the story starts. Reuters gives us ‘OpenAI is unsatisfied with some Nvidia chips and looking for alternatives, sources say’ (at https://www.reuters.com/business/openai-is-unsatisfied-with-some-nvidia-chips-looking-alternatives-sources-say-2026-02-02/) and the setting is pretty much what I expect. We are given “OpenAI is unsatisfied with some of Nvidia’s latest artificial intelligence chips, and it has sought alternatives since last year, eight sources familiar with the matter said, potentially complicating the relationship between the two highest-profile players in the AI boom.” As I see it, Sam Altman and his OpenAI aren’t making things happen, and to thwart that, the blame game comes into play. He has no other option; he is the top of the mountain, and that means he is subject to shareholders, and the story “the chips aren’t cutting it” is as good as it gets for him. I reckon that the “sought alternatives since last year” excuse is about gaining time. But take a look at what Nvidia achieved.

So, where are the shortcomings? Are the expectations of Sam Altman realistic? And who are the 8 sources that Reuters is referring to? So when September came, some were given “Nvidia said it intended to pour as much as $100 billion into OpenAI as part of a deal that gave the chipmaker a stake in the startup and gave OpenAI the cash it needed to buy the advanced chips.

The deal had been expected to close within weeks, Reuters reported. Instead, negotiations have dragged on for months. During that time, OpenAI has struck deals with AMD and others for GPUs built to rival Nvidia’s. But its shifting product road map also has changed the kind of computational resources it requires and bogged down talks with Nvidia, a person familiar with the matter said.” This now gives pause to consider whether it is merely the hardware, or the slice that OpenAI gets from it all, and why go for the inferior AMD chip? Because if OpenAI claims that it is superior or even equal to Nvidia, the press had better get that lowdown, because as far as I can tell there is no western equal to Nvidia (optionally the Huawei chip, but that is an assumption by me, myself and I).

So when we get “On Saturday, Nvidia CEO Jensen Huang brushed off a report of tension with OpenAI, saying the idea was “nonsense” and that Nvidia planned a huge investment in OpenAI.

“Customers continue to choose NVIDIA for inference because we deliver the best performance and total cost of ownership at scale,” Nvidia said in a statement. A spokesperson for OpenAI in a separate statement said the company relies on Nvidia to power the vast majority of its inference fleet and that Nvidia delivers the best performance per dollar for inference. The simple setting is that even OpenAI Marketing is not one of those 8 sources. As such, if we cannot get clear information, could someone please alert these shareholders that OpenAI is making an optional training run with their money?

As I personally see it, Sam Altman is coming up short on meeting expectations, especially as he is trying to catch up with Google’s Gemini. I reckon that this will give him nightmares too. But overall the setting is one I expected to come, because in the end AI doesn’t yet exist, and now 100% of the hardware vendors are intentionally and wrongfully labelling their chips AI (they’ll call it ‘Alternative Intelligence’ at some point). That is when the class-action cases will plaster every courthouse from Alberta to Zurich, and I reckon it will not take that much longer, especially when the excuses that the chips aren’t good enough are coming out. I might have believed them if it was the Adler chip (an 80186 joke), but it is Nvidia, the hardware darling of the IT world.

As such my skepticism overtakes my feeling of fairness and open-minded justice (that being said, justice is almost never open-minded), but do not take my word on this; ask the OpenAI program with all that AI in play.

So time for some ZZZZZZ’s, you all have a great day. I am ready to snore mine away.


Filed under Finance, IT, Media, Science

Fear of being right

That is what I face at times. I get that my ‘idea’ of safety is a little overdrawn, but I have seen the stupidity of the greed-driven and how those seeking the stupid and greedy are willing to exploit that. I am of course referring to the really organised criminals (criminals with Filofaxes). That is the expected setting, and on February 11th 2024 I wrote ‘Don’t take my word’ (at https://lawlordtobe.com/2024/02/11/dont-take-my-word/) where I was considering the danger that a place like Funnel was presenting itself to be. And the presented advertising (a lot of it on LinkedIn)

showed a setting that I feared, and guess what? I was partially right. I was right because that side was exploited, and I was wrong as it was not Funnel who gave the setting. It was a place called Mixpanel, where we see that “more than 200 million premium users” were told “that their data may have been exposed when hackers breached third-party analytics provider Mixpanel”, and last month we were given ‘Data breach at OpenAI through analytics provider Mixpanel platform’ (at https://securitybrief.com.au/story/data-breach-at-openai-through-analytics-provider-mixpanel-platform). You can wallow as much as you like that I was wrong, but that another platform provider is the first to fall does not mean that I was wrong. The setting of ‘ease’ over safety is what they called “Hey marketer, tired of wasting time downloading and cleaning data from all your advertising platforms? It’s time to meet

Funnel. Save time, improve performance, get better insights with Funnel.” As I personally see it, ‘tired of downloading’ should be read as ‘tired of safety towards your data’, and “cleaning data” often implies “validating and verifying the data you are using”. So if there are people thinking I am a proverbial shit bucket, consider the image below.

Where we see that, in the proverbial instant, it resulted in the loss described as “200 million users have data and search history stolen”, and yes, with 200 million records we could see the setting that these 200,000,000 users will get phished, and the companies they optionally worked for too. That is the larger setting of being lazy, or being complacent towards the security they never really had. Why did they not have that security? Because certain settings negate safeties that are there. Mixpanel, who by the opinion of some is seen as “a product analytics platform that helps businesses track user interactions on their websites and apps to understand behavior, improve products, and drive growth”, is, as I see it, driving growth for the really organised criminals. And now, as we see (at https://securitybrief.com.au/story/data-breach-at-openai-through-analytics-provider-mixpanel-platform), we are given “The incident was related to unauthorised access to a dataset within Mixpanel’s systems. OpenAI reported that an attacker exported data containing certain identifiable information of API account users. Details potentially exposed included names provided on API accounts, email addresses, approximate location information, operating system and browser details, referring websites, and the organisation or user IDs linked to the API accounts. OpenAI emphasised that no chat logs, API requests, passwords, keys, payment details or sensitive identification documents were accessed. The data breach affected only information collected for analytics purposes through Mixpanel.” I get that this is the OpenAI answer, but it seems shallow and short, and perhaps that is all it is, but there is a second setting.
Either the ‘provider’, who sounds like Promohub, is giving us a larger pool of users, or some clever person might be insightful enough to combine two pools of data and see what could be linked, because any person whose ‘shortcomings’ are exposed will seek other ways to hide the ‘shortfall’, and that is exactly what criminals are banking on. OK, this is speculation, but if I had these two pools of data, the first thing I would do is seek a common ground (like an email address) and see what else I can find. This is how I found the weakness towards the Pentagon using the HOP+1 solution (which is wrongly analysed by what some call AI); it was the first thing I did last month. And now again I am right. To be clear, the article on Funnel was about Funnel, and as far as I know it was never transgressed upon. It was merely a fear I held, and the fear was shown to be correct at Mixpanel, not Funnel.
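The ‘common ground’ linkage described here takes only a few lines of Python. To be clear, every record and field name below is invented purely for illustration; this is a sketch of the technique, not of any real breach data:

```python
# Minimal sketch of linking two leaked data pools on a shared key.
# All records and field names here are invented for illustration only.

pool_a = [  # e.g. analytics-style records (name, email, employer)
    {"email": "jane@example.com", "name": "Jane Doe", "org": "Acme Ltd"},
    {"email": "bob@example.com", "name": "Bob Roe", "org": "Globex"},
]
pool_b = [  # e.g. records from a second breached service
    {"email": "jane@example.com", "interest": "fitness apps"},
    {"email": "eve@example.com", "interest": "crypto"},
]

# Index one pool by the common key, then join on it.
index = {rec["email"]: rec for rec in pool_a}
linked = [
    {**index[rec["email"]], **rec}
    for rec in pool_b
    if rec["email"] in index
]
print(linked)
# → [{'email': 'jane@example.com', 'name': 'Jane Doe',
#     'org': 'Acme Ltd', 'interest': 'fitness apps'}]
```

One shared email address is enough to fuse name, employer and behavioural data into a single profile, which is exactly why ‘merely analytics’ data is never harmless once a second pool exists.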

So whilst OpenAI correctly gives us “Information potentially accessed through Mixpanel may expose users to an increased risk of phishing or social engineering attempts.

Names, email addresses, and user identifiers were among the details exposed. OpenAI has advised all customers and users to remain vigilant for any suspicious or unsolicited communications that could be related to this incident. The company reiterated that it does not request sensitive information such as passwords, API keys, or verification codes via email, text, or chat. Users have also been encouraged to enable multi-factor authentication as an additional protective measure for their accounts.”

And why am I now up in arms? Because I got word through another source relating to another vendor, and that implies that there are at least three data sources exposed, and those with connected data will be at risk. As such there is little risk for OpenAI and its users if it is used correctly, but when is that the case? And it falls back on the users, not on OpenAI. There is an old premise that I usually phrase: if 5 vendors each have a 10% loss, the customer is at risk of losing 50%, and that is the danger here. And when this is applied to 200,000,000 users, the losses could be close to astronomical.
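That premise is simple additive arithmetic, and a short sketch makes it concrete. The 5-vendors-at-10% figures are the hypothetical from this article; the probabilistic variant at the end is my own framing, assuming the five risks are independent:

```python
# The additive premise: 5 vendors, each risking 10% of your data.
vendors = 5
loss_per_vendor = 0.10

additive_exposure = vendors * loss_per_vendor
print(additive_exposure)  # → 0.5, i.e. the 50% quoted above

# A probabilistic reading: chance that at least one of the 5
# vendors leaks your record (assuming independent 10% risks).
p_at_least_one = 1 - (1 - loss_per_vendor) ** vendors
print(round(p_at_least_one, 4))  # → 0.4095
```

Either way you count it, spreading your data over five vendors multiplies the exposure well beyond any single vendor’s 10%.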

Now we can argue that there is no such risk, but that answer comes mostly from people claiming to have no P#Hub account. Do they? I cannot tell, but they know whether they have one or not. And to also be clear, there is absolutely nothing wrong with having multi-factor authentication on any account you have. Those people are, as I personally see it, the least in danger. But that is the setting that we are avoiding looking at. As I have said (way too often), nonrepudiation is the way to go, and that is showing to be the correct setting yet again.

Have a great day all, only 11 hours until Friday, or in Hobbit terms Frododay, the day you have two breakfasts and three lunches until the beer o’clock chimes.


Filed under IT, Media, Science

The wannabe influencer?

That is my question at present. In comes a person with the ludicrous title of “Al & loT Expert”. You see, what makes it hilarious was the post I saw ‘fly’ by. He starts off with “OpenAl’s first hardware is… a pen?? (If they don’t call it O-Pen Al they have officially lost the Al race).” So that is what makes him an expert? I am no expert on any of that, but I am highly knowledgeable on matters including IoT. In some cases and in some places I am known as a guru. I have my niche settings. But what gets to me is that (although I am no OpenAI fan) OpenAI has, yes, lost the current battle against Google and its Gemini 3, which the media kept from you for weeks. I personally never used it, but people who did and are ‘regarded’ as captains of industry think so. So, as I see it, OpenAI lost a battle, but that doesn’t mean the war is over. You see, the war on AI (when it finally comes here) is by no means settled at present. And those who understand that battle know this, and mostly unmentioned is the play that is left with IBM, because they currently have the inside track; not Oracle, not Snowflake and definitely not Google, Microsoft or Amazon. You see, AI is more than what is out there today. It will rely on larger technological settings. They all have quantum systems, but who is the most advanced in Shallow Circuits? IBM was setting that stage in advanced settings in 2017, all whilst OpenAI had hardly started at that point. IBM was on the ball and is the actual front-runner of what is now referred to as True AI. Actual AI will need two additional settings: the first is Shallow Circuits, a setting where only IBM is a straightforward contender. With that I say I have no idea where Google stands. And the next thing is that a trinary operating system will be required, and as far as I know there is no winner at present.
I reckon that both Google and IBM have dabbled in this, but I do not know where they stand. When this comes to pass, the winner will work with Oracle to make the connections in a much-needed combined effort, because they all agree that Oracle is the one player that can make it work. Snowflake as well, but I have no idea where they stand in all this. What we currently have are DML/LLM solutions that are at times clever and functioning, but in too limited a setting. I call this Near Intelligent Parsing (or NIP), but it is not AI, even though they all have the marketing calling it so.
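For readers unfamiliar with the term: ‘trinary’ (ternary) computing works in base 3 rather than base 2, with three-valued digits (trits) instead of bits. A minimal sketch of balanced-ternary encoding, my own illustration and not anything any vendor has published:

```python
# Illustrative only: balanced ternary represents numbers with the
# digits -1, 0 and +1 instead of binary's 0 and 1.
def to_balanced_ternary(n: int) -> list[int]:
    """Encode an integer as balanced-ternary trits, least significant first."""
    trits = []
    while n != 0:
        r = n % 3
        if r == 2:        # a digit of 2 becomes -1 with a carry
            r = -1
            n += 1
        trits.append(r)
        n //= 3
    return trits or [0]

print(to_balanced_ternary(5))   # → [-1, -1, 1]  since 9 - 3 - 1 = 5
```

A trit carries log2(3) ≈ 1.58 bits of information, which is the usual argument made for ternary hardware; whether any of the players named above will actually build an operating system on it is, as said, an open question.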

What we have now is a mere shadow of what Alan Turing envisioned some 75 years ago, and leave it to sales teams to wriggle the straw until it bleeds revenue. But as the class-action cases explode this year, they are left to ‘apologetically assume the position of miscommunication’, at least that is how I see it. So was this person a wannabe influencer, taking the LinkedIn crowd by humour?

So this might optionally have been the pen that OpenAI is flaunting, but as I see it, this is their step into audio, which they advertised, and having a pen recorder is a pretty contraption (aka gizmo, doohickey, or thingamajig) that propels the setting of OpenAI forward. And I reckon that within a month all wannabe AI experts will want one. Audio is the next stage that requires harnessing, so OpenAI is not out of the race; they merely got bruised in a race where they had the upper hand for three years.

Perhaps they get the upper hand in another direction, making them the overall winner, but that is a mere consideration of options, especially when we realise the inside track that IBM has, and where is that in his assessment? So I am not proclaiming the identity of that person; it lacks class and makes him a target. He made himself a target, and I do not need to add to his current confusion.

What is in play is that there is a chance that OpenAI is moving to capture the stage of audio-enhanced NIP (Near Intelligent Parsing), making them first again, and Google will need to play catch-up. Optionally Oracle (Snowflake too) will now have to adjust their tracks to get audio embedded in their database settings, and whilst we do not know where IBM goes, we do know they have the inside track; they might rely on Oracle/Snowflake solving that problem for them. And as I am a Snowflake person, I still believe that Oracle is likely to win this war, for the mere knowledge that they have been on these tracks long before Snowflake got involved, so they have years and traction in their stride. This is not a certainty, but a presumed advantage.

That is as good as I can give it to you, and I have written other stories on the need for a trinary operating system. I last did that in ‘Is it a public service’, which I wrote last November (at https://lawlordtobe.com/2024/11/16/is-it-a-public-service/), so this isn’t coming out of left field; it was there for almost two months. Oh, and to be certain that you do not mistake me for that wannabe influencer: I am in no way an ‘expert’ on AI, I have merely been dabbling in IT and data since 1981. So I have the mileage here. Have a great day today.


Filed under IT, Media, Science

The boat has left

That is a weird setting, but that might be the case for a lot of people. It is the Financial Express who gives us (at https://www.financialexpress.com/life/technology-ibm-to-skill-5-million-indian-youth-in-ai-cybersecurity-and-quantum-computing-by-2030-details-inside-4082018/) the headline ‘IBM to skill 5 million Indian youth in AI, cybersecurity and quantum computing by 2030’. You might think it is nothing to get hung up about, but you would be wrong. Even as some ‘claim’ to give good courses (some actually do), it is IBM who has had that inside track in several ways. As such (or perhaps to consider as I see it), the labour market will be drowning in Indian entrepreneurs by 2032 (and a whole lot before that). I reckon that these people will bolster the Indian go-getter market and they will branch out to Saudi Arabia, the UAE and a few other places. As such, if you think the US labour market is merely cooling, think again. These people will be highly wanted in India, Saudi Arabia, the Emirates, UK, Australia, Canada and the EU long before we get to 2030. There will be an Indian wave of go-getters all over the world, and the places that needed to get active weren’t, for much too long. So we see: “India possesses the talent and ambition to lead the world in AI & Quantum. Fluency in frontier technologies will define economic competitiveness, scientific progress and societal transformation,” said Arvind Krishna, IBM Chairman, President and Chief Executive Officer. “Our commitment to skill five million people is an investment in that future. By democratizing access to advanced skills, we are enabling the youth and students to build, innovate and accelerate India’s growth.” And these people will be highly skilled in all things IBM (perhaps not in IBM Statistics or IBM Miner), but that is little cause for alarm. These people will also bring forth IBM skills and products, so this setting takes care of two pipelines: skills and products.
And all that time AWS was hounding the AI field. That is nice, but as these people will be highly skilled in whatever IBM holds, there is a mismatch with what is required elsewhere. OK, that last part is speculative, but that is how I see it.

I reckon that Microsoft and OpenAI also might have a problem here. You see, we also get “IBM also continues to strengthen school-level readiness by co-developing the AI curriculum for senior secondary students, along with teaching resources including the AI Project Cookbook, Teacher Handbook and explainer modules. These programs are designed to embed computational thinking and responsible AI principles early, while enabling teachers to deliver AI education confidently and at scale.” As such these people get a schooling in systems that evolved from famous ones like Deep Blue and Watson, and as such IBM provides a flexible ecosystem allowing a choice from various foundation models (like Granite, Llama, Mistral). Whatever they partnered with doesn’t matter. This is the IBM show; partners take a second-stage chair. And as I see it, IBM did something nicely spectacular, because they get a choir of 5 million evangelizing Watsoners all over the world, and in that instance Watson grows from niche to mainstream, and that will feel good for all the shareholders who kept their trust in Arvind Krishna (I will give a nice ‘Well done, sir’) in this instance. Because it is starting to look like the old premise ‘When two dogs fight over a bone, the third one takes it gone’. So in the fight we saw between OpenAI and Google, we now see that the future is banked on by IBM. This doesn’t make the others useless in any way, but IBM set the future towards Watson in a rather nice way and that has to count for something.
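That ‘choice from various foundation models’ boils down to a simple registry pattern. The sketch below is a generic illustration of that pattern under my own assumptions; the `ModelRegistry` class is entirely hypothetical and is not IBM’s watsonx SDK — only the model names echo the article.

```python
# Generic model-registry pattern: pick a foundation model by name.
# The IDs echo the article (Granite, Llama, Mistral); ModelRegistry is
# a HYPOTHETICAL illustration, not IBM's actual watsonx API.
from typing import Callable, Dict

class ModelRegistry:
    def __init__(self) -> None:
        self._models: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, generate: Callable[[str], str]) -> None:
        """Wire a named backend into the registry."""
        self._models[name] = generate

    def generate(self, name: str, prompt: str) -> str:
        """Route a prompt to the chosen model, failing loudly on unknowns."""
        if name not in self._models:
            raise KeyError(f"unknown model: {name!r}")
        return self._models[name](prompt)

registry = ModelRegistry()
for model_id in ("granite", "llama", "mistral"):
    # Stub backends; a real deployment would wire in the vendor clients.
    registry.register(model_id, lambda p, m=model_id: f"[{m}] {p}")

print(registry.generate("granite", "Summarise this contract."))
# → [granite] Summarise this contract.
```

The point of the pattern is that the caller only ever names a model; swapping Granite for Llama is a one-string change, which is exactly the kind of lock-in-light flexibility the article credits IBM with.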

What a nice end of year this will be. Because at the drop of a hat, it wasn’t merely Google or OpenAI; as I see it now, IBM became the third major player and turned the duet into a trio, and as I see innovation, this is how innovative strides are made, by having to refocus your tasks. That is the real innovation maker in this world.

A lovely ending to Christmas Day. Have a great upcoming boxing day you all.


Filed under IT, Science

They had twins

Yup, it happens. At times we have kids, progeny so to speak, and some get two for a simple roll in the hay. Yet this isn’t about kids. It is about Gemini 3, Google’s seemingly finest product. It is so great that Microsoft barred Google Chrome from installing and blamed it on some weird parenting setting. And then the media failed to look at it, probably some revenue-driven courtesan issue. All speculation, but I would prefer to set this to presumption; still, I have no evidence. So it is all allegedly, but the settings on Gemini are clear. I read it myself (so it must be true). I will start with FXLeaders who (at https://www.fxleaders.com/news/2025/12/23/google-stock-heads-to-record-highs-as-gemini-3-outperforms-chatgpt/) gives us ‘Google Stock Heads to Record Highs as Gemini 3 Outperforms ChatGPT’. As such, Microsoft loses yet again. There was Sony, there was AWS, there was Google, and now there is Google again. It sucks to be Microsoft. And the howling continues.

So FXLeaders gives us two quotes that matter.

So as we are given “Alphabet emerged as one of the standout megacap performers in November, delivering a decisive breakout that carried shares through the $300 mark and to a fresh all-time high near $329. The move completed a strong rebound from a late-September pullback and reinforced confidence in the company’s long-term growth trajectory. The rally was fueled by sustained institutional demand and growing optimism around Google’s artificial intelligence roadmap. For much of October and November, Alphabet benefited from its unique position at the intersection of digital advertising dominance and AI platform leadership.”

As well as “The rollout of Gemini 3—trained primarily on Google’s in-house chips rather than external hardware—has sparked renewed debate around vertical integration in artificial intelligence. Supporters view this as a long-term strategic advantage, potentially lowering costs and reducing reliance on third-party suppliers while optimizing performance. Recent benchmark results, where Gemini 3 reportedly outperformed ChatGPT in several categories, have added to that narrative and intensified competitive pressure across the sector.” So wonder how the media could not get you this two weeks ago, and wonder now why I refer to the media (the larger part) as the Courtesans of the digital dollar. This should have been known and tested for by several parties directly, and I don’t care who won; we were not informed. As I see it, Microsoft has too powerful a hold on the media, and the media who shunned their jobs need to be named and shamed. Sounds simple, doesn’t it? As such I also present a second source, so there is a little more data behind the claim. It is a story (at https://www.startuphub.ai/ai-news/ai-research/2025/google-gemini-3-redefines-ai-reasoning-and-efficiency/) where StartupHub.AI gives us “The core of Gemini 3’s impact lies in its unprecedented reasoning and multimodal understanding. According to the announcement, Gemini 3 Pro, Google’s most powerful model to date, not only topped the LMArena Leaderboard but also achieved breakthrough scores on challenging benchmarks like Humanity’s Last Exam and GPQA Diamond. These tests are designed to assess an AI’s ability to truly think and reason like humans, indicating a sophisticated capacity to process and synthesize information across various modalities, moving closer to genuine comprehension.
Furthermore, its gold-medal standard performance in international mathematics and coding contests, powered by its Deep Think capabilities, signals a new era for AI in complex problem-solving, pushing the boundaries of what automated systems can achieve in abstract domains.” So as we wonder what some of that means, the benchmarks were available to pretty much all the media, so what prevented them from reporting on it? Simple question, isn’t it?
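The head-to-head test I keep asking the media for is not hard to sketch. Below is a minimal, hypothetical harness — the model callables and the toy questions are my placeholders, not any vendor’s real API — showing the shape such a comparison would take:

```python
# Minimal sketch of a head-to-head benchmark between two chat models.
# The model callables below are HYPOTHETICAL stand-ins; a real test
# would call the vendors' actual APIs with identical prompts/settings.

def score(ask_model, benchmark):
    """Return the fraction of benchmark questions answered correctly."""
    correct = sum(
        1 for question, expected in benchmark
        if ask_model(question).strip().lower() == expected.lower()
    )
    return correct / len(benchmark)

def compare(ask_a, ask_b, benchmark, name_a="model A", name_b="model B"):
    """Score both models on the same questions and report a winner."""
    a, b = score(ask_a, benchmark), score(ask_b, benchmark)
    verdict = name_a if a > b else name_b if b > a else "tie"
    return {name_a: a, name_b: b, "winner": verdict}

if __name__ == "__main__":
    # Toy benchmark with stubbed models, just to show the shape of the test.
    bench = [("2+2?", "4"), ("Capital of France?", "Paris")]
    model_a = lambda q: {"2+2?": "4", "Capital of France?": "Paris"}[q]
    model_b = lambda q: {"2+2?": "4", "Capital of France?": "Lyon"}[q]
    print(compare(model_a, model_b, bench))
    # → {'model A': 1.0, 'model B': 0.5, 'winner': 'model A'}
```

That is the entire shape of it: same questions, same grading, published numbers. Leaderboards like LMArena do a fancier version of exactly this, which is why there was no excuse not to report on it.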

And you might wonder why I care, or why I believe these sources. There is a setting that sets up a lot of consideration, and that is right, but the media isn’t informing us and they aren’t running any tests, even though I gave one test to the world (not necessarily a good one); the media did NOTHING. They allegedly value the digital dollars too much and they rely on players like the Microsoft stakeholders to fund their gravy train (as I personally see it). So am I right, am I wrong? I would love to be wrong, but I have seen this before (more than once). But as I see these results, there is a larger play in motion. Is Google actually that good? I am not debating it, I am asking, and it comes with an answer. It is either Yes, No, or it is under advisement. The first two are simple and can be settled by showing the evidence, but the media did nothing of the sort; perhaps some did, but the larger groups are abstaining from involvement (it sounds better than ‘they cower from the results if involved’, because that makes them sound like actual cowards). So why am I so angry about this? It is a result we were entitled to, and it requires OpenAI to divulge its heading and not cater to asking for more value when there is none to be had (at present). And as such investors are duped into not receiving the evidence they need to make financial decisions. But perhaps I am oversimplifying the problem here.

Whatever you consider and whatever you decide is yours to do and you are entitled to the best information to make these decisions and the media is no longer able to do that. I don’t care if you embrace ChatGPT and OpenAI. That’s fine, I am not choosing favorites, I actually don’t care, but I do care about lacking media, lacking results and hiding behind some stakeholder whilst the people have a right to know. They use that as their battle drum, so they can be held to that as well. It is a simple setting as I see it.

Have a great Christmas Day, 23 hours until boxing day for me.


Filed under Finance, IT, Media, Science

The dice fell snake eyes

It is the setting I predicted a few weeks ago and, much earlier, in the story ‘Eric Winter is a god’ (at https://lawlordtobe.com/2023/07/05/eric-winter-is-a-god/) in July 2023. I saw it coming this early in the race. Why? Mainly because AI doesn’t yet exist, so whomever sells whatever solution they have as AI sets themselves up for a rather huge and nasty fall. In 2023 it was easy: the movie The Changeling was released in 1980, so given the timelines the movie was made in 1979, and Eric Winter was born 17 July 1976, so what was a 2-year-old doing in that movie? That is the simple setting of validating your data, and that is why there is a case with what some now call AI. So now we get (at https://decrypt.co/353227/openai-microsoft-sued-over-chatgpt-connecticut-murder-suicide) ‘OpenAI, Microsoft Sued Over ChatGPT’s Alleged Role in Connecticut Murder-Suicide’, so when we see the setting in that case, there is more than just the bare minimums. This will implicate the engineers who programmed the setting, as we are given “In the latest lawsuit targeting AI developer OpenAI, the estate of an 83-year-old Connecticut woman sued the ChatGPT developer and Microsoft, alleging that the chatbot validated delusional beliefs that preceded a murder-suicide—marking the first case to link an AI system to a homicide.” I expected that we would have until 2026, but it never got that far, and when the first trial starts we will see a whole range of class actions and other legal battles start, because as we are taught in Torts: go where the money is, and OpenAI/Microsoft have plenty. As such there will be a whole range of cases being started. I reckon that there is a whole flock of ambulance chasers who will see this as their golden opportunity. And the more data is thrown around, the more intense the legal battles become.
A setting that was clear two years ago for me, and as I found more than one setting that favours this, we merely have to look at sentences like “We rely on our AI to bring you [X]”; the legal eagles see that as their way into your coffers and they have greedy hands, because that is what they were instructed to do. And when you consider “OpenAI faces numerous lawsuits, primarily revolving around copyright infringement for using vast amounts of online content (news, books, lyrics) to train AI models like ChatGPT, with major cases from The New York Times (NYT) and authors seeking damages and content bans, plus a recent German court ruling against lyric reproduction.” we see the setting that they either settle, or lose whatever data they have, and there are numerous other settings thrown into the mix. And then there is whatever sits in the design-law database, because there is every indication that these trademarks were also broken in numerous places, and Microsoft has no place to turn; they are in it for the big bucks, and whilst some are ‘driven’ to reconsider their options, the number of people who are not considering that is growing, people smelling the scent of dollars, and they are hungry. I reckon that those non-Americans are even more driven to those dollars than the Americans are. It comes down to claims that (a massive speculation on my side) get them up to 100 billion, and that was before Sam Altman was hoping for an $800B incentive. That is the short and sweet of it, so as we look at the article we see

“This is the first case seeking to hold OpenAI accountable for causing violence to a third-party,” J. Eli Wade-Scott, managing partner of Edelson PC, who represents the Adams estate, told Decrypt. “We also represent the family of Adam Raine, who tragically ended his own life this year, but this is the first case that will hold OpenAI accountable for pushing someone toward harming another person.” You see, “first case that will hold OpenAI accountable for pushing someone toward harming another person” is a deeper step than some lawyer pushing that OpenAI was driving a person to some extent; that is no harm, or merely applied harm to self. Do you have any idea how many lawyers will demand to see the algorithm and the programmer who wrote it? That will be a mess that takes years to sort out; in that same time, Google will progress Gemini 3 much further, making OpenAI lose investors, and they are as sketchy as they will ever be.
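The Changeling example above is, at its core, a data-validation check, and it fits in a few lines. A minimal sketch — the dates come from the paragraph above, the `plausible_casting` helper is my own illustration, not a real tool:

```python
# Sketch of the data-validation step the Changeling example illustrates:
# does an actor's age at production time fit the claimed role?
# plausible_casting is a HYPOTHETICAL helper for illustration only.
from datetime import date

def plausible_casting(actor_birth: date, production_year: int,
                      min_age: int, max_age: int) -> bool:
    """Check whether the actor's age at production time fits the role."""
    age_at_production = production_year - actor_birth.year
    return min_age <= age_at_production <= max_age

# Eric Winter, born 17 July 1976; The Changeling was made in 1979.
# A credited adult role fails the check outright:
print(plausible_casting(date(1976, 7, 17), 1979, min_age=18, max_age=90))
# → False
# An infant extra, on the other hand, would pass:
print(plausible_casting(date(1976, 7, 17), 1979, min_age=0, max_age=5))
# → True
```

That is the whole point: a trivial, automatable cross-check that the curated databases skipped, and the same discipline of validation is what the current crop of so-called AI products routinely skip too.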

So whilst we see the sparks come, we will see a lot more issues surface and they are not all on OpenAI, but I reckon that some lawyers will play it that way, because that is where the money is. 

So you all have a great day, it is still 39 degrees in my living room so I am placing my mattress in the freezer, not sure how, but I need to get some sleep at this point.


Filed under Finance, IT, Law, Media, Science

With media assistance

That is what I see and I might be wrong, but judge for yourself. There is plenty of evidence around. It all started with an article in Forbes (at https://www.forbes.com/sites/zakdoffman/2025/12/18/microsoft-updates-windows-to-stop-users-downloading-google-chrome/) where we were shown ‘Microsoft Updates Windows ‘To Stop Users From Downloading Google Chrome’’, so that doesn’t sound at all ominous. And it kinda reflects the setting I gave over 2 years ago with ‘Are they really?’ (at https://lawlordtobe.com/2023/09/01/are-they-really/), which I published on September 1st 2023. We are now given “Here we go again. “Microsoft is trying a new way to stop users from downloading Google Chrome.” We have seen this before. Just as with Apple, the two tech giants are pushing hard to keep users within their own walled gardens, on Safari and Edge. The latest news comes from Windows Report. “If you open the Chrome download page in Microsoft Edge, you may see a new banner at the top.” Instead of just presenting the usual Edge versus Chrome comparison, “Microsoft now focuses on protection.”” I would be the first to state that part of the statement is missing; they actually mean “Microsoft now focuses on protection of self”, and it is a slippery slope. They can find the expert in France to find evidence that the bra size of Kim Kardashian is increasing, but they are not able to get a clear independent view of whatever OpenAI gives us against Gemini 3? Go figure.

As such Forbes gives us “What’s most interesting is that Microsoft has usually stressed that Edge is built on the same Chromium base as Chrome, with all the benefits of Chrome, only better. “This time, those points are missing. The message stays centered on built-in safety features.”” Of course it is; Microsoft cannot allow the people grounds for taking sides in that war, they have far too much riding on it. The revitalization of Clippy is on the line, and if people (as is speculatively likely) select Gemini 3 over OpenAI, the walls of Microsoft come crumbling down. They have trillions riding on this and, as I see it (I have zero evidence), OpenAI underwhelmed whilst Google is riding high; as such they have trillions riding on their bad sense of innovation.

And as I see it, it is really bad when they are repeating some of the settings they had in 2023 when Edge was on the line. I reckon that together with Xbox and Gemini they have now lost for the third time, four times if you count AWS versus Azure. The once so mighty Microsoft has now lost against Android, Google Search, Sony, Amazon and now against Gemini: a five-time loser of technology. So whilst the media ‘accepts’ “Microsoft now focuses on protection.”, the truth is predominantly ugly; the truth is that Microsoft is basically done for.

And the media can hide behind their timelines when they ‘suddenly’ reveal an independent tester (one that meets with the approval of Microsoft). But it might be too little and too late for the media as well.

So whilst Microsoft hides behind “Chrome attracts more security threat headlines than any other browser. This year, Cybersecurity News says, “Google addressed a significant wave of actively exploited zero-day vulnerabilities affecting its Chrome browser, patching a total of eight critical flaws that threatened billions of users worldwide.”

All these vulnerabilities were “high severity with CVSS scores averaging 8.5,” with the world’s most popular browser targeted “by sophisticated threat actors, including state-sponsored groups and commercial surveillance vendors.”” And weirdly enough, my Android is flying high using Google; the only threat I had for a while was influencers pushing me against my will towards Edge. As such there might be truth in that last statement, but I think Microsoft is overselling the idea. And as the evidence is shown to us, I really believe I have been right all along. So as you might realise, Forbes hides behind its final words “As you see, none of this is clear cut.” I believe it is, and it requires a true independent test of Gemini versus OpenAI. But perhaps I am oversimplifying the problem. I apparently tend to do that, and it has nothing to do with the fact that I have been in IT for over 45 years. So you all have a great day, I finally look forward to some sleep. The temperature has dropped from over 30 degrees to 24 degrees and it is 02:00. And did you catch the one element Microsoft is leaving alone? It is that Apple is less of a threat than Google is. Is it the 26 profiles of their Alphabet? I let you decide. I have seen the light and the seas of snores are beckoning me.
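Claims like “CVSS scores averaging 8.5” are trivially verifiable once the individual advisories are in hand. A small sketch of that verification; note the eight scores below are invented placeholders that happen to average 8.5, NOT the actual Chrome CVE values, and the bucket thresholds follow the standard CVSS v3 severity scale:

```python
# HYPOTHETICAL CVSS v3 base scores for illustration only; the real
# values belong to the eight Chrome zero-days referenced above.
scores = [8.8, 8.8, 9.6, 8.1, 7.5, 8.4, 8.8, 8.0]

def cvss_bucket(score: float) -> str:
    """Map a CVSS v3 base score to its standard severity bucket."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low" if score > 0 else "none"

average = sum(scores) / len(scores)
print(f"average CVSS: {average:.1f}")   # → average CVSS: 8.5
print(sorted(cvss_bucket(s) for s in scores))
```

Note the quirk this exposes: an average of 8.5 sits in the “high” bucket, so “eight critical flaws” and “averaging 8.5” can only both be literally true if the reporting is loose with the CVSS vocabulary, which is exactly the kind of detail an independent tester would pin down.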


Filed under Finance, IT, Media, Science