
When Grok gets it wrong

This is a real setting because people out there are already screaming ‘failed’ AI, but AI doesn’t exist yet; it will take at least 15 years before we get to that setting. At present, NIP (Near Intelligent Processing) is all there is, and the setting of DML/LLM is powerful and a lot can be done with it, but it is not AI. It is what the programmer trains it for, and that is a static setting. So, whilst everyone is looking at the deepfakes of (for example) Emma Watson and judging an algorithm, they neglect to interrogate the programmer who created it, and none of them want that to happen, because OpenAI, Google, AWS and xAI are all dependent on these rodeo cowboys (my WWW reference to the situation). So where does it end? Well, we can debate long and hard on this, but the best thing to do is give an example. Yesterday’s column ‘The ulterior money maker’ was ‘handed’ to Grok and this came out of it.

It is mostly correct; there are a few little things, but I am not the critic to pummel those, and the setting is mostly right. But when we get to the ‘expert’ level, where things start showing up, that one gives:

Grok just joined two separate stories into one mesh. In addition, consider “However, the post itself appears to be a placeholder or draft at this stage — dated February 14, 2026, with the title “The ulterior money maker”, but it has no substantial body content” and this ‘expert mode’, which happened after Fast mode (the purple section). So as I see it, there is plenty wrong with that so-called ‘expert’ mode, the place where Grok thinks harder. So when you think that these systems are ‘A-OK’, consider that the programmer might be cutting corners, demolishing validations and checks into a new mesh, one you and (optionally) your company never signed up for. Especially as these two articles are founded on very different sources: ‘The ulterior money maker’ has links to SBS and Forbes, and ‘As the world grows smaller’ (written the day before) has merely one internal link to another article on the subject. As such, there is a level of validation and verification that is skipped on a few levels. And that is your upcoming handle on data integrity?

When I see these posing wannabes on LinkedIn, I have to laugh at their setting of being fully dependent on AI (it’s fun, as AI does not exist at present).

So when you consider the setting, there is another setting given by Google Gemini (also failing to some degree). It gives us a mere sliver of what was given, as such not much to go on, and it is also slightly inferior to Grok Fast (as I personally see it).

As such, there is plenty wrong with the current settings of Deeper Machine Learning in combination with LLM, and I hope that this shows you what you are in for. Whilst we saw, a mere 9 hours ago, ‘Microsoft breaks with OpenAI — and the AI war just escalated’, I gather there is plenty more fun to be had, because Microsoft has a massive investment in OpenAI and that might be the write-off that Sam Altman needs to give rise to more ‘investors’. And in all this, what will happen to the investments Oracle has put up? All interesting questions, and I reckon not too many forthcoming answers, because too many people have capital on ‘FakeAI’ and they don’t wanna be the last dodo out of the pool.

Have a great day.

