I was wrestling with what to write about when Elon Musk handed me a perfectly good reason right off the bat. Well, it wasn’t Elon who gave me the idea, it was his product Grok. I have always said that AI is not real because of the missing parts, and it comes with a few constraints imposed by certain (so-called) captains of industry who are lacking in several ways. It also connects to some other things I do. You see, no matter how you arrive, no matter how much you innovate an idea, you will end up with a mere 0.1%-1% of the true value of the product. Today’s ‘captains’ are utterly set on the exploitation of everything they see. As such, I put my work on my blog. When my stuff is in the open they cannot really claim any innovation: the IP is no longer protected by intellectual property laws, and the public is free to use, share, and build upon these works without seeking permission from the original creator. I might get something out of it, but for the most part I get the satisfaction that these ‘captains’ see the loss of an idea towards everyone. If I am unable to get something out of it, it will become Public Domain and perhaps spread my name in that way. Some will smile at this and call me stupid (or a fool), but I am out of their reach for exploitation. As I see it, I gave the world over a dozen options for enrichment, and in this way the indie developers get a leg up without fear that a larger player will cut them out. Small comfort, but that is what it is.
So, whilst I diverted, it was for a reason. You see, the AI of now is fake AI (at best), all of it, because two elements are missing. The first is an evolved version of shallow circuits; as far as I know, IBM has the strongest version of these. The second is a trinary operating system. Binary will not do for AI: the setting of Null, False, True and Both is required for a true AI to come about, and no one has that yet. A Dutch physicist got the Epsilon particle made (or found); this was going to be instrumental, to be evolved in an IT setting (most likely through as yet undetermined means). But I digress: what I believe to be a weakness doesn’t make it true. Alternative evidence is needed, and I found it a few times over, but in this case I will revert to my last story ‘As oil burns’, which I published on May 4th, 2026 at 12:33. About an hour later I used Grok to look at my story. The first view after an hour was:
Lawrence van Rijn’s post links to his new blog article “As Oil Burns” on lawlordtobe.com, discussing surging oil prices triggered by Iran’s closure of the Strait of Hormuz amid escalating US-Iran and regional tensions. The author, a postgraduate IT student and Master of IP Law candidate at UTS, regularly analyzes geopolitical issues like oil production shifts, UAE’s OPEC exit, Iranian attacks, and US military deployments in his personal blog. As of May 4, 2026, the X post shows very low visibility with only 4 views and zero interactions, typical of his niche personal commentary on current events.
This is what AI does? Is that really a fair view of what I wrote at: https://lawlordtobe.com/2026/05/04/as-oil-burns/
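A quick aside on the trinary point above. The four settings I named (Null, False, True and Both) happen to match what logicians call Belnap’s four-valued logic, and it can be sketched in ordinary binary code today; this is purely my own illustration, not anything taken from an existing ‘trinary’ system:

```python
from enum import Enum

class V(Enum):
    """Four-valued truth: each value is a pair (told_true, told_false)."""
    NULL  = (False, False)  # no information at all
    FALSE = (False, True)   # only told it is false
    TRUE  = (True,  False)  # only told it is true
    BOTH  = (True,  True)   # contradictory information

def AND(a: V, b: V) -> V:
    t1, f1 = a.value
    t2, f2 = b.value
    # true only if both sides say true; false if either side says false
    return V((t1 and t2, f1 or f2))

def OR(a: V, b: V) -> V:
    t1, f1 = a.value
    t2, f2 = b.value
    # true if either side says true; false only if both sides say false
    return V((t1 or t2, f1 and f2))

def NOT(a: V) -> V:
    t, f = a.value
    # negation swaps the "told true" and "told false" evidence
    return V((f, t))
```

Note how a contradiction stays a contradiction: `NOT(V.BOTH)` is still `V.BOTH`, and `AND(V.NULL, V.BOTH)` collapses to `V.FALSE`, behaviour a plain binary True/False cannot express.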
A story containing 986 words, of which more than 523 (some 53%) are on Russia, yet the summary gives zero consideration to Russia. It gave me another thought as well, but I’ll get to that later. The second view (of the same text) came after 6 hours, and there we see:
Lawrence van Rijn’s X post shares a link to his own blog article titled “As oil burns,” published May 4, 2026, focusing on Ukrainian strikes against Russian oil tankers and infrastructure during the ongoing Russia-Ukraine conflict. The article cites a BBC report on Zelensky’s claims of attacks damaging three shadow fleet tankers, a warship, and a patrol boat, estimating 300,000-500,000 barrels per day reduction in Russian oil transshipments and over $7 billion in total losses from such strikes. It highlights Russia’s high military losses including over 11,900 tanks and 1.3 million soldiers, diminishing oil revenues forcing production cuts, and predicts further geopolitical isolation with limited export options remaining.
So what kind of AI requires 6 hours to give a better account of the same text? Is my view of ‘fake AI’ still wrong? As you can see, the first take also makes no mention of the BBC and a few other parts. It got me to the thought that this DML/LLM engine is allegedly used to filter out certain parts, until it can no longer hide a few things. Don’t forget that whatever is done in DML/LLM is programmed by engineers, and whatever they say it is, that is what it becomes. People forget that, and it is why they fall into the AI trap, even though some clearly see that it is a fake solution. Don’t get me wrong, DML and LLM are amazing inventions, but the courts will see through this and someone will blame the programmers and their bosses; this is why I saw the court cases come to blows in 2026. I particularly liked AI Misuse in Australian Courts (2026), where we see “over 73 cases identified where GenAI produced false citations.” So the AI produces false citations? That requires a programmer. Related to that is Warner v. Gilbarco, Inc. (February 2026), where we see the quote “AI to assist in case preparation does not automatically waive attorney-client privilege, characterizing broad requests for AI-generated documentation as a ‘fishing expedition’”. Does this imply the AI uses deception to give us a “fishing expedition”, or did (a massive perhaps) a programmer set up this situation? As the evidence adds up, we get to see a different setting, one that gives notice that we should aim our attention at the programmers and their bosses. So at some point the influencers will be called into court, and it is already happening: “legal battles surrounding AI influencers, digital replicas, and content generation have shifted toward establishing liability for harmful outputs and defining the limits of AI-generated content protection. Key developments in early 2026 include lawsuits over AI-generated sexual content and major court decisions regarding copyright of AI-driven work.” Where we see (at present):
- Ashley St. Clair v. xAI (Deepfakes and Platform Liability)
- Tennessee Teens v. xAI (Child Safety)
- Thaler v. Perlmutter (Copyright of AI Art)
And as these cases are resolved, the influencer drive behind AI will dissipate and these bosses will get to ‘present’ their view, but they will be careful, as they are decidedly unwilling (as I see it) to become liable. So whilst I look for a party willing to allocate $5M (post taxation) to my coffers, I will try to remain vigilant and see what other things some of these ‘Captains of industry’ have been overlooking. Apparently some say I need a hobby; time will tell. Have a great day.


