Tag Archives: Grok

When Grok gets it wrong

This is a real setting because the people out there are already screaming ‘failed’ AI, but AI doesn’t exist yet; it will take at least 15 years before we get to that setting. At present NIP (Near Intelligent Processing) is all there is, and the setting of DML/LLM is powerful and a lot can be done, but it is not AI, it is what the programmer trains it for and that is a static setting. So, whilst everyone is looking at the deepfakes of (for example) Emma Watson and is judging an algorithm, they neglect to interrogate the programmer who created this, and none of them want that to happen, because OpenAI, Google, AWS and xAI are all dependent on these rodeo cowboys (my WWW reference to the situation). So where does it end? Well, we can debate long and hard on this, but the best thing to do is give an example. Yesterday’s column ‘The ulterior money maker’ was ‘handed’ to Grok and this came out of it.

It is mostly correct. There are a few little things, but I am not the critic to pummel those; the setting is mostly right. But when we get to the ‘expert’ level, when things start showing up, that one gives:

Grok just joined two separate stories into one mesh. In addition, consider “However, the post itself appears to be a placeholder or draft at this stage — dated February 14, 2026, with the title “The ulterior money maker”, but it has no substantial body content”, and this came from ‘expert mode’, which happened after Fast mode (the purple section). So as I see it, there is plenty wrong with that so-called ‘expert’ mode, the place where Grok thinks harder. So when you think that these systems are ‘A-OK’, consider that the programmer might be cutting corners, demolishing validations and checks into a new mesh, one you and (optionally) your company never signed up for. Especially as these two articles are founded on very different sources: ‘The ulterior money maker’ has links to SBS and Forbes, and ‘As the world grows smaller’ (written the day before) has merely one internal link to another article on the subject. As such there is a level of validation and verification that is skipped on a few levels. And that is your upcoming handle on data integrity?

When I see these posing wannabes on LinkedIn, I have to laugh at their setting of being fully dependent on AI (it’s fun, as AI does not exist at present).

So when you consider the setting, there is another setting given by Google Gemini (also failing to some degree). It gives us a mere sliver of what was given, as such not much to go on, and it is also slightly inferior to Grok Fast (as I personally see it).

As such there is plenty wrong with the current settings of Deeper Machine Learning in combination with LLM, and I hope that this shows you what you are in for. And whilst we saw a mere 9 hours ago ‘Microsoft breaks with OpenAI — and the AI war just escalated’, I gather there is plenty more fun to be had, because Microsoft has a massive investment in OpenAI and that might be the write-off that Sam Altman needs to give rise to more ‘investors’. And in all this, what will happen to the investments Oracle has put up? All interesting questions and I reckon not too many forthcoming answers, because too many people have capital on ‘FakeAI’ and they don’t wanna be the last dodo out of the pool.

Have a great day.


Filed under IT, Media, Science

Thinking of the Post

You’ve got it, the post, or more classically named the Washington Post. It has been on the mind of millions for the longest of times. In 1989 Robert Downey Jr. wishes he was a reporter for the Washington Post in Chances Are. In 2017 Steven Spielberg makes Meryl Streep into its publisher Katharine Graham, and over time there have been enough mentions and references to see that the Washington Post is (or sadly stated, was) a global icon in news media. I still see it as a global icon, but I do realise that as a star in the top of the Christmas tree it has played its course, and we all wonder how long it will hold out on that premiere position. Perhaps that is how it will end, a true ornament of global media, the top of the tree. So I was a little taken aback when this morning I saw the news (via most other media) that a third of its staff is about to be let go. So let’s first start with what I personally see as a brazen lie. We see (at https://www.newyorker.com/news/annals-of-communications/how-jeff-bezos-brought-down-the-washington-post) ‘How Jeff Bezos Brought Down the Washington Post’ and it comes with the byline “The Amazon founder bought the paper to save it. Instead, with a mass layoff, he’s forced it into severe decline.” In August 2013, Jeff Bezos purchased The Washington Post and other local publications, websites, and real estate for US$ 250 million, transferring ownership to Nash Holdings LLC, Bezos’s private investment company. I have to ask the simple question: how much did the Washington Post cost Jeff Bezos up to now? A newspaper that should bring in millions a day is now seen as “The Post has lost around 500,000 subscribers since the end of 2020 and was set to lose $100 million in 2023, according to The New York Times.” (Source: Wiki) As such the Washington Post has cost Jeff Bezos well over $350,000,000. There are only so many ‘pretty pennies’ any billionaire can fork over.
And nearly ALL AMERICANS are to blame here. Consider the simple truth. If you are an American and you have not bought at least 100 newspapers since August 2013, you are part of that problem. And I am only considering you part of that problem if you did not buy 100 newspapers in the period 2013-2026. There are a few more reasons, but that is the crunch. As there are 350,000,000 US citizens, we can consider you part of that problem if you bought fewer than 100 newspapers in 12 years. The number should be a lot higher, but you might have to divide attention between the LA Times, Boston Globe, New York Times, Wall Street Journal and a few others. News media is on the way out and they have themselves to blame for it. Instead of setting proper media trenches, America let slip the setting by allowing 6,000 newspapers to exist in the USA. That is a separation of 58,400 people per newspaper, and they are all vying for advertisement money, classifieds and attention. Is it a wonder that a place like the Washington Post goes down? The utter stupidity of that is beyond me to understand. I get that there are more newspapers, local newspapers, but consider that there are merely 50 states. Where did the 6,000 newspapers go? It comes down to 120 newspapers PER STATE. And with every iteration that is out there, the big ones suffer. I reckon that several of the newspapers I mentioned are in a similar predicament, and that is before they consider the online presence they have or lack. As spoken, we get the setting (at https://www.npr.org/2026/02/04/nx-s1-5699328/washington-post-layoffs-jobs-bezos) where we see ‘Bezos orders deep job cuts at ‘Washington Post’’, which makes much more business sense as a setting, and here we see “The Washington Post moved Wednesday at the behest of owner Jeff Bezos to cut a third of its entire workforce. The layoffs affect every corner of the newsroom.
In a newsroom Zoom call, Executive Editor Matt Murray called the move “a strategic reset” it needs to compete in the era of artificial intelligence. The paper had not evolved with the times, he said, and the changes were overdue in light of “difficult and even disappointing realities.”” Which is not completely true either. You see, there is no AI at present and all who are appealing to that “lie” are selling themselves short. Actually AI is still two decades away, but the setting that is now coming, the creation of events through DML and LLM, is real, and when verification and validation happens it will become a problem. As I see it, there is no real validation and verification; at present that is done by REAL journalists (editors too). Created stories are a problem, and an AI could do my blog if it had what I have, but as I see it places like Grok are nowhere near ready because they lack the ability to cater to multidimensional viewpoints, so at present I am still a superior power there. I reckon that some of these journalistic dinosaurs are similar too (if they are not part of the Jurassic Park franchise) and that is the value they currently have, and it is a dwindling setting at present. So when we get:

It is hard to disagree (I don’t have any facts on that, but the setting of 3,000,000 paying subscribers does have a handle; it might be too small, it might not, but I think it is too small and the 6,000 newspapers are part of that). I think that a newspaper needs to have journalists, it needs to have a national/global section, and I think that over 2,000 papers are unable to do that. They all hope for the material that floats them and the advertisements that bring them dollars. Not a way to run any newspaper (my number of 2,000 is purely speculative and arbitrary), but to see that one third of the newspapers are unable to fill such a gap and merely capture the faces that read headlines is part of the problem.

That is the setting that I fear is part of the problem, and whilst I do not agree with Jeff Bezos, he was never part of that problem. And to give you the other setting, Wikipedia gives us “In 2018, Khashoggi was murdered by Saudi agents in Istanbul.” This is a blatant lie. I ripped a hole in that UN research with nothing more than logic, and there are more settings that never made sense, but that is the world we are in now: “guilty until proven innocent”, that is the era that what some call AI is vying for, and until there is proper verification and validation that is all you will get. At some point someone will say “If only we still had the Washington Post”, but that is the moment when it is too late. I might live long enough for that moment to come (I am no longer a spring chicken). And at the speed things are coming, I will see this moment and say “You see, I was right all along”, but those in the thick of things will not care and others will feel too hopeless.

That is the refractionary reality of things to come. Have a great day.


Filed under Finance, Media

Call it like it is

That is how I usually flow. It doesn’t make me loved or at times even appreciated, but that is me, oversimplifying problems on the handle, or is that off the handle? So whilst I was watching the BS wannabe influencers going all out on Elon Musk and Grok, there are two settings. The first is that some have a case, but others do not; they are all out to get the maximum out of Grok (and its owner). I tend to take a less obvious course of action. You see, what everyone is ‘ignoring’ (intentional or not) is that AI doesn’t yet exist. So this is all DML with an optional setting of LLM. And they are powerful tools, but they are programmed, and that is what some want to avoid looking at. The programmer has no or little wealth, and in law Torts tells us to go after the money; that was a fact long before Donoghue v Stevenson [1932] AC 562. And that setting is still used today, where these plaintiffs go after the money. Yet I am of a different stroke. I want the issue stopped. One way to do this is to use the principle of nonrepudiation: you and only you could have done this. So we address that in the software. There are legitimate reasons and non-legitimate reasons, and ALL have to submit their data, and whilst that is done, (not unlike Meta) we capture all the data we can, as such we have a battery of data and it is connected to (or embedded in) the picture. But that is not enough. The provider keeps a copy of the image with a hidden watermark (an encryption technique I designed for other reasons) and that goes with every picture. It is there, hidden, and that is how nonrepudiation works. We might not prevent a lot, but now we can do something about it, not tomorrow, not next week or next year, but now. And the people who don’t want to do that, they can find another solution. But this fleecing of Elon Musk and whatever company he has now needs to stop, because those who want to go the ‘Torts’ way are part of the problem and not part of the solution. This is how I see it.
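To make the nonrepudiation idea a little more concrete: the sketch below hides a submitter ID in the least significant bits of a simulated pixel buffer and reads it back. This is purely my illustration of the principle over a toy byte array, not the encryption technique mentioned above and not any provider’s actual API; a real scheme would be cryptographically signed and robust against re-encoding.

```python
# Toy LSB watermarking: the "image" is a flat array of 8-bit pixel
# values. A real implementation would work on decoded image data and
# add cryptographic signing; all names here are illustrative.

def embed_watermark(pixels: bytearray, user_id: int, bits: int = 32) -> bytearray:
    """Hide user_id in the least significant bit of the first `bits` pixels."""
    out = bytearray(pixels)
    for i in range(bits):
        bit = (user_id >> i) & 1
        out[i] = (out[i] & 0xFE) | bit   # overwrite only the LSB
    return out

def extract_watermark(pixels: bytes, bits: int = 32) -> int:
    """Recover the hidden id from the LSBs."""
    return sum((pixels[i] & 1) << i for i in range(bits))

# usage: the marked image looks identical, but the id survives a copy
image = bytearray(range(256)) * 16            # fake 4096-pixel image
marked = embed_watermark(image, user_id=0xC0FFEE)
recovered = extract_watermark(marked)
```

The point is exactly the one made above: wiping the visible file is not enough, because the provider’s copy still carries the identifier, and that is what makes the act non-repudiable.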

And let’s be clear, some actions, like ‘nudifying’ a famous person, are wrong, even if you are the husband. And let’s be clear, that act also comes with a 90% likelihood that you are not capturing the whole and correct image, so there is that too. And weirdly enough there are plenty of stars showing off their priceless pairs in fashion shows (example: Olivia Wilde) and they are ‘willing’ to do that, so save that picture for all it’s worth, but leave the others alone. Now, I get it, there is always a horny teenager that wants to show off that ‘he’ and ‘he’ alone was ‘given’ an image from his star, showing this off to his friends (who are likely using the same solution and have their own starlet). It’s like the Generation Alpha version of who has the biggest dick.

And when they realise what they have done, which usually comes when their wives are giving birth to a daughter, reality hits. That is in 15 years, and something needs to be done now. As such, adding nonrepudiation to this equation makes people wonder what to do. Some will wipe the image, but if the provider has the copy and the connected data, that will not be enough. When the hidden data is matched against whatever that person hands over, we see the first red flags erupt, and that is how the game is changed. And to those ‘privacy’ geeks out there: there is no privacy in nudifying images, and as such the woman in question gets the right to prosecute, and the maker of the image is to be prosecuted, not Elon Musk, not Grok, because there is a whole range of reasons why a filter was created. Some are funny (like the Shrek faces on everyone in a video) and a few others, some filters are there to correct, like removing dopey dodger photobombing away from a photo, but removing clothing from a person is not.

There is a likelihood of me missing out on a few items and that is fine, but the setting I needed to give is here: we need to prosecute and shame the ones doing the deed. That is overlooked by pretty much all the writers of the anti-Grok brigade. None of them are holding these people to account, and adding nonrepudiation does just that, taking care of the culprits. And those mommies and daddies saying that they were merely kids having fun need to realise that there is no innocence in that setting; they failed as educators.

So have a great day and consider what can happen when we go after the transgressors. 


Filed under IT, Law, Media

And Grok ploughed on

That happens, but after yesterday’s blog ‘The sound of war hammers’ (at https://lawlordtobe.com/2025/11/27/the-sound-of-war-hammers/) I got a little surprise. I could not have planned it better if I wanted to.

You see, the article is about the AI bubble and a few other settings. So at times, I want Grok to take a look. No matter what you think, it tends to be a decent solution in DML, and I reckon that Elon Musk with his 500,000 million (sounds more impressive than $500B) has sunk a pretty penny in this solution. I have seen a few shortcomings, but overall a decent solution. As I personally see it (for as far as I have seen it), that solution has a problem looking into and through multidimensional viewpoints. That is how I usually take my writing, as I am overwhelmed at times with the amount of documentation I go through on a daily basis. As such I got a nice surprise yesterday.

So the story goes off with war hammers (a hidden stage there), then I go into the NPR article, and I end up with the stage of tourism (the cost, as the Oxford Economics report gives us), and I am still digging into that. But what does Grok give me?

The expert mode gives us:

Now, in the article I never mentioned FIFA, the 2026 World Cup or Saudi Arabia, so how did this program come to this? Check out the blog, none of those elements were mentioned there. As some tell us, Grok is a generative artificial intelligence (generative AI) chatbot developed by xAI. So where is that AI program now? This is why I made mention in previous blogs that 2026 will be the year that the class actions will start. In my case, I do not care and my blog is not that important; even if it was, it was meant for actual readers (the flesh and blood kind) and that does not apply to Grok. I have seen a few other issues, but this one, in light of the AI bubble story yesterday (17 hours ago), pushed this to the forefront. I could take ‘offense’ to the “self-styled “Law Lord to be””, but whatever, and I have been accused of a lot worse by actual people too. And the quote “this speculation to an unusual metaphor of “war hammers”” shows that Grok didn’t see through my ruse either (making me somewhat proud), which is ego caressing at best, but I have an ego, I merely don’t let it out too often (it tends to get a little too frisky with details). And at present I see an idea that both the UAE and Saudi Arabia could use in their entertainment. There is an upgrade for Trojena (as I see it), and there are a few settings for the Abu Dhabi Marina as well. All in a day’s work, but I need to contend with data to see how that goes. And I tend to take my ideas into a sifter to get the best materials as fine as possible, but that was today, so there will be more coming soon enough.

But what do you do when an AI system bleeds information from other sources? Especially when that data is not validated or verified, and both seem to be the case here. As I see it, there is every chance that some will direct these AI systems to give the wrong data so that these people can start class actions. I reckon that not too many people are considering this setting, especially those in harm’s way. And that is the setting that 2026 is likely to bring. And as I see it, there will be too many law firms of the ambulance-chaser kind to ignore this setting. That is the effect that 8-figure class actions tend to bring, and with the 8-figure number I am being optimistic. When I see what is possible, there is every chance that any player in this field is looking at 9 or even 10-figure settlements, especially when it concerns medical data. And no matter what steps these firms take, there will be an ambulance chaser who sees a hidden opportunity. Even if there is a second-tier option where a cyber attack can launch the data into turmoil, those legal minds will make a new setting where those AI firms never considered the implications that it could happen.

I am not being dramatic or overly doom speaking. I have seen enough greed all around me to see that this will happen. A mere three months ago we saw “The “Commonwealth Bank AI lawsuit” refers to a dispute where the Finance Sector Union (FSU) challenged CBA for misleading staff about job cuts related to an AI chatbot implementation. The bank initially made 45 call centre workers redundant but later reversed the decision, calling it a mistake after the union raised concerns at the Fair Work Commission. The case highlighted issues of transparency, worker support, and the handling of job displacement due to AI.” So at that point, how dangerous is the setting where any AI is trusted to any degree? And that is before some board of directors sets the term that these AI investments better pay off, and that will cause people to do silly (read: stupid) things. A setting that is likely to happen as soon as next year.

And at this time, Grok is merely ploughing on, setting the stage where someone will trust it to make life-changing changes to their firm or data. And even if it is not Grok, there is every chance that OpenAI will do that, and that puts Microsoft in a peculiar stage of vulnerability.

Have a great day, time for some ice cream. It was 33 degrees today, so my living room is hot as hell; as such, ice cream is my next stage of cooling myself.


Filed under Finance, IT, Media, Science

Driving the herds

OK, I am over my anger spat from yesterday (still growling though), and in other news I noticed that Grok (Musk’s baby) cannot truly deal with multidimensional viewpoints, which is good to know. But today I tried to focus on Oracle. You know, whatever AI bubble will hit us (and it will), Oracle shouldn’t be as affected as some of the data vendors who claim that they have the golden AI child in their crib (a good term to use a month before Christmas). I get that some people are ‘sensitive’ to the doom speakers we see all over the internet and some will dump whatever they have to ‘secure’ what they have, but the setting of those doom speakers is to align THEIR alleged profit needs with others dumping their future. I do not agree. You see, Oracle, Snowflake and a few others offer services and they are captured by others. Snowflake has a data setting that can be used whether AI comes or not, whether people need it or not. And they will be hurt when the firms go ‘belly up’ because it will count as lost revenue. But that is all it is, lost revenue. And yes, both will be hurting when the AI bubble comes crashing down on all of us. But the stage that we see is that they will skate off the dust (in one case snow) and that is the larger picture. So I took a look at Oracle and behold, on Simply Wall Street we get ‘Oracle (ORCL) Is Down 10.8% After Securing $30 Billion Annual Cloud Deal – Has The Bull Case Changed?’ (at https://simplywall.st/stocks/us/software/nyse-orcl/oracle/news/oracle-orcl-is-down-108-after-securing-30-billion-annual-clo) with these sub-line points:

So they triple their ‘business’ and they lose 10.8%? It leads to questions. As I personally see it, Wall Street is trying to insulate itself from the bubble that other (mostly) software vendors bring to the table. And Simply Wall Street gives us “To believe in Oracle as a shareholder right now is to trust in its transformation into a major provider of cloud and AI infrastructure to sustain growth, despite high debt and reliance on major AI customers. The recent announcement of a US$30 billion annual cloud contract brings welcome long-term visibility, but it does not change the near-term risk: heavy capital spending and dependence on sustained AI demand from a small set of large clients remain the central issues for the stock.” And I can get behind that train of thought, although I think that Oracle and a few others are decently protected from that setting. No matter how the non-existent AI goes, DML needs data and data needs secure and reliable storage. So in comes Oracle in plenty of these places and they do their job. If 90% of the business goes boom, they will already have collected on these service terms for that year at least, 3-5 years if they were clever. So no biggy; 3-5 years collected is collected revenue, even if that firm goes bust after 30 days, they might get over it (not really).

And then we get two parts: “Oracle Health’s next-generation EHR earning ONC Health IT certification stands out. This development showcases Oracle’s commitment to embedding AI into essential enterprise applications, which supports a key catalyst: broadening the addressable market and stickiness of its cloud offerings as adoption grows across sectors, particularly healthcare. In contrast, investors should be aware that the scale of Oracle’s capital commitment brings risks that could magnify if…” OK, I am on board with these settings. I kinda disagree, but then I lack economic degrees, and a few people I do know will completely see this part. You see, I personally see “Oracle’s commitment to embedding AI into essential enterprise applications” as a plus all across the board. Even if I do believe that AI doesn’t exist, the data will be coming, and when it is ironed out, Oracle was ready from the get-go (when they translate their solutions to a trinary setting). And I do get (but personally disagree with) “the scale of Oracle’s capital commitment brings risks that could magnify if”. Yes, there is risk, but as I see it Oracle brings a solution that is applicable to this frontier, even if it cannot be used to its full potential at present. So there is a risk, but when these vendors pay 5 years upfront, it becomes instant profit at no use of their clouds. You get a cloud with a population of 15 million, but it is inhabited by 1.5 million. As such they have a decade of resources to spare. I know that things are not that simple and there is more, but what I am trying to say is that there is a level of protection that some have and many will not. Oracle is on the good side of that equation (as are Snowflake, Azure, iCloud, Google Gemini and whatever IBM has), oh, and the chips of Nvidia are also decently safe until we know how Huawei is doing.

And the setting we are also given, “Oracle’s outlook forecasts $99.5 billion in revenue and $25.3 billion in earnings by 2028. This is based on annual revenue growth of 20.1% and an earnings increase of $12.9 billion from current earnings of $12.4 billion”, matters, as Oracle is predicting that revenue comes calling in 2028, so anyone trying to dump their stock now is as stupid as they can be. They are telling their shareholders that for now revenue is thimble sized, but after 2028, which is basically 24 months away, the big guns come calling and the revenue pie is being shared with its shareholders. So you do need brass balls to do this and you should not do this with your savings, that is where hedge funds come in, but the view is realistic. The other day I saw Snowflake use DML in the most innovative way: one of their speakers showed me a new lost-and-found application and it was groundbreaking. Considering the amount of lost and found out there at airports and bus stations, they showed me how a setting of a month was reduced to a 10-minute solution. As I saw it, places like Dubai, London and Abu Dhabi airports could make this beneficial for their 90 million passengers, which is almost unheard of, and I am merely mentioning three of dozens upon dozens of needy customers all over the world. A direct consequence of ‘AI’ particulars (I still think it is DML with LLM), but no matter the label, it is directly applicable to whomever has such a setting, and whilst we see the stage of ‘most usage fails in its first instance’, this is not one of them, and as such in those places Oracle/Snowflake is a direct win. A simple setting that has groundbreaking impact. So where is the risk there? I know places have risks, but to see this simple application work shows that some are out there fighting the good fight on an achievable setting, and no IP was trained upon and no class actions are to follow. I call that a clear win.
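For those wondering how such a lost-and-found matcher could work in principle: the toy sketch below scores a lost-item report against found-item descriptions with plain bag-of-words cosine similarity. The actual demo will have used proper embeddings and a real database; every name and description here is invented for illustration.

```python
# Toy lost-and-found matching: rank found-item descriptions by cosine
# similarity to the free-text report of a lost item. All data invented.
from collections import Counter
from math import sqrt

def similarity(a: str, b: str) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

found = [
    "black leather wallet with zipper",
    "red umbrella left at gate 12",
    "blue backpack with laptop inside",
]
lost_report = "lost my black leather wallet with a zipper"

# pick the found item whose description best matches the report
best = max(found, key=lambda d: similarity(lost_report, d))
```

Even this crude version shows why the turnaround collapses: instead of a clerk reading a month of paper forms, every incoming report is scored against the whole inventory in milliseconds.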

So, before you sell your stock in Oracle like a little girl, consider what you have bought and consider who wants you to sell, and why, because they are not telling you this for your sake, they have their own stake in mind. I am not telling you to sell anything. I am merely telling you to consider what you bought and what actual risks you are running if you sell before 2029. It is that simple.

Have a great day (yes, Americans too, I was angry yesterday). Those bastards in Vancouver and Toronto are still enjoying their Saturday.


Filed under Finance, IT, Media, Science

Labels

That is the setting, and I introduced the readers to this setting yesterday, but there was more, and there always is. Labels are how we tend to communicate: there is the label of ‘Orange baboon’, there is the label of ‘village idiot’ and there are many more labels. They tend to make life ‘easy’ for us. They are also the hidden trap we introduce to ourselves. In the ‘old’ days we even signified Business Intelligence by this, because it was easy for the people running these things.

An example can be seen in:

And we would see the accommodating table with, on one side, completely agree, agree, neutral, disagree and completely disagree, if that was the 5-point labeling setting we embraced, and as such we saw a ‘decently’ complete picture and we all agreed that this was how it had to be.

But the not so hidden snag is that, first, these labels are ordinal (at best) and the setting of Likert scales (their official name) is not set in a scientific way; there is no equally adjusted difference between the numbers 1, 2, 3, 4, 5. That is just the way it is. And in the old days this was OK (as the feeling went). But today, in what some call the AI setting and I call NIP at best, the setting is too dangerous. Now, set this by today’s standards.

The simple question “Is America bankrupt?” gets all kinds of answers, and some will quite correctly give us “In contrast, the financial health of the United States is relatively healthy within the context of the total value of U.S. assets. A much different picture appears once one looks at the underlying asset base of the private and public economy.” I tend to disagree, but that is me without my economic degrees. But in the AI world it is a simple setting of numbers, and America needs Greenland and Canada to continue the pretence that “the United States is relatively healthy within the context of the total value of U.S. assets”. Yes, that would be the setting, but without those two places America is likely close to bankrupt and the AI bubble will push them over the edge. At least that is how I see it, and yesterday I gave one case (of the dozen or so cases that will follow in 2026) where that startup is basically agreeing to a larger than $2 billion settlement. So in what universe does a startup have this money? That is the constriction of AI, and in that setting of unverified and unscaled data the present gets to be worse. And I remember an answer given to me at a presentation; the answer was “It is what it is” and I kinda accepted it, but an AI will go bonkers and wrong in several ways when that is handed to it. And that is where the setting of AI and NIP (Near Intelligent Processing) becomes clear. NIP is merely a ’90s chess game that has been taught (trained) every chess game possible and it takes from that setting, but the creative intellect does an illogical move and the chess game loses whatever coherency it has; that move was never programmed and that is where you see the difference between AI and NIP. The AI will creatively adjust its setting, the NIP cannot, and that is what will set the stage for all these class actions.

The second setting is ‘human’ error. You see, I placed the Likert scale intentionally, because in between the multitude of 1-5 scales there is one likely variable that was set to 5-1 and the programmers overlooked it. Now, when you see these AI training grounds, at least one variable is set in the wrong direction, tainting the others and messing with the order of the adjusted personal scales. And that is before we get to the results of CLUSTER and QUICKCLUSTER, where a few more issues are introduced to the algorithm of the entire setting, and that is where the verification of data becomes imperative, and at present it is not there.
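That reversed 5-1 variable is detectable, and the sketch below shows the classic check: an item that correlates negatively with each respondent’s scale total is flagged and flipped (6 minus the value on a 1-5 scale) before any clustering is run. The survey data is invented for the example; this is the principle, not any vendor’s validation routine.

```python
# Toy illustration of the reversed-scale trap: item 2 was coded 5-1
# instead of 1-5, and on the raw numbers it would poison clustering.
# A negative correlation with the scale total is the red flag.

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# rows = respondents, columns = Likert items; column 2 is coded backwards
responses = [
    [5, 4, 1, 5],
    [4, 5, 2, 4],
    [2, 1, 4, 2],
    [1, 2, 5, 1],
    [3, 3, 3, 3],
]
cols = list(zip(*responses))
totals = [sum(row) for row in responses]

# flag items that run against the scale total, then flip them: 6 - x
flagged = [i for i, col in enumerate(cols) if pearson(col, totals) < 0]
fixed = [[6 - v if i in flagged else v for i, v in enumerate(row)]
         for row in responses]
```

One overlooked column like this is exactly the kind of quiet taint that then propagates into every cluster and every model trained on the file.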

So here is a sort of random image, but the question it needs to raise is: what makes these different sources in any way qualified to be a source? In this case, if the data is skewed in Ask Reddit, 93% of the data is basically useless and that is missed on a few levels. There are high-quality data sources, but these are few and far between; in the meantime these sources get to warp any other data we have. And if you are merely looking at legacy data, there is still the Likert-scale data your own company had, and that data is debatable at best.

Labels are dangerous and they are inherently based on the designer of that data source (possibly even long dead), and the design tends to be done in his or her early stages of employment, making the setting even more debatable as it was ‘influenced’ by greedy CEOs and CFOs who had their bonus in mind. A setting mostly ignored by all involved.

As such, are you surprised that I see the AI bubble for what it is? A dangerous reality coming our way in sudden, likely unforeseen ways, and it is the ‘unforeseen way’ that is the danger, because when these disgruntled employees talk to those who want to win a class action, all kinds of data will come to the surface, and that is how these class actions are won.

It was a simple setting I saw coming a mile away, and whilst you wandered by I added the Dr. Strange part; you merely thought you had the labels thought through, but the setting was a lot more dangerous and it is heading straight for your AI dataset. All wrongly thought through, because training data needs to have something verifiable as ‘absolutely true’, and to illustrate this we can merely make a stop at Elon Musk Inc. and its ‘AI’ Grok, which offers the almost perfect example. We are given from one source “The bot has generated various controversial responses, including conspiracy theories, antisemitism, and praise of Adolf Hitler, as well as referring to Musk’s views when asked about controversial topics or difficult decisions.” That is almost a dangerous setting for the people fueling Grok in a multitude of ways, and then there is ‘Hundreds of thousands of Grok chats exposed in Google results’ (at https://www.bbc.com/news/articles/cdrkmk00jy0o), where we see “The appearance of Grok chats in search engine results was first reported by tech industry publication Forbes, which counted more than 370,000 user conversations on Google. Among chat transcripts seen by the BBC were examples of Musk’s chatbot being asked to create a secure password, provide meal plans for weight loss and answer detailed questions about medical conditions.” Is there anybody willing to do the honours of classifying that data (I absolutely refuse to do so)? I already gave you the headwind in the story above. In the first place, how many of these 370,000 users are medical professionals? I think you know where this is going. And I think Grok is pretty neat as a result, but it is not academically useful.
At best it is a new form of Wikipedia, at worst it is a round data system (trashcan), and even though it sounds nice, it is only as nice as labels can be, and that is exactly why these class cases will be decided out of court. As I personally see it, when these hit, Microsoft and OpenAI will shell out trillions to settle out of court, because the court damage would be infinitely worse. And that is why I see 2026 as the year the greed-driven get to start filling their pockets, because the mental hurt that is brought to court is as academic as a Likert scale (not a scientific setting among them), and the pre-AI setting of mental harm reads as “‘Mental damage’ in court refers to psychological injury, such as emotional trauma or psychiatric conditions, that can be the basis for legal claims, either as a plaintiff seeking compensation or as a criminal defendant. In civil cases, plaintiffs may seek damages for mental harm like PTSD, depression, or anxiety if they can prove it was caused by another party’s negligent or wrongful actions, provided it results in a recognizable psychiatric illness.” So, as you see it, is this enough or do you want more? Oh, screw that, I need coffee now and I have a busy day ahead, so this is all you get for now.

Have a great day, I am trying to enjoy Thursday; Vancouver is quite a few hours behind me on this effort. So there is a time scale we all have to adhere to (hidden nudge), as such enjoy the day.


Filed under Finance, IT, Media, Politics, Science

When a cigar is anything but

That is the expression as I see it. It is based on the Freudian premise ‘Sometimes a cigar is just a cigar’, and it fits the setting. You see, on August 31st I wrote ‘The wave of brains’ (at https://lawlordtobe.com/2025/08/31/the-wave-of-brains/).

I set up a new RPG game, a new gaming IP, and in a non-related issue I accidentally clicked on the ‘Grok’ button, which gave me a new view on my own work. Then I did it intentionally and it gave me a few items; it also gave me an idea I did not have before. The idea came from my school days (my merchant navy school days), and it came to me that the idea could have multiple applications.

So as I was ‘given’:

And there was more, but this gave me a few ideas. In the late 70s there was something called RADAR scan transmissions; it used a RADAR to send communications around. I never saw it myself, but I heard of it. Now consider that some use display software to use it in another way, like the setting of images that are used (through personalised filters) to create art. That setting can be used in two ways: to rely on the art to help you find stuff, or to use the camera and filters to see other places. For instance, scanning a picture might give you location data, and images could give you personal references. That might be used at higher skill levels to make the game more challenging. For example, the image of a droid might give you its Droid Identification Number (DIN), a unique 17-character code used to identify a specific droid, similar to a droid’s fingerprint. These are some of the settings that can be used to find other places and other (not in THAT dome) places. Certain cameras now have identifiers, and some of that can be embedded in images. It allows for scanners to identify elements, and we can use that to gain access to some places in a dome not considered before.
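As a sketch of how such a 17-character DIN could work in-game, here is a self-checking code modelled loosely on the way road-vehicle VINs carry a check digit. The character values, weights and check position below are all invented for this game idea, not any real specification.

```python
import string

# Hypothetical Droid Identification Number (DIN): 17 characters, with the
# 9th character acting as a built-in check character (VIN-style, invented).
CHAR_VALUE = {c: int(c) for c in string.digits}
CHAR_VALUE.update({c: 10 + i for i, c in enumerate(string.ascii_uppercase)})

# Position 9 (index 8) carries the check character, so its weight is 0.
WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def din_check_char(din: str) -> str:
    # Weighted sum of all character values, reduced mod 11.
    total = sum(CHAR_VALUE[c] * w for c, w in zip(din.upper(), WEIGHTS))
    r = total % 11
    return "X" if r == 10 else str(r)  # 'X' stands in for the value 10

def din_is_valid(din: str) -> bool:
    return len(din) == 17 and din[8].upper() == din_check_char(din)
```

In play, a scanner reading a droid image could validate the embedded DIN before unlocking anything, so a player who fakes a code at random almost always fails the check.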

These are some of the ideas that bring a stronger presence to a gaming IP, and when the IP is powerful enough, you get a new following and a stronger franchise. A stronger franchise was not my idea, as a game needs to have an end and going on and on is not my way, but a stronger presence for any IP is always welcome, and I found that setting as I was researching the Grok output. I had actually not anticipated that idea. Whatever will I think of next?

Yet the setting of a larger technology presence was always my plan, as I see that when too much of the SciFi becomes Fi, fiction leads to fantasy. I am not against that, but I feel the need to adhere to a larger input of science, and as my grandfather used to say, there is no place for science in Tartarus. Grandpapa was probably right. There is also a need for weapons, but not essentially personal weapons. The need to set traps might make a better droid killer than anything I could think of, but how to do that? Well, there are a few settings, and they are only needed for security and military droids; the rest is basically harmless. The benefit of that is that you will need to play by strict rules with the stuff you find. And as the astronaut (played by you) might be a dunce in this regard, there is the need to create books or instruction discs that can be put into a viewer so that you gain these skills. For that I have a setting of multiple helmets (spread all over the game); some are mental, and other helmets are suit-connected systems streaming the knowledge to your body. In context (speculatively): Microsoft took years to launch a reboot of Fable (which was an awesome game), whilst I wrote in months the setting of an entirely new RPG, so there. And should you doubt this (that would be a fair call), I have done this 4 times already. So where is their non-existing AI now?

So as they invaded the sanctity of my gaming life, I gave my gaming IP to everyone else, so where are their billions now? Whilst they aren’t getting it done, all the others will get a head start to gain momentum. That is how I roll (I am a vindictive bastard at times).

So back to the game. As I see other avenues to thwart detection, I am also setting up the idea of sneaking your way into the security offices to set your ID to an ‘allowed and safe’ setting. The other systems will allow you to go outside the dome to repair and connect solar panels, and as Mars gets a mere 44% of the sunlight Earth does, more panels are needed. In the later stages you will have to repair a fusion reactor (with a little snag or two). Oh, that reminds me, there will be a setting for survivors (a salute to System Shock), but that will take a little more time.
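The 44% figure can carry a back-of-envelope panel count for a dome. Every number below other than that fraction (panel area, efficiency, derating, the dome load) is an assumption for illustration, not engineering data.

```python
import math

# Back-of-envelope: how many panels does a Mars dome need?
EARTH_IRRADIANCE = 1361.0      # W/m^2, the solar constant at Earth
MARS_FRACTION = 0.44           # Mars receives roughly 44% of that
PANEL_AREA = 2.0               # m^2 per panel (assumed)
PANEL_EFFICIENCY = 0.20        # 20% conversion (assumed)
DERATING = 0.5                 # dust, tilt, daylight averaging (assumed)

def panels_needed(load_watts: float) -> int:
    per_panel = (EARTH_IRRADIANCE * MARS_FRACTION
                 * PANEL_AREA * PANEL_EFFICIENCY * DERATING)
    return math.ceil(load_watts / per_panel)
```

With these assumptions each panel delivers roughly 120 W, so a hypothetical 100 kW dome load needs about 835 panels, whereas on Earth (fraction 1.0) the same rig would need fewer than half as many — which is why the repair trips outside the dome keep coming.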

All this and more are now settling in my brain, and as the ride between the domes gets to be repaired, I just thought of a new setting for the third dome: there is no absence of energy, but a massive abundance of energy, which gave that dome its own problems, and as such it will give you some too. But that will be for another day, and that is all for today (the new day just started 11 minutes ago), so have a great day and I will snore and think of new gaming IP if possible.


Filed under Gaming, IT, Science

Microsoft in the middle

Well, that is the setting we are given; however, it is time to give them some relief. It isn’t just Microsoft; Google and all the other peddlers handing over AI like it is a decent brand are involved. So the BBC article (at https://www.bbc.com/news/articles/c24zdel5j18o) giving us ‘Microsoft boss troubled by rise in reports of ‘AI psychosis’’ is a little warped. First things first: what is psychosis? We are given “Psychosis refers to a collection of symptoms that affect the mind, where there has been some loss of contact with reality. During an episode of psychosis, a person’s thoughts and perceptions are disrupted and they may have difficulty recognizing what is real and what is not.” Basically the setting most influencers like to live by. Many do this already, for the record. The media does this too.

As such, people are losing grip on reality. So as we see the malleable setting that what we see is not real, we get the next setting. As people lived by the rule of “I’ll believe it when I see it” for decades, this becomes a shifty setting. So whilst people want to ‘blame’ Microsoft for this, as I see it, the use of NIP (Near Intelligent Parsing) is getting a larger setting. Adobe, Google, Amazon: they are all equally guilty.

So we wonder: how far does the media take this?

I’ll say, this far.

But back to the article. The article also gives us “In a series of posts on X, he wrote that “seemingly conscious AI” – AI tools which give the appearance of being sentient – are keeping him “awake at night” and said they have societal impact even though the technology is not conscious in any human definition of the term.” I respond that when you give any IT technology a level 8 question (user level) and it responds like the answer is casually true, it isn’t. It comes from my mindset that states: if sarcasm bounces back, it becomes irony.

So whilst we see that setting in “There’s zero evidence of AI consciousness today. But if people just perceive it as conscious, they will believe that perception as reality,” he wrote, and in the related rise of a new condition called “AI psychosis” (a non-clinical term describing incidents where people increasingly rely on AI chatbots such as ChatGPT, Claude and Grok and then become convinced that something imaginary has become real), it is kinda true, but the most imaginative setting of the use of Grok tends to be

I reckon we are safe for a few more years. And whilst we pore over the essentials of TRUE AI, we tend to have at least two decades, and even then only the really big players can afford it. As such there is a chance the first REAL AI will respond with “我們可以為您提供什麼協助?” (“How may we assist you?”). As I see it, we are safe for the rest of my life.

So whilst we consider “Hugh, from Scotland, says he became convinced that he was about to become a multi-millionaire after turning to ChatGPT to help him prepare for what he felt was wrongful dismissal by a former employer.” Consider that law shops and most advocates give initial free advice; they want to ascertain whether it pays for them to go that way. So whilst we are told it doesn’t pay, a real barrister will see that the claim is either lawless, trivial or too hard to prove, and he will give you that answer. And that is the reality of things. Considering ChatGPT any kind of solution makes you eligible for the Darwin Award. It is harsh, but that is the setting we are now in. It is the reality of things that matter, and that is not on any of these handlers of AI (as they call it). And I have written about AI several times, so if it didn’t stick, it’s on you.

Have a great day and don’t let the rain bother you, just fire whoever in media told you it was gonna rain and get a better result.


Filed under IT, Media, Science