Tag Archives: Eric Winter

IT said vs IT said

This is a setting we are about to enter. It was never rocket science, it was simplicity itself. I mentioned it before, but now Forbes is also blowing the trumpet I sounded in a clarion call in the past. The article (at https://www.forbes.com/councils/forbestechcouncil/2025/07/11/hallucination-insurance-why-publishers-must-re-evaluate-fact-checking/) gives us ‘Hallucination Insurance: Why Publishers Must Re-Evaluate Fact-Checking’ with “On May 20, readers of the Chicago Sun-Times discovered an unusual recommendation in their Sunday paper: a summer reading list featuring fifteen books—only five of which existed. The remaining titles were fabricated by an AI model.” We have seen these issues in the past. A law firm citing cases that never existed is still my favourite at present. In continuation we get “Within hours, readers exposed the errors across the internet, sharply criticizing the newspaper’s credibility. This incident wasn’t merely embarrassing—it starkly highlighted the growing risks publishers face when AI-generated content isn’t rigorously verified.” We can focus on the high cost of AI errors, but as soon as the cost becomes too high, the parties behind the error will play a trump card and settle out of court, with the larger population being kept in the dark on all other settings. But it goes in a nice direction: “These missteps reinforce the reality that AI hallucinations and fact-checking failures are a growing, industry-wide problem. When editors fail to catch mistakes before publication, they leave readers to uncover the inaccuracies. Internal investigations ensue, editorial resources are diverted and public trust is significantly undermined.”

You see, verification is key here and all of them are guilty; there is not one exception to this (as far as I can tell). I wrote about this in 2023 in ‘Eric Winter is a god’ (at https://lawlordtobe.com/2023/07/05/eric-winter-is-a-god/). There, on July 5th, I noticed a simple setting: Google claimed that Eric Winter (that famous guy from The Rookie) played a role in The Changeling (with the famous actor George C. Scott). The issue is twofold. The first is that Eric was only a small child when the movie was made. The second is that the real person in that role was Erick Vinther (playing a Young Man, uncredited). This simple error is still all over Google; as far as I can see, only IMDb has the true story. This is a simple setting, errors happen, but in the more than two years since I reported it, no one has fixed it. So consider that these errors creep into a massive bulk of data, personal data becomes inaccurate, and these errors will continue to seep into other systems. At some point Eric Winter sees his biography riddled with movies and other works he never touched, leaving him to wonder “Did I do this?”. And there will be more; as such, verification becomes key and these errors will hamper multiple systems.

And in this, I have some issues with the setting that Forbes paints. They give us “This exposes a critical editorial vulnerability: Human spot-checking alone is insufficient and not scalable for syndicated content. As the consequences of AI-driven errors become more visible, publishers should take a multi-layered approach”. You see, as I see it, there is a larger setting with context checking, a near impossible setting. As people rely on granularity, the setting becomes a lot more oblique.
A simple example: “Standard deviation is a measure of how spread out a set of values is, relative to the average (mean) of those values.” That is merely one version; the second one is “This refers to the error in a compass reading caused by magnetic interference from the vessel’s structure, equipment, or cargo.”

Yet the version I learned in the ’70s is “Standard deviation: the offset between true north and magnetic north. This differs per year, and the offset rotates in an eastward direction.” In English it is called the compass deviation, in Dutch the Standard Deviation, and that is the simple setting of how inaccuracies and confusions enter data settings (aka metadata), and that is where we go from bad to worse. The Forbes article illuminates one side, but it also gives rise to the utter madness that this StarGate project will, to some extent, become. Data upon data and the lack of verification.
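To make that confusion concrete, here is a minimal sketch (my own example in Python; the field names and values are made up, not taken from any real dataset) of how one ambiguous label like “deviation” can carry two unrelated meanings, and how a naive merge quietly erases the distinction the metadata was supposed to preserve.

```python
# Illustrative sketch: the same field label, two unrelated meanings.
import statistics

# Record A: "deviation" is the statistical spread of some lab readings.
sensor_record = {"source": "lab", "deviation": statistics.stdev([9.8, 10.1, 10.0, 9.9])}

# Record B: "deviation" is a compass correction for a vessel, in degrees.
compass_record = {"source": "vessel", "deviation": 3.5}

merged = [sensor_record, compass_record]

# Without a unit or context column, any downstream aggregate is meaningless.
naive_average = sum(r["deviation"] for r in merged) / len(merged)
print(f"Naive 'average deviation': {naive_average:.2f}  <- mixes units and meanings")
```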

As I see it, all these firms are relying on ‘their’ version of AI, and in the bowels of their data are clusters of data lacking any verification. The setting of data explodes in many directions, and that lack works for me, as I have cleaned data for the better part of two decades. As I see it, dozens of data entry firms are looking at a new golden age. Their assistance will be required on several levels. And if you doubt me, consider builder.ai, backed by none other than Microsoft; it was a billion dollar firm and in no time it had an expected value of zero. After the fact we learned that 700 engineers were at the heart of builder.ai (no fault of Microsoft), but in this I wonder how Microsoft never saw it coming. And that is merely the start.

We can go on about other firms and how they rely on AI for shipping and customer care, and the larger setting that I speculatively predict is that people will try to stump the Amazon system. As such, what will it cost them in the end? Two days ago we were given ‘Microsoft racks up over $500 million in AI savings while slashing jobs, Bloomberg News reports’, so what will they end up saving when the data mismatches happen? Because it will happen, it will happen to all. These systems are not AI, they are deeper machine learning systems, optionally with LLM (Large Language Model) parts, and where an AI is supposed to handle new data, these systems can merely work on the data they have, verified data to be more precise, and none of these systems are properly vetted. That will cost these companies dearly. I am speculating that the people fired on this premise might not be willing to return, making it an expensive sidestep to say the least.

So don’t get me wrong, the Forbes article is excellent and you should read it. The end gives us “Regarding this final point, several effective tools already exist to help publishers implement scalable fact-checking, including Google Fact Check Explorer, Microsoft Recall, Full Fact AI, Logically Facts and Originality.ai Automated Fact Checker, the last of which is offered by my company.” So here we see the ‘Google Fact Check Explorer’. I do not know how far this goes, but as I showed you, the setting with Eric Winter has been there for years and no correction was made, even though IMDb doesn’t have this error. I stated once before that movies should be checked against the age the actors (actresses too) had at the time the movie was made, and that potential issues should be flagged; in the case of Eric Winter, a flag like ‘first film or TV series’ might have helped (a rough sketch of such a check follows below). And this is merely entertainment, the least of the data settings. So what do you think will happen when Adobe or IBM (mere examples) releases new versions and there is a glitch in how these versions are set in the data files? How many issues will occur then? I recollect that some programs had interfaces built to work together. Would you like to see the IT manager when that goes wrong? And it will not be one IT manager, it will be thousands of them. As I personally see it, I feel confident that there are massive gaps in the assumption of data safety at these companies. I introduced a term in the past, namely NIP (Near Intelligent Parsing), and that is the setting that these companies need to fix on. Because there is a setting that even I cannot foresee in this. I know languages, but there is a rather large gap between current systems and the systems that still use legacy data; the gaps in there are (from as much data as I have seen) decently massive, and that implies inaccuracies to behold.
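As promised above, here is a rough sketch of the plausibility check I have in mind. It is only a sketch: the credit list and the review threshold are my own assumptions, not the schema of IMDb, Google or any other catalogue.

```python
# Compare an actor's birth year with each credit's release year and flag
# anything implausible for human review (assumed data, illustrative only).
BIRTH_YEAR = 1976          # Eric Winter
REVIEW_BELOW_AGE = 10      # an early-childhood "first credit" deserves a second look

credits = [
    ("What's Up, Doc?", 1972),
    ("The Changeling", 1980),
    ("The Rookie", 2018),
]

for title, released in credits:
    age = released - BIRTH_YEAR
    if age < 0:
        print(f"FLAG {title}: released {-age} year(s) before the actor was born")
    elif age < REVIEW_BELOW_AGE:
        print(f"REVIEW {title}: actor would have been {age} at release; possible misattribution")
```

Simplistic as it is, even this would have flagged both credits that Google attached to Eric Winter.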

I like the end of the Forbes article: “Publishers shouldn’t blindly fear using AI to generate content; instead, they should proactively safeguard their credibility by ensuring claim verification. Hallucinations are a known challenge—but in 2025, there’s no justification for letting them reach the public.” It is a fair approach, but there is a rather large dependency on the field of knowledge where it is applied. You see, language is merely one side of that story; measurements are another. As I see it (using the joule as an example): “It represents the amount of work done when a force of one newton moves an object one meter in the direction of the force. One joule is also equivalent to one watt-second.” You see, cars and engineering use the joule in multiple ways, so what happens when the data shifts and values are missed? This is all engineer and corrector based, and errors will get into the data (a small sketch of unit-tagged values follows below). So what happens when lives are at stake? I am certain that this example goes a lot further than mere engineers. I reckon that similar settings exist in medical applications. And who will oversee these verifications?
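To show what I mean by keeping measurements honest, here is a small sketch (again my own, with invented readings) where every value carries an explicit unit and the conversion to joules fails loudly instead of silently mixing figures.

```python
# One joule equals one newton-metre of work and one watt-second;
# one kilowatt-hour equals 3.6 million joules.
TO_JOULES = {"J": 1.0, "Ws": 1.0, "Nm": 1.0, "kWh": 3.6e6}

def to_joules(value, unit):
    """Convert a unit-tagged value to joules, refusing unknown units."""
    if unit not in TO_JOULES:
        raise ValueError(f"unknown energy unit: {unit!r}")
    return value * TO_JOULES[unit]

readings = [(1500, "J"), (2.5, "kWh"), (1500, "Ws")]
total = sum(to_joules(value, unit) for value, unit in readings)
print(f"Total energy: {total:,.0f} J")
```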

All good questions and I cannot give you an answer, because as I see it, there is no AI, merely NIP, and some tools are fine with deeper machine learning. But certain people seem to believe the spin they created, and that is where the corpses will show up, more often than not at the most inconvenient times.

But that might merely be me. Well, time for me to get a few hours of snore time. I have to assassinate someone tomorrow and I want it to look good for the script it serves. I am a stickler for precision in those cases. Have a great day.


Filed under Finance, IT, Media, Science

There is always a script

It seems that we are decently obsessed with series and movies based on video games. And there is plenty to be seen. In 2024 we got the first instalment of the serialification (this might not be a real word) of the game, the now applauded series Fallout. And there is plenty to applaud in this series. You see, this first season got 12 wins & 73 nominations. As such it was recognised on a global scale. So Fallout has broken the mould on a few levels. We will soon see a second season of The Last of Us. And there are a few more coming. There are mentions of Beyond: Two Souls, a game based on the exploits of Ellen Page (now Elliot Page), with contributions from Willem Dafoe, Kadeem Hardison and Eric Winter. The game was quite excellent and I look forward to seeing the result on TV.

And what else?
That is what this blog is about. You see, there is a game that was released in 2014 titled Infamous: Second Son. I rated the game as average. I still do. Yet it was the gameplay that hindered a higher score. Metacritic gave it 80% (I gave it 75%). The game starts really well; it is after the first fight, going into Seattle, that the issue starts. The game is too linear for a game of this style. There was initially an issue with the side missions, something that was fixed, but it was the linearity and the fact that the second power was too strong that made me give the rating as low as I did.

Still, the game had amazing sides too. The story was amazing on nearly every level. The power that you receive in the beginning was awesome. The other powers are really good too, but Neon (the second power) was overly strong. I would have switched Neon and Video around and adjusted the story (more like tweaking it), and it would have shaped into a massive hit for the Sony studios. The funny part was that when the game came out I had never heard of this person named Banksy; as such, the graffiti could be a nice edge to the storyline.

Is that it?
Well yes, it is the recognition that the game had amazing properties that could easily turn into a TV series. For the horror/terror fans there is the notion of Prototype, where Alex Mercer (you) is the target and the goal is finding out what happened. I particularly liked the achievement for driving over 65,536 infected people, and that is quite the grind. The story is captivating and the presumed special effects could be next level. There are legions of other games that could glue people to the TV, and as ‘Hollywood’ seems to be running out of ideas, the gaming solutions could propel them further. And as Sony is also streaming programs to the people, they might look into the games they have. As I see it, plenty of options there. And that is the golden ticket for some. These games are propelling the stages to TV for a lot of actors and actresses, a setting that seemingly has been overlooked. And me? I am still watching whether Shogun will come to Blu-ray in Australia. (And a few more.) So we should expect a new level of interaction between TV and gaming. I wonder who will bring it and what they will bring.

Have a great day.


Filed under Finance, Gaming, Media

Eric Winter is a god

Yup, we are going there. It might not be correct, but that is where the evidence is leading us. You see, I got hooked on The Rookie and watched seasons one through four in a week. Yet the name Eric Winter was bugging me and I did not know why. The reason was simple. He also starred in the PlayStation game ‘Beyond: Two Souls’, which I played in 2013. I liked that game and his name stuck somehow. Yet when I looked up his name, this is what Google gave me.

This got me curious: two of the movies I saw, and Eric would have been too young to be in them. And there is the evidence, presented by Google. Eric Winter, born on July 17th 1976, played alongside Barbra Streisand four years before he was born: evidence of godhood.

And when we look at the character list, there he is. 

Yet when we look at a real movie reference like IMDb.com, we get a different picture.

Yes, that is the real person who was in the movie. We can write this up as a simple error, but that is not the path we are treading. You see, people are all about AI and ChatGPT, but the real part is that AI does not exist (not yet anyway). This is machine learning and deeper machine learning, and this is prone to HUMAN error. If there is only a 1% error rate and we are looking at about 500,000 movies made, that implies that the movie reference alone will contain 5,000 errors (a back-of-the-envelope sketch of that arithmetic follows below). Now consider this on data of all kinds and you might start to see the picture take shape. When it comes to financial data and your advisor is not Sam Bankman-Fried, but Samuel Brokeman-Fries (a fast-food employee), how secure are your funds then? To be honest, whenever I see some AI reference I get a little pissed off. AI does not exist; it was called into existence by salespeople too cheap and too lazy to do their job and explain Deeper Machine Learning to people (my view on the matter). And things do not end here. One source gives us “The primary problem is that while the answers that ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce,” another source gives us issues with capacity, plagiarism and cheating, racism, sexism, and bias, as well as accuracy problems and the shady way it was trained. That is the kicker. A real AI would not need to be trained for this; it would compare the actor’s date of birth with the release of the movie, making The Changeling and What’s Up, Doc? fall into the net of inaccuracy. This is not happening, and the people behind ChatGPT are happy to point at you for handing them inaccurate data, but that is the point of an AI: its circuits should find the inaccuracies and determine the proper result (like a movie list without these two mentions).
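The arithmetic mentioned above is trivial, but it scales unpleasantly; a tiny sketch (the catalogue size is the rough figure used in the paragraph, the error rates are my own picks) makes the point.

```python
# Expected number of bad records = catalogue size x error rate.
CATALOGUE_SIZE = 500_000  # rough number of movies assumed above

for error_rate in (0.001, 0.01, 0.05):
    expected_errors = CATALOGUE_SIZE * error_rate
    print(f"error rate {error_rate:.1%}: ~{expected_errors:,.0f} bad records")
```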

And now we get the source Digital Trends (at https://www.digitaltrends.com/computing/the-6-biggest-problems-with-chatgpt-right-now/), which gave us “ChatGPT is based on a constantly learning algorithm that not only scrapes information from the internet but also gathers corrections based on user interaction. However, a Time investigative report uncovered that OpenAI utilised a team in Kenya in order to train the chatbot against disturbing content, including child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest. According to the report, OpenAI worked with the San Francisco firm, Sama, which outsourced the task to its four-person team in Kenya to label various content as offensive. For their efforts, the employees were paid $2 per hour.” I have done data cleaning for years and I can tell you that I cost a lot more than $2 per hour. Accuracy and cutting costs: give me one real stage where that combination actually worked. Now, the error at Google was a funny one, a bit like Melissa O’Neil (a real Canadian) telling Eric Winter that she had feelings for him (punking him in an awesome way). We can see that this is a simple error, but these are the errors that places like ChatGPT face too. As such, the people employing systems like ChatGPT, which Microsoft is over time staging in Azure (it already seems to be), will get into a massive amount of trouble. It might be speculative, but consider the evidence out there. Consider the errors that you face on a regular basis and consider how highly paid accountants and marketeers lose their jobs over rounding errors. Do you really want to rely on a $2 per hour person to keep your data clean? For this, merely look at the ABC article of June 9th 2023, where we were given ‘Lawyers in the United States blame ChatGPT for tricking them into citing fake court cases’. Accuracy, anyone? Consider that: not merely a court case that was fake, but court cases that were actually invented by the artificial intelligence-powered chatbot.

In the end I liked my version better: Eric Winter is a god. Admittedly not as accurate as reality, but more easily swallowed by all who read it; it was the funny event that gets you through the week.

Have a fun day.


Filed under Finance, IT, Science