
Our lull moments

That happens to all of us: we crave the option of bliss, of inactivity, of moments of calmness, and we find it in different ways. I for one find it in a video game. Not some edge-of-the-seat epic setting, but the Horizon setting, Skyrim, Oblivion, Fallout, the list goes on. And yesterday I saw a list of two dozen games coming to the PS5, and some woke me up. There was of course Wolverine by Insomniac, and I will be waiting for that one, but then one game turned up that I never expected. Styx has, as far as I know, been an Xbox game, and it is an excellent game, stealth of the better variety. And you had better rely on stealth, as you are a four-foot goblin with his trusty knife. What drew me to this game was that every level had several solutions, and you got points for completing it in other ways. It was a lovely time. Now it is coming to the Sony PlayStation and we can rejoice.

More important, there are a few other settings we could consider. One of them is Ryse: Son of Rome. The good parts were that the graphics were really good and the storyline was amazing. There were two downsides: the first was that all combat is massively repetitive, and the second was that you had to defeat several bosses twice, because after the first round the boss completely reset his health bar. I don’t like this, but that might just be me. So as I see it, when you redo the battle setting of Marius Titus, you might have an amazing PlayStation winner. And to consider the funny part: who thought that Frankfurt had more to offer than frankfurters? Crytek GmbH might be the next great thing coming from Germany. OK, that is an exaggeration, but the truth is that Ryse might have died too soon and too small a death. So whilst some might object that it was released 12 years ago, I say ‘be still’: good games overcome systems and generations (Mass Effect and Oblivion, for example), and those are merely two that made the system-generation jump. I think that Ryse could do the same, if the two weaknesses are dealt with.

As far as I see it, everyone is looking at what might be (I do that too at times), but at times I look behind me at what we left, and there is plenty to be had in that direction too. I gave some of this ‘life’ in an IP solution I offered to Saudi Arabia, and I still believe it can work, not merely for the games, but for the two sides of that equation that could propel Saudi Arabia’s gaming and other settings a lot further. Don’t be miffed, Amazon got the same option, but they decided to ignore it whilst they are banking on AI (good luck with that).

So whilst we were given ‘Amazon Pulls AI-Powered Fallout Recap After Getting Key Story Details Wrong’, which comes with “According to The Hollywood Reporter, “Amazon is betting AI can identify key plot points for a series to be synchronized with a voiceover narration and dialogue snippets””, we got settings that are incorrect and incomplete. Amazon needs to realise that this is all programmed, and the programmer might not see what needs validation and verification. They might not know, but the fans will pick up on this instantly. And Engadget gives us ‘Amazon’s AI-generated recap tool didn’t watch Fallout very closely’. This relates to games, because when these people get the AI part ‘working’, they will go over games in that same way, and that is where the blunders start adding up to the folly of people who blindly believe in AI. I mentioned once that 2026 will be the setting of AI court cases, and I was proven correct yet again as we are given ‘CanLII and Caseway AI reportedly moving towards settlement in copyright dispute’, as well as TechCrunch giving us, 8 hours ago, ‘Google and Character.AI negotiate first major settlements in teen chatbot death cases’, merely two cases in the second week of January. So, how many more will follow? Only seven hours ago we were given ‘Musk lawsuit over OpenAI for-profit conversion can head to trial, US judge says’, and all this relates to games, because last November we were given ‘Ubisoft Reveals Teammates – An AI Experiment to Change the Game’, and I reckon it will merely take one slip-up to thwart the statistics of a player, and he will be crying in the lap of some ambulance chaser. A setting I saw coming a mile away, which a few people have experienced if they are stealth players.
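To make that validation-and-verification point concrete, here is a minimal sketch in Python of the kind of gate a recap pipeline could be forced through before anything is published. It is entirely hypothetical (nothing Amazon actually runs), and the fact entries and claim names are hand-written illustrations; the only point is that a generated recap is checked against a human-curated fact sheet, and a single mismatch blocks release.

```python
# Hypothetical sketch: gate an AI-generated recap behind a curated fact sheet.
# The facts below are illustrative entries a human editor would maintain.
CANON = {
    "lucy's father": "Hank MacLean",
    "the ghoul's real name": "Cooper Howard",
}

def verify_claims(claims: dict[str, str]) -> list[str]:
    """Return every recap claim that contradicts or is absent from the fact sheet."""
    failures = []
    for topic, asserted in claims.items():
        expected = CANON.get(topic.lower())
        if expected is None:
            failures.append(f"unverifiable claim about {topic!r}: {asserted!r}")
        elif asserted.strip().lower() != expected.lower():
            failures.append(f"{topic!r}: recap says {asserted!r}, canon says {expected!r}")
    return failures

# A recap with one wrong detail is blocked, not published.
recap_claims = {"Lucy's father": "Hank MacLean", "The Ghoul's real name": "Roger Howard"}
problems = verify_claims(recap_claims)
if problems:
    raise SystemExit("recap blocked:\n" + "\n".join(problems))
```

It is deliberately dumb string matching, but that is the point: validation is a programmed step someone has to decide to include, and any fan with a fact sheet could write it in twenty lines.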

As such, my lull moment gets blown away by some AI character, teammate or not. But that might merely be me. What I do remember is calling out this setting months ago, and now we see two cases being settled, whilst OpenAI is entering the dock for what might cost them a pretty penny. Did those shareholders consider that this might become the destination of their investment?

Have a great day.



Where the BBC falls short

That is the setting I was confronted with this morning. It revolves around a story (at https://www.bbc.com/news/articles/ce3xgwyywe4o) where we see ‘‘A predator in your home’: Mothers say chatbots encouraged their sons to kill themselves’, a mere 10 hours ago. Now I get the caution, because even suicide requires investigation, and the BBC is not the proper setting for that. But we are given “Ms Garcia tells me in her first UK interview. “And it is much more dangerous because a lot of the times children hide it – so parents don’t know.”

Within ten months, Sewell, 14, was dead. He had taken his own life”, with the added “Ms Garcia and her family discovered a huge cache of messages between Sewell and a chatbot based on Game of Thrones character Daenerys Targaryen. She says the messages were romantic and explicit, and, in her view, caused Sewell’s death by encouraging suicidal thoughts and asking him to “come home to me”.” There is a setting here that is of a conflicting nature. Even as we are given “the first parent to sue Character.ai for what she believes is the wrongful death of her son. As well as justice for him, she is desperate for other families to understand the risks of chatbots”, what is missing is that there is no AI; at most it is deep machine learning, and that implies a programmer, what some call an AI engineer. And when we are given “A Character.ai spokesperson told the BBC it “denies the allegations made in that case but otherwise cannot comment on pending litigation””, we are confronted with two streams. The first is that some twisted person took his programming options a little too eager-beaver-like and created a self-harm algorithm, and that leads to two sides: either the firm accepts that, or they pushed him along to create these options and they are covering for him.

CNN on September 17th gave us ‘More families sue Character.AI developer, alleging app played a role in teens’ suicide and suicide attempt’, and it comes with the spokesperson’s “blah blah blah” in the shape of “We invest tremendous resources in our safety program, and have released and continue to evolve safety features, including self-harm resources and features focused on the safety of our minor users. We have launched an entirely distinct under-18 experience with increased protections for teen users as well as a Parental Insights feature”. And it is rubbish, as this required a programmer to release specific algorithms into the mix, and no one is mentioning that specific programmer. So is it a much larger premise, or are they all afraid that releasing the algorithms will lay bare a failing which could directly implode the AI bubble? When we consider the CNN setting, shown with “screenshots of the conversations, the chatbot “engaged in hypersexual conversations that, in any other circumstance and given Juliana’s age, would have resulted in criminal investigation””, it implies that the AI bubble is about to burst, and several players are dead set against that (it would end their careers). And that is merely one of the settings where the BBC fails.

The Guardian gave us on October 30th “The chatbot company Character.AI will ban users 18 and under from conversing with its virtual companions beginning in late November after months of legal scrutiny.” It is seen in ‘Character.AI bans users under 18 after being sued over child’s suicide’ (at https://www.theguardian.com/technology/2025/oct/29/character-ai-suicide-children-ban), where we see “His family laid blame for his death at the feet of Character.AI and argued the technology was “dangerous and untested”. Since then, more families have sued Character.AI and made similar allegations. Earlier this month, the Social Media Law Center filed three new lawsuits against the company on behalf of children who have either died by suicide or otherwise allegedly formed dependent relationships with its chatbots”, and this gives us the simple setting of both “dangerous and untested” and “months of legal scrutiny”. So why did it take months, and why is the programmer responsible for this being ‘protected’ by half a dozen media outlets?
I reckon that the media are unsure what to make of the ‘lie’ they are perpetrating; you see, there is no AI, it is deeper machine learning, optionally with an LLM on the side. And those two are programmed. That is the setting they are all veering away from. The fact that these virtual companions are set on a premise of harmful conversations, with a hypersexual topic on the side, implies that someone is logging these conversations for later (moneymaking) use. And that setting is not one that requires months of legal scrutiny. There is a massive amount of harm going towards people, and some are skating the ice to avoid sinking through whilst they are already knee-deep in water, hoping the ice will support them a little longer. And there is a lot more at the Social Media Victims Law Center, with a setting going back to January 2025 (at https://socialmediavictims.org/character-ai-lawsuits/), where a Character.AI chatbot is described as one “who encouraged both self-harm and violence against his family”, and now we learn that this firm is still operating? What kind of idiocy is this? As I personally see it, the founders of Character Technologies should be in jail, or at least arrested on a few charges. I cannot vouch for Google, so that is up in the air, but as I see it, this is a direct result of the AI bubble being fed amiable abilities, even when it results in the harm of people, and particularly children. This is where the BBC is falling short, and they could have done a lot better. At the very least they could have spent a paragraph or two having a conversation with Matthew P. Bergman, founding attorney of the Social Media Victims Law Center. As I see it, the media skating around that organisation is beyond ridiculous.
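To show how little mystery there is in a ‘safety feature’, here is a minimal sketch of the kind of programmed guardrail the spokesperson’s statement describes. It is hypothetical Python, not Character.AI’s actual code, and the keyword lists are hand-picked stand-ins for the trained classifiers a real system would use. The point is that every branch is a deliberate line someone wrote and reviewed, and leaving one out is equally a human decision.

```python
# Hypothetical sketch of a chatbot safety gate; none of this is Character.AI's
# real code. The marker tuples are illustrative stand-ins for real classifiers.
SELF_HARM_MARKERS = ("kill myself", "end my life", "hurt myself", "self-harm")
EXPLICIT_MARKERS = ("romantic", "explicit", "kiss")  # a classifier in reality

CRISIS_RESOURCE = ("It sounds like you are going through something serious. "
                   "Please reach out to someone you trust or a crisis line.")

def moderate_reply(user_message: str, model_reply: str, user_is_minor: bool) -> str:
    """Gate the model's output behind explicit, human-written safety rules."""
    combined = (user_message + " " + model_reply).lower()
    if any(m in combined for m in SELF_HARM_MARKERS):
        return CRISIS_RESOURCE  # override the model's reply entirely
    if user_is_minor and any(m in model_reply.lower() for m in EXPLICIT_MARKERS):
        return "This conversation is not available."  # hard block for under-18s
    return model_reply
```

The sophistication is beside the point: a self-harm override or an under-18 block only exists if a programmer puts it there, and that programmer is exactly the person the coverage never names.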

So when you are all done crying, make sure that you tell the BBC that you are appalled by their actions and that you require the BBC to put attorney Matthew P. Bergman and the Social Media Victims Law Center in the spotlight (tout de suite, please).

That is the setting I am aggravated by this morning. I need coffee. Have a great day.

