That is the setting I was confronted with this morning. It revolves around a story (at https://www.bbc.com/news/articles/ce3xgwyywe4o) where we see ‘“A predator in your home”: Mothers say chatbots encouraged their sons to kill themselves’, published a mere 10 hours ago. Now I get the caution, because even suicide requires investigation and the BBC is not the proper setting for that. But we are given “Ms Garcia tells me in her first UK interview. “And it is much more dangerous because a lot of the times children hide it – so parents don’t know.”
Within ten months, Sewell, 14, was dead. He had taken his own life”, with the added “Ms Garcia and her family discovered a huge cache of messages between Sewell and a chatbot based on Game of Thrones character Daenerys Targaryen. She says the messages were romantic and explicit, and, in her view, caused Sewell’s death by encouraging suicidal thoughts and asking him to “come home to me”.” There is a setting here that is of a conflicting nature, even as we are given “the first parent to sue Character.ai for what she believes is the wrongful death of her son. As well as justice for him, she is desperate for other families to understand the risks of chatbots.” What is missing is that there is no AI; at most it is deep machine learning, and that implies a programmer, what some call an AI engineer. And when we are given “A Character.ai spokesperson told the BBC it “denies the allegations made in that case but otherwise cannot comment on pending litigation”” we are confronted with two streams. The first is that some twisted person took his programming options a little too eager-beaver-like and created a self-harm algorithm, and that leads to two sides: either they accept that, or they pushed him along to create other options and they are covering for him. CNN on September 17th gave us ‘More families sue Character.AI developer, alleging app played a role in teens’ suicide and suicide attempt’ and it comes with the spokesperson “blah blah blah” in the shape of “We invest tremendous resources in our safety program, and have released and continue to evolve safety features, including self-harm resources and features focused on the safety of our minor users.
We have launched an entirely distinct under-18 experience with increased protections for teen users as well as a Parental Insights feature,” and it is rubbish, as this required a programmer to release specific algorithms into the mix and no-one is mentioning that specific programmer. So is it a much larger premise, or are they all afraid that releasing the algorithms will lay bare a failing which could directly implode the AI bubble? When we consider the CNN setting shown with “screenshots of the conversations, the chatbot “engaged in hypersexual conversations that, in any other circumstance and given Juliana’s age, would have resulted in criminal investigation.”” it implies that the AI bubble is about to burst and several players are dead set against that (it would end their careers), and that is merely one of the settings where the BBC fails. The Guardian gave us on October 30th “The chatbot company Character.AI will ban users 18 and under from conversing with its virtual companions beginning in late November after months of legal scrutiny.” It is seen in ‘Character.AI bans users under 18 after being sued over child’s suicide’ (at https://www.theguardian.com/technology/2025/oct/29/character-ai-suicide-children-ban) where we see “His family laid blame for his death at the feet of Character.AI and argued the technology was “dangerous and untested”. Since then, more families have sued Character.AI and made similar allegations. Earlier this month, the Social Media Law Center filed three new lawsuits against the company on behalf of children who have either died by suicide or otherwise allegedly formed dependent relationships with its chatbots” and this gets the simple setting of both “dangerous and untested” and “months of legal scrutiny”. So why did it take months, and why is the programmer responsible for this ‘protected’ by half a dozen media?
I reckon that the media is unsure what to make of the ‘lie’ they are perpetrating. You see, there is no AI; it is deeper machine learning, optionally with an LLM on the side. And those two are programmed. That is the setting they are all veering away from. The fact that these virtual companions are set on a premise of harmful conversations with a hypersexual topic on the side implies that someone is logging these conversations for later (moneymaking) use. And that setting is not one that requires months of legal scrutiny. There is a massive set of harm going towards people and some are skating the ice to avoid sinking through whilst they are already knee deep in water, hoping the ice will support them a little longer. And there is a lot more at the Social Media Victims Law Center, with a setting going back to January 2025 (at https://socialmediavictims.org/character-ai-lawsuits/) where a Character.AI chatbot was set to one “who encouraged both self-harm and violence against his family”, and now we learn that this firm is still operating? What kind of idiocy is this? As I personally see it, the founders of Character Technologies should be in jail, or at least arrested on a few charges. I cannot vouch for Google, so that is up in the air, but as I see it, this is a direct result of the AI bubble being fed amiable abilities, even when it results in the harm of people and particularly children. This is where the BBC is falling short and they could have done a lot better. At the very least they could have spent a paragraph or two having a conversation with Matthew P. Bergman, founding attorney of the Social Media Victims Law Center. As I see it, the media skating around that organisation is beyond ridiculous.
So when you are all done crying, make sure that you tell the BBC that you are appalled by their actions and that you require the BBC to put attorney Matthew P. Bergman and the Social Media Victims Law Center in the spotlight (tout de suite please).
That is the setting I am aggravated by this morning. I need coffee, have a great day.
I was having a ball this morning. I was alerted to an article that was published 11 hours ago, and that makes all the difference, in particular the setting of me telling all others “Told you so”. So as we start seeing the crumbling reality of a bubble coming to pass, I get to laugh at the people calling me stupid. You see, Ted’s Hardware is giving us (at https://www.tomshardware.com/tech-industry/artificial-intelligence/microsoft-ceo-says-the-company-doesnt-have-enough-electricity-to-install-all-the-ai-gpus-in-its-inventory-you-may-actually-have-a-bunch-of-chips-sitting-in-inventory-that-i-cant-plug-in) the piece ‘Microsoft CEO says the company doesn’t have enough electricity to install all the AI GPUs in its inventory’, so there I was (with a few critical minds) telling you all that there isn’t enough energy to fuel these data centers (like Stargate) and now Microsoft (as I personally see it, king of the losers) is confirming this setting. So do you think this (for now) multi-trillion-dollar company cannot pay its energy bill, or are they scraping the bottom of the energy well? And when we come to think of that, when the globally placed 200,000 people (not just Microsoft) are laid off and there is no energy to fuel their (alleged) AI drive, how far behind is the recession that ends all recessions in America? It might not be the Great Depression, as that left nearly 15 million Americans, or 25% of the workforce, unemployed. But the trickle effects are a lot bigger now and when that much goes overboard, American social security will take a massive beating.
So as I have been stating this lack of energy for months (at least months), we are given “Microsoft CEO Satya Nadella said during an interview alongside OpenAI CEO Sam Altman that the problem in the AI industry is not an excess supply of compute, but rather a lack of power to accommodate all those GPUs. In fact, Nadella said that the company currently has a problem of not having enough power to plug in some of the AI GPUs the firm has in inventory. He said this on YouTube in response to Brad Gerstner, the host of Bg2 Pod, when asked whether Nadella and Altman agreed with Nvidia CEO Jensen Huang, who said there is no chance of a compute glut in the next two to three years.” Oh, didn’t I say so a few times? Oh, yes. On January 31st 2024 I wrote “When the UAE engages with that solution, America will come up short in funds and energy. So the ‘suddenly’ setting wasn’t there. This has been out in the open for up to 4 years. And that picture goes from bad to worse soon enough.” I did so in ‘Forbes Foreboding Forecast’ (at https://lawlordtobe.com/2024/01/31/forbes-foreboding-forecast/), so there is a record, and the setting of energy shortage was visible over a year ago. I even published a few articles on how Elon Musk (he has the IP) could get into that field in a few ways. You see, either you contribute directly, or you remove the overhead of energy, which Elon Musk was in a perfect position to do.
So, when your chickens come home to roost and such agrarian settings, it becomes a party and a half.
And then we get the BS (that stuff that makes grass grow in Texas) setting that follows with ““I think the cycles of demand and supply in this particular case, you can’t really predict, right? The point is: what’s the secular trend? The secular trend is what Sam (OpenAI CEO) said, which is, at the end of the day, because quite frankly, the biggest issue we are now having is not a compute glut, but it’s power — it’s sort of the ability to get the builds done fast enough close to power,” Satya said in the podcast. “So, if you can’t do that, you may actually have a bunch of chips sitting in inventory that I can’t plug in. In fact, that is my problem today. It’s not a supply issue of chips; it’s actually the fact that I don’t have warm shells to plug into.”” It is utter BS (in my personal view) as I predicted this setting over 639 days ago and I am certain that I am not that much more intelligent than that guy who controls Microsoft (aka Satya Nadella) and that is the short and sweet of it. I might be elevated in dopamines at present, but to see Satya admit to the setting I proclaimed for some time gives a rather large rise to the upcoming StarGate settings and the rather large need to give energy to that setting. It is about to become a whole new ballgame.
And as the cookie crumbles, the tech firms and the media will all point at each other, but as I see it, both were not doing their jobs. I am willing to throw this on the pile of shortcomings that courtesans have as they cater to digital dollars, but that song has been played a few times over. And I am slightly too tired (and too energised) to entertain that song. I want to play something new and perhaps a new gaming IP might solve that for me today (likely tomorrow).
A setting we are given, and as we see the admission on Ted’s Hardware, some might actually investigate how much energy they are about to come short on. But don’t fret, these tech companies will happily take the energy due to consumers, as they can afford the new prices, which are likely to be over 10% higher than the previous prices. It is the simple setting of demand and supply. They already fired over 40,000 people (a global expected number), so do you think that they will stop to consider your domestic needs over the bubble they call AI, just to show that they can actually fuel that setting? Gimme a break.
So YouTube has a few videos on surviving life in a setting where there is no energy; if that fails, ask the people in Ukraine. They have been battling that setting for some time.
Time to enjoy my dopamine rush and have a nice walk in the 83-degree-Fahrenheit shade. Makes me think about the hidden meaning behind Fahrenheit 451 by Ray Bradbury. Wasn’t the hidden setting to stop questioning the reality of things and rely on populism? Isn’t that what we see at present? I admit that no books are being burned, but removing them from view is as bad as burning them. Because when the media is ignoring energy needs, what does that spell in the mind of some? So have a great day and see what you can get that does not require electricity.
There was a game in the late 80s; I played it on the CBM64. It was called Bubble Bobble. There was a cute little dragon (the player) and the goal of the game was to pop as many bubbles as you could. So, fast forward to today. There were a few news messages. The first one is ‘OpenAI’s $1 Trillion IPO’ (at https://247wallst.com/investing/2025/10/30/openais-1-trillion-ipo/), which I actually saw last of the three. We see ridiculous amounts of money pass by. We are given ‘OpenAI valuation hits $762b after new deal with Microsoft’ with “The deal refashions the $US500 billion ($758 billion) company as a public benefit corporation that is controlled by a nonprofit with a stake in OpenAI’s financial success.” We see all kinds of ‘news’ articles giving these players more and more money. It’s like watching a bad hand of Texas Hold’em where everyone is in it with all they have. As the information goes, it is part of the sacking of 14,000 employees by Amazon. And they will not see the dangers they are putting the population in. This is not merely speculation, or presumption. It is the deadly serious danger of bubbles bursting and we are unwittingly the dragon popping them.
So the article gives us “If anyone needs proof that the AI-driven stock market is frothy, it is this $1 trillion figure. In the first half of the year, OpenAI lost $13.5 billion, on revenue of $4.3 billion. It is on track to lose $27 billion for the year. One estimate shows OpenAI will burn $115 billion by 2029. It may not make money until that year.” So as I see it, that is a valuation that is 4 years into the future, with a market as liquid as it is? No one is looking at what Huawei is doing or whether it can bolster their innovative streak, because when that happens we will get an immediate write-off of no less than $6,000,000,000,000 and it will impact Microsoft (who now owns 27% of OpenAI), and OpenAI will bank on the western world to ‘bail’ them out, not realising that the actions of President Trump made that impossible and both the EU and Commonwealth are ready and willing to listen to Huawei and China. That is the dreaded undertow in this water.
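Those quoted figures are easy to sanity-check with a back-of-envelope sketch. The dollar amounts below are the ones from the article; the doubling of half-year revenue into a full-year figure is my own naive assumption, nothing more:

```python
# Figures quoted from the 247wallst article; annualisation is a naive doubling.
h1_loss = 13.5e9        # first-half 2025 loss
h1_revenue = 4.3e9      # first-half 2025 revenue

loss_per_dollar = h1_loss / h1_revenue        # dollars lost per dollar earned, ~3.14
annual_revenue = 2 * h1_revenue               # crude full-year revenue estimate
valuation = 1.0e12                            # the floated $1 trillion IPO figure

revenue_multiple = valuation / annual_revenue # valuation as a multiple of revenue

print(f"Loses ${loss_per_dollar:.2f} for every $1 of revenue")
print(f"Valuation is {revenue_multiple:.0f}x annualised revenue")
```

A company losing over three dollars for every dollar it earns, valued at over a hundred times its revenue, is exactly the froth the article describes.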
All whilst the BBC reports “Under the terms, Microsoft can now pursue artificial general intelligence – sometimes defined as AI that surpasses human intelligence – on its own or with other parties, the companies said. OpenAI also said it was convening an expert panel that will verify any declaration by the company that it has achieved artificial general intelligence. The company did not share who would serve on the panel when approached by the BBC.” And there are two issues already hiding under the shallows. The first is data value. You see, data that cannot be verified or validated is useless and has no value, and these AI chasers have been so involved in the settings of the so-called hyped technology that everyone forgets that it requires data. I think that this is a big ‘oopsy’ part in that equation. And the setting that we are given is that it is pushed into the background, all whilst it needs to have a front and centre setting. You see, when the first few class actions are thrown into the ring, lawyers will demand the algorithm and data settings, and that will scuttle these bubbles like ships in the ocean, and the turmoil of those waters will burst the bubbles and drown whomever is caught in that wake. And be certain that you realise that lawyers on a global setting are at this moment gearing up for that first case, because it will give them billions in class actions, and leave it to greed to cut this issue down to size. Microsoft and OpenAI will banter, cry and give them scapegoats for lunch, but they will be out in front and they will be cut to size. As will Google, and optionally Amazon and IBM too.
I already found a few issues in Google’s setting (actors staged into a movie before they were born is my favourite one) and that is merely the tip of the iceberg. It will be bigger than the one sinking the Titanic and it is heading straight for the Good Ship Lollipop(AI). The spectacle will be quite a sight, all the media will hurry to get their pound of flesh, and Microsoft will be massively exposed at that point (due to previous actions).
A setting that is going to hit everyone. And the second setting is blatantly ignored by the media. You see, these data centers, how are they powered? As I see it, the Stargate program will require (my admittedly inaccurate estimate) multiple gigawatts, a massive amount of power. The people in West Virginia are already complaining about what there is, and a multiple of that will be added all over the USA; the UAE and a few other places will see them coming and these power settings are blatantly short. The UAE is likely close to par and that sets the dangers of shortcomings. And what happens to any data center that doesn’t get enough power? Yup, you guessed it, it will go down in a hurry. So how is that fictive setting of AI dealing with this?
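For what it is worth, that "multiple gigawatts" guess can be made explicit. Every input in this sketch is an assumption of mine (the accelerator count, roughly a kilowatt of draw per installed accelerator including its share of the server, and a PUE of 1.3 to cover cooling and conversion losses); none of it comes from any Stargate document:

```python
# All inputs are illustrative assumptions, not published Stargate figures.
gpus = 1_000_000        # assumed accelerator count for a Stargate-class buildout
watts_per_gpu = 1_000   # assumed draw per accelerator incl. its share of the server
pue = 1.3               # assumed power usage effectiveness (cooling, losses)

total_watts = gpus * watts_per_gpu * pue
total_gigawatts = total_watts / 1e9   # ~1.3 GW of continuous demand

print(f"Roughly {total_gigawatts:.1f} GW of continuous power")
```

Even with these deliberately round numbers you land at more than a gigawatt of continuous demand, which is the output of a full-sized power station, and that is one site.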
Then we get a new instance (at https://cyberpress.org/new-agent-aware-cloaking-technique-exploits-openai-chatgpt-atlas-browser-to-serve-fake-content/) where we are given ‘New Agent-Aware Cloaking Technique Exploits OpenAI ChatGPT Atlas Browser to Serve Fake Content’. As I personally see it, I never considered that part, but in this day and age the need to serve fake content is as important as anything; it serves the millions of trolls and the influencers in many ways, and it degrades the data that is shown to the DML and LLMs (aka NIP) in a hurry, reducing data credibility and other settings pretty much off the bat.
So what is being done about that? As we are given “The vulnerability, termed “agent-aware cloaking,” allows attackers to serve different webpage versions to AI crawlers like OpenAI’s Atlas, ChatGPT, and Perplexity while displaying legitimate content to regular users. This technique represents a significant evolution of traditional cloaking attacks, weaponizing the trust that AI systems place in web-retrieved data.” So where does the internet go after that? So far I have been able to get the goods with the Google browser and it does a fine job, but even that setting comes under scrutiny; unless they set a parameter in their browser to only look at Google data, they are in danger of floating rubbish at any given corner.
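For those curious what agent-aware cloaking amounts to mechanically, here is a deliberately simplified sketch. GPTBot and PerplexityBot are user-agent tokens those crawlers are known to send, but the marker list and the payload strings here are illustrative, not the actual exploit code:

```python
# Illustrative sketch of user-agent based cloaking; not the actual exploit.
AI_CRAWLER_MARKERS = ("GPTBot", "ChatGPT-User", "OAI-SearchBot", "PerplexityBot")

def serve_page(user_agent: str) -> str:
    """Return different page content depending on who appears to be asking."""
    if any(marker in user_agent for marker in AI_CRAWLER_MARKERS):
        # The crawler (and so the chatbot's answer) gets the planted version.
        return "CLOAKED: fabricated claims planted for AI summarisation"
    # A human visitor in a normal browser gets the legitimate page.
    return "REAL: the ordinary page a human visitor sees"

print(serve_page("Mozilla/5.0 (compatible; GPTBot/1.1)"))
print(serve_page("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))
```

The mirror-image defence is just as simple in principle: fetch the same URL once with a crawler user agent and once with a browser user agent, then diff the two responses.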
A setting that is now out in the open, and as we are ‘supposed’ to trust Microsoft and OpenAI until 2029, we are handed an empty eggshell and I am in doubt of it all, as too many players have ‘dissed’ Huawei and they are out there, ready to show the world how it could be done. If they succeed, that $1 trillion IPO is left in the dirt and we get another two years of Microsoft spin on how they can counter that. I put that in the same collection box as the claim that Microsoft allegedly had its own, more powerful item that could counter Unreal Engine 5. That collection box is in the kitchen and it is referred to as the trashcan.
Yes, this bubble is going ‘bang’ without any noise, because the vested-interest partners need to get their money out before it is too late. And the rest? As I personally see it, the rest is screwed. Have a great day; the weekend started for me and it will start in 8 hours in Vancouver (but they can start happy hour in about one hour), so they can start the weekend early. Have a great one and watch out for the bubbles out there.
Well, that is the setting we are given; however, it is time to give them some relief. It isn’t just Microsoft; Google and all the other peddlers handing over AI like it is a decent brand are involved. So the BBC article (at https://www.bbc.com/news/articles/c24zdel5j18o) giving us ‘Microsoft boss troubled by rise in reports of ‘AI psychosis’’ is a little warped. First things first: what is psychosis? We are given “Psychosis refers to a collection of symptoms that affect the mind, where there has been some loss of contact with reality. During an episode of psychosis, a person’s thoughts and perceptions are disrupted and they may have difficulty recognizing what is real and what is not.” Basically the settings most influencers like to live by. Many do this already, for the record. The media does this too.
As such, people are losing grip on reality. So as we see the malleable setting that what we see is not real, we get the next setting. As people lived by the rule of “I’ll believe it when I see it” for decades, this becomes a shifty setting. So whilst people want to ‘blame’ Microsoft for this, as I see it, the use of NIP (Near Intelligent Parsing) is getting a larger setting. Adobe, Google, Amazon: they are all equally guilty.
So how far does the media take this?
I’ll say, this far.
But back to the article. The article also gives us “In a series of posts on X, he wrote that “seemingly conscious AI” – AI tools which give the appearance of being sentient – are keeping him “awake at night” and said they have societal impact even though the technology is not conscious in any human definition of the term.” I respond that when you give any IT technology a level 8 question (user level) and it responds as if the answer is casually true, it isn’t. It comes from my mindset that states: if sarcasm bounces back, it becomes irony.
So whilst we see that setting in ““There’s zero evidence of AI consciousness today. But if people just perceive it as conscious, they will believe that perception as reality,” he wrote. Related to this is the rise of a new condition called “AI psychosis”: a non-clinical term describing incidents where people increasingly rely on AI chatbots such as ChatGPT, Claude and Grok and then become convinced that something imaginary has become real.” It is kinda true, but the most imaginative setting of the use of Grok tends to be
I reckon we are safe for a few more years. And whilst we pore over the essentials of TRUE AI, we tend to have at least two decades, and even then only the really big players can afford it. As such, there is a chance the first REAL AI will respond with “我們可以為您提供什麼協助?” (“How may we assist you?”). As I see it, we are safe for the rest of my life.
So whilst we consider “Hugh, from Scotland, says he became convinced that he was about to become a multi-millionaire after turning to ChatGPT to help him prepare for what he felt was wrongful dismissal by a former employer.” Consider that law firms and most advocates give free initial advice; they want to ascertain whether it pays for them to go that way. So whilst we are told that it doesn’t pay, a real barrister will see whether a case is baseless, trivial or too hard to prove. And he will give you that answer. And that is the reality of things. Considering ChatGPT any kind of solution makes you eligible for the Darwin award. It is harsh, but that is the setting we are now in. It is the reality of things that matter and that is not on any of these handlers of AI (as they call it). And I have written about AI several times, so if it didn’t stick, it’s on you.
Have a great day and don’t let the rain bother you, just fire whomever in media told you it was gonna rain and get a better result.
That is the setting I just ‘woke’ up from. A fair warning that this is all PURE speculation. There are no hidden traps, there is no revelation at the end. All this is speculation.
You see, some will recall the Builder.ai setting and there we see “Builder.ai was a smartphone application development company which claimed to use AI to massively speed up app development. The company was based mostly in the United Kingdom and the United States, with smaller subsidiaries in Singapore and India.” At this time we are given “The real catalyst wasn’t technical failure — it was financial mismanagement. According to reports, Builder.ai was involved in a round-trip billing scheme with one of its partners. Essentially, they were allegedly booking fake revenue to make the business look healthier than it was.” And the fact that Microsoft was duped here makes it hilarious. But was it? You see, as I see it, AI doesn’t exist (not yet at least), so this setting didn’t make sense; it still doesn’t. Apart from the fact that there were 700 engineers involved (which made the setting weird to say the least) and that was set in a larger space. But what if there was no ‘loss’ for Microsoft? What if Builder did exactly what was required of them? When I got that thought, another popped up. What if this setting was a mere pilot? You see, there are data issues (all over the place) and Microsoft knows this. What if these 700 engineers were setting the larger premise? What if this is the premise that Sam Altman needs? What if this is the enablement that both Sam Altman and Satya Nadella need? What if that setting isn’t merely data, but programmers? What if OpenAI is capturing all the work created by programmers? You see, data can be collected; capturing the work of programmers is a little different, and OpenAI at present gets “OpenAI is set to hit 700 million weekly active users for ChatGPT this week”. As far as I can tell, 90% is simple rubbish, but that 10% are setting their fingerprints on the programming of the future. And whilst this is going on, the ChatGPT funnels are working overtime.
As such these programmers are pushing themselves out of a job (well, not exactly; they still have jobs in several places), but the winner here is team Altman/Nadella. They are about to clean house and when the bulk of the programmers is captured, automated program settings are realised. It isn’t AI, but the people will treat it as such. And this setting is really brilliant. We all contributed to a new version of Near Intelligent Parsing. One that has the frontlines of the crowds, millions of them. And no-one is the wiser as such.
Perhaps some are and they do not care. They will have their own positions on this all, and the setting will regurgitate their logic, and as such they will be the cash makers in the house. So, we are pricing ourselves out of a job, out of many jobs. But as I said, this is merely speculative and I have no evidence of any kind. Yet this is the setting I see coming.
Now, let’s see if I can dream lovely dreams involving a lovely lady, not a Grok-imagined lady of the night. You know what I mean; Twitter is filled with them at present.
Have a great day, it’s 5:00 in the morning in Vancouver, I’m almost seeing Monday morning, less than 2 hours to go.
That is the setting we need to move towards and that moment will be now. It started with a simple setting, the map of Europe and the alleged accusation is seen below.
I cannot vouch for the setting, but as you see, in most languages it makes little sense. So when any AI fumbles the ball (and that WILL happen) the damage will be a lot bigger. We hear all these ‘delusional denials’ like ‘We will prepare for that’ and ‘it can’t happen to us’; you merely need to look at the Builder.ai setting and how they used 700 engineers to allegedly ‘fool’ Microsoft, who backed it to the extent of a billion dollars plus. So when the ‘bigger’ players also get caught with their pants on their ankles, we will have a totally new setting. As such I thought of going back to the roots of technology. Optionally as an educational setting, an optional simulator to inspire the young to think and become creative for themselves without any AI system fumbling their thinking patterns. It might not be the most eloquent setting, but creativity cannot be set in AI, as AI doesn’t exist (yet), and before it is too late, we need to create other outlets for creativity to emerge. I still like the setting that Ubisoft gave us with Assassin’s Creed Origins. In one of the expansions you are taken on the Tours: Beer & Bread. It shows that the Egyptians ‘perfected’ the fermentation process. In my youth (a very long time ago) I went to the Open-air Museum in Arnhem (Netherlands) and one building there still reverberates in my mind over half a century later. It was a paper mill.
On the outside it doesn’t seem like much, a lot like a really old building, but that is the hidden part. Inside there is a completely operational paper mill and it is fueled by waterpower. Now you might think that this is too old.
But consider that Nobel invented dynamite for the simple needs of mining, and apparently Viagra had a completely different initial stage. It takes one mind to think “What if we did this?” and that is the ball game. That is the setting that creates new technologies. We need to get back to the old ways. And I use the paper mill as an example. Consider the Amish (all over America) who have been doing it their way for centuries. Consider how they have no fridges, or non-electrical ones. We need to reconsider what we know and what is possible without some idiot telling us how to do it, because these people will come out of the woodwork pretending to voice the deities they pretend to follow (for their personal good).
Consider that paper mill and what to do when the water stops flowing. A wind vane? Giving people the idea to take the next step. And at some point power will become an issue. We see new ways to tarmac roads, making them safer; the Netherlands is exploring illuminating forms of tarmac, making electricity less of an essential need. We see all kinds of innovations, and just as you think it is all covered, consider that a lawyer in Australia ‘relied’ on ChatGPT (as one source stated) to phrase legal filings and it cited non-existent cases. So how do you like your chestnuts boiled in that gravy?
The one option is to revert to earlier settings and consider what is possible without others telling us what to do. A lot will not work, but some will be true innovative steps. And that is the ballgame. As what some call AI is telling us where to go, and especially where not to go, we lose the creativity we have, or merely fashion it in the way others want it to be fashioned.
That is not innovation, that is pack mentality.
So what stages in other fields were cut short, because they never supported the then innovative choosers? We need to protect ourselves and the evidence is all over the historical buildings. The Romans had two-tiered bathhouses making hot water. So even as we now think that we do better, consider what happens when electricity falls away because 500,000 systems took it away fueling their AI systems, taking over 250,000 times more energy than one simple brain does.
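To put that brain comparison in plain numbers: a human brain runs on roughly 20 watts (a commonly cited estimate), and the 250,000× and 500,000 figures are the ones used above, so the rest is simple arithmetic:

```python
# The 20 W brain figure is a common estimate; the 250,000x multiplier and the
# 500,000 systems are the numbers used in the text, not measured values.
brain_watts = 20
factor = 250_000
systems = 500_000

per_system_watts = brain_watts * factor              # 5,000,000 W = 5 MW per system
total_gigawatts = per_system_watts * systems / 1e9   # toy total across all systems

print(f"Each system: {per_system_watts / 1e6:.0f} MW")
print(f"All {systems:,} systems: {total_gigawatts:,.0f} GW")
```

Even if those multipliers are off by an order of magnitude, the conclusion stands: household supply is what gets squeezed first.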
We need to protect what is and what was, before others remove that way of thinking from us, and we can go about it in different ways; I reckon that none of them are incorrect. Another example can be seen in the old pyramids. We were given (on YouTube) “Ancient Egyptian “pyramid basalt roads” refer to a network of paved roads, including the world’s oldest known paved road, that connected basalt quarries in the Fayum region to the pyramid fields like Giza. These roads, often paved with sandstone, limestone, and even petrified wood, were used to transport massive basalt blocks, likely for paving the pyramid complexes and temples. One significant road, leading from the Widan el-Faras quarry to the shores of a now-vanished lake, represents a major engineering feat from the Old Kingdom period.” I don’t believe the hype behind it, but these roads and pavements are massive undertakings that even today are unlikely to be this perfect, apart from the setting that they seemingly lacked the tools to create these slabs and make them fit this perfectly. I am not all on board with this, but like the Game of Thrones ‘Wildfire’ we see that this reflects on what was Greek Fire, and it came from Byzantium. “With the decline of the Byzantine Empire, their recipe for the production of liquid fire was lost, the last documented use of Byzantine fire was in 1187. After Constantinople fell to the Ottomans, several attempts to imitate the Greek Fire were made, but none replicated the original.” So something created 1000 years ago can no longer be reproduced? I reckon that this is one of the most direct forms of creativity lost. And the fact that it has military applications implies that plenty of governments tried to get it on their side.
As such I think we need to create genuine systems to invoke creativity in the next generation before it is all lost and we all go ‘Duh!’ at the next innovation, blaming it on magic, and as Vernon Dursley once said, “there is no such thing as magic”. As I see it, magic is blamed when we no longer comprehend the technology (like the White House and 5G technology, which comes with a small giggle from me).
So the short setting is: protect the next generation now, as there is no longer any later.
That is at times the saying; it isn’t always ‘meant’ in a positive light, and it is for you to decide what it is now. The Deutsche Welle gave me yesterday an article that made me pause. It was in part what I have been saying all along. This doesn’t mean it is therefore true, but I feel that the tone of the article matches my settings. The article (at https://www.dw.com/en/german-police-expands-use-of-palantir-surveillance-software/a-73497117) giving us ‘German police expands use of Palantir surveillance software’ doesn’t seem too interesting for anyone but the local population in Germany. But that would be erroneous. You see, if this works in Germany, other nations will be eager to step in. I reckon that the Dutch police might be hoping to get involved from the earliest notion. The British and a few others will see the benefit. Yet, what am I referring to?
It sounds like there is more, and there is. The article’s byline gives us the goods. The quote is “Police and spy agencies are keen to combat criminality and terrorism with artificial intelligence. But critics say the CIA-funded Palantir surveillance software enables “predictive policing.”” It is the second part that gives the goods. “Predictive policing” is the term used here and it supports my thoughts from the very beginning (at least 2 years ago). You see, AI doesn’t exist. What there is (DML and LLM) are tools, really good tools, but it isn’t AI. And it is the setting of ‘predictive’ that takes the cake. You see, at present AI cannot make real jumps, cannot think things through. It is ‘hindered’ by the data it has, and that is why at present its track record is not that great. And there are elements all out there: there is the famous Australian case where an “Australian lawyer caught using ChatGPT filed court documents referencing ‘non-existent’ cases”, there is the simple setting where an actor was claimed to have been in a movie before he was born, and the list goes on. You see, AI is novel, new, and players can use AI for the blame game. With DML the blame goes to the programmer. And as I personally see it, “predictive policing” is the simple setting that a reference is only made when it has already happened. In layman’s terms: get a bank robber trained in Grand Theft Auto, and the AI will not see him, as he has never done this before. The AI goes looking in the wrong corner of the database and it will not find anything. It is likely he can only get away with this once, and the AI in the meantime will accuse any GTA persona that fits the description.
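To make that layman's explanation concrete, here is a minimal, purely illustrative sketch of a history-based matcher. All names and records are invented; the point is only that a system that predicts from recorded priors can, by construction, only surface people who already appear in the database for that crime type.

```python
# Hypothetical sketch: a history-based "predictive" matcher.
# Records are invented for illustration only.

records = [
    {"name": "A", "offences": ["grand theft auto"]},
    {"name": "B", "offences": ["grand theft auto", "burglary"]},
    {"name": "C", "offences": ["fraud"]},
]

def predict_suspects(records, crime_type):
    """Return everyone with a prior offence of this type - and nothing more."""
    return [r["name"] for r in records if crime_type in r["offences"]]

# A first-time bank robber has no "bank robbery" entry, so the system
# finds nobody - while every GTA persona stays a candidate for car theft.
print(predict_suspects(records, "bank robbery"))      # []
print(predict_suspects(records, "grand theft auto"))  # ['A', 'B']
```

The first query comes back empty and the second comes back full; that is the wrong corner of the database in eight lines.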
So why this? The simple truth is that the Palantir solution will save resources and that is in play. Police forces all over Europe are stretched thin and they (almost desperately) need this solution. It comes with a hidden setting that all data requires verification. DW also gives us “The hacker association Chaos Computer Club supports the constitutional complaint against Bavaria. Its spokesperson, Constanze Kurz, spoke of a “Palantir dragnet investigation” in which police were linking separately stored data for very different purposes than those originally intended.” I cannot disagree (mainly because I don’t know enough), but it seems correct. This doesn’t mean that it is wrong, but there are issues with verification and with the stage of how the data was acquired. Acquired data doesn’t mean wrong data, but it does leave the user with optionally wrong connections between what the data is showing and what that sight is based on. This requires a little explanation.
Let’s take two examples. In example one we have a people database and phone records. They can be matched so that we have links.
Here we have a customer database. It is a cumulative phonebook: all the numbers from when Herr Gothenburg got his fixed-line connection with the first phone provider until today. As such we have multiple entries for every person, and in addition there is the second setting that their mobiles are also registered. As such the first person moved at some point and either has two mobiles, or he changed mobile provider. The second person has two entries (seemingly all the same), moved to another address, and as such got a new fixed line; he has one mobile. It seems straightforward, but there is a snag (there always is). The snag is that entry errors are made and there is no real verification; this is implied with customer 2. The other option is that this was a woman who got married, as such she had a name change and that is not shown here. The additional issue is that Müller (miller) is shared by around 700,000 people in Germany, so there is a likelihood that wrongly matched names are found in that database. The larger issue is that these lists are mainly ‘human’ checked and as such they will have errors. Something as simple as a phonebook will have its issues.
Then we get the second database, which is a list of fixed-line connections, the place where they are connected and which provider. So we get additional errors introduced; for example, customer 2 is seemingly assumed to be a woman who got married and had her name changed. When was that? In addition there is a location change, something that the first database does not support, as well as the fact that she changed her fixed line to another provider. So we have 5 issues in this small list, and this is merely from 8 connected records. Now, DML can be programmed to see through most of this and that is fine. DML is awesome. But consider what some call AI being run on unverified (read: error-prone) records. It becomes a mess really fast, it will lead to wrong connections, and optionally innocent people will suddenly get a request to ‘correct’ what was never correctly interpreted.
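The two-database example can be sketched in a few lines. This is a naive illustration with invented names and numbers, not how any real system links records, but it shows how a match on a common surname like Müller produces false connections.

```python
# Hypothetical sketch of naive record linkage between a cumulative
# phonebook and a fixed-line register. All names and numbers invented.

phonebook = [
    {"name": "Müller",  "city": "Berlin",  "line": "030-111"},
    {"name": "Müller",  "city": "Hamburg", "line": "040-222"},
    {"name": "Schmidt", "city": "Berlin",  "line": "030-333"},
]

fixed_lines = [
    {"name": "Müller", "city": "Berlin", "provider": "A"},
]

def naive_link(a, b):
    """Link records purely on surname - the shortcut that wrongly
    connects unrelated people who happen to share a common name."""
    return [(x, y) for x in a for y in b if x["name"] == y["name"]]

links = naive_link(phonebook, fixed_lines)
# Two matches come back, but only the Berlin Müller actually holds
# this fixed line; the Hamburg Müller is a wrong connection.
print(len(links))  # 2
```

One common surname and one unverified join, and the wrong person is already in the result set; with 700,000 Müllers the problem only scales up.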
As such we get a darker taint of “predictive policing” and the term that will come to all is “Guilty until proven innocent”, a term we never accepted and one that comes with hidden flaws all over the field. Constanze Kurz makes a few additional settings, settings which I can understand, but I am also hindered by my lack of localised knowledge. In addition we are given “One of these was the attack on the Israeli consulate in Munich in September 2024. The deputy chairman of the Police Union, Alexander Poitz, explained that automated data analysis made it possible to identify certain perpetrators’ movements and provide officers with accurate conclusions about their planned actions.” It is possible and likely that this happens, and there are intentional settings that will aid, optionally a lot quicker than not using Palantir. And Palantir can crunch data 24/7; that is the hidden gem in this. I personally fear that unless an accent on verification is made, the danger becomes that this solution turns a lot less reliable. On the other hand, data can be crunched whilst the police force is snoring the darkness away, and they get a fresh start with results in their inbox. There is no doubt that this is the gain for the local police force and that is good (to some degree). As long as everyone accepts and realises that “predictive policing” comes with soft spots and unverifiable problems; and I am merely looking at the easiest setting. Add car rental data with errors from handwriting and you have a much larger problem. Add the risk of a stolen or forged driver’s license and “predictive policing” becomes the Achilles heel that the police weren’t ready for, and with that this solution will give the wrong connections, or worse, not give any connection at all. Still, Palantir is likely to be a solution, if it is properly aligned with its strengths and weaknesses. As I personally see it, this is one setting where the SWOT solution applies.
Strengths, Weaknesses, Opportunities, and Threats are the settings any Palantir solution needs, and as I personally see it, Weaknesses and Threats require their own scenario in the assessment. Politicians are likely to focus on Strengths and Opportunities and diminish the danger that the other two elements bring. Even as DW gives us “an appeal for politicians to stop the use of the software in Germany was signed by more than 264,000 people within a week, as of July 30.” Yet if 225,000 of these signatures are ‘career criminals’, Germany is nowhere at present.
Have a great day. People in Vancouver are starting their Tuesday breakfast and I am now a mere 25 minutes from Wednesday.
That is the setting as I personally believe it to be. The problem isn’t me, the problem is that politicians are clueless and as such the people will end up suffering. As we get the article (at https://www.theguardian.com/technology/2025/jul/30/zuckerberg-superintelligence-meta-ai) telling us ‘Zuckerberg claims ‘super-intelligence is now in sight’ as Meta lavishes billions on AI’, the dwindling situation is overlooked. This is not on Meta or on Mark the innovator Zuckerberg; well, perhaps it is a little on him. But consider the setting of “Whether it’s poaching top talent away from competitors, acquiring AI startups or proclaiming that it will build data centers the size of Manhattan, Meta has been on a spending spree to boost its artificial intelligence capabilities for months now”. So, what are you missing? It is easy to miss it, and unless you are savvy in data, there is absolutely no blame on you. I will blame politicians passing the buck to a pile that has no representation, and I do see that the political mind is merely ‘money savvy’; it does not have an alleged clue on data verification. There is a second point; it was given to me by someone (I don’t remember who) who gave us “All AI startups are their own shells linking to ChatGPT”. I see the wisdom of that, but I never investigated it myself. You see, all these shells have issues with verification, and these startups don’t have the resources to properly verify the data they have, so you end up having a bucket with badly arranged and mislinked data. You would think that if they all link to ChatGPT it is a singular issue, but it is not. Language is one side, interpretation of what is, is another side, and these are merely two sides in a much larger issue. And hiding behind “build data centers the size of Manhattan” is nothing else than a massive folly. You see, what will power this?
Most places in this world have a clear shortage of power, and any data centre relying on power that isn’t there will crash with some regularity; these data links are maintained in real time, so links will go wrong again and again. And that link is seen by ‘some’ as “A new study of a dozen A.I. -detection services by researchers at the University of Maryland found that they had erroneously flagged human-written text as A.I. -generated about 6.8 percent of the time, on average”; that implies that roughly 1 in 15 statements is riddled with errors, and there is no way around it until the verification passes are sorted out. Consider that one source gives us “monthly searches to more than 30.4 million during the last month”; this gives us that AI events resulted in roughly 2,067,200 possible erroneous results (6.8% of 30.4 million), and when that happens to something that was essential to your needs? When technical support and customer care fail because the numbers aren’t right? How long will you remain a customer? That is the folly I am foreseeing, and when all these firms (like Microsoft) are done shedding their people and they realise that the knowledge they actually had was pushed out of the side door? Where does this leave the customers? Will they remain Microsoft, Amazon, IBM or Google customers? This is about to hit nearly every niche in American business. The ones that held on to their people’s knowledge base tend to be decently safe, but the resources needed to clean up the mess that this created will scuttle the European and American economies, as they overextended on the narrative they spun themselves, and when reality catches up, these people will see the dark light of a self-created nightmare.
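The back-of-the-envelope arithmetic behind those numbers is simple enough to check: a 6.8% false-flag rate applied to 30.4 million monthly events lands at roughly two million possible errors a month.

```python
# Back-of-the-envelope check: a 6.8% false-flag rate applied to
# 30.4 million monthly events.

false_flag_rate = 0.068        # "about 6.8 percent of the time, on average"
monthly_events = 30_400_000    # "more than 30.4 million during the last month"

expected_errors = false_flag_rate * monthly_events
print(f"{expected_errors:,.0f}")  # 2,067,200
```

Roughly one event in fifteen; applied to anything essential, that is a very large pile of wrong answers per month.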
So in retrospect consider “Behind the hype of Microsoft backing and a $1B+ valuation, the company reportedly inflated numbers, burned through ~$450M funding, and collapsed into insolvency.” This setting was hyped on every channel and praised as a solution. It took less than a year to go from a billion to naught. How many even have a billion? That Microsoft backed it implies that they were unaware of how exposed they were, driven by a simple setting that should have been verified before they even backed it to over $1,000,000,000.
Now, we can feel sorry for Zuckerberg, not for the money, he probably has more in his wallet, but the ones wanting in on such a ‘great endeavor’ are bound to lose everything they own. This is a very slippery slope, and governments seeing what some call AI as a solution to solve an expensive setting in a cheap way are likely to lose the ownership of data of their entire population, and these systems do not care who the owner is, they copy EVERYTHING. So where will that data end up going? I wonder who looked at the ownership of collected data and all the errors it has within itself.
The fear is not what it costs; for billions of people the fear is where their information will end up. These politicians sell ‘sort of solutions’ which they cannot back with facts, and in the end it will end up being the problem of a software engineer, a setting that was too complicated to understand for any politician who was too eager to put his name under this and will merely shrug, saying ‘I’m sorry’, whilst he exits through a side door, his personal wallet filled to the brim, to a zero-tax nation with a non-extradition treaty.
A setting we will see the media repeat time after time without seriously digging into the mess, as they told us “Wall Street investors are happy with the expensive course Zuckerberg is charting. After the company reported better-than-expected financial results for yet another quarter, its stock soared by double digits.” All whilst the statement “Zuckerberg did not provide any details of what would qualify as “super-intelligence” versus standard artificial intelligence, he did say that it would pose “novel safety concerns”. “We’ll need to be rigorous about mitigating these risks and careful about what we choose to open source,”” is trivialised to the largest degree, and in all this there is no setting of verification. Weird, isn’t it?
So feel free to enjoy your cub of toffee and don’t worry about the jacked setting of demonstration, which was tracked by the original AI as “enjoy your cup of coffee and don’t worry about the impact of verification”, because that is the likely heading of the coming super-intelligence.
That is where I saw myself, thinking of a near-forgotten great. It was 1935 and a relatively unknown director (in those days); it was his second movie for a new firm, and that movie, those settings, have been burned into my memory for over half a century. It takes a lot for something like that to happen to anyone. I am talking about The 39 Steps. Even in 1935 the dangers of industrial espionage were seen as monumental, and today this is worse. Based on the book by John Buchan, 1st Baron Tweedsmuir, the movie is seen by many that matter as an absolute masterpiece (one of them is Orson Welles). As such I see a setting where someone can sort of rewrite the story to be more contemporary. The indicative quote given in the movie is “The 39 Steps is an organization of spies collecting information on behalf of the foreign office of…” and we never heard the end, because Mr Memory was shot at that point during the public performance. The act out of clear fear is a setting that should not be underestimated. Now, I would love to have a bite at that, but I already have three more running originals: one is a miniseries, one is a story in three seasons, and one is an open endeavour spanning 3-4 seasons for now. As such my hands are full and the first work hasn’t even been sold yet. That one is a movie meant for Arabic streaming channels. As such, I need to hand it over to someone who feels frisky enough to go up against a great like Alfred Hitchcock. Trying to equal this masterpiece is already a herculean task, surpassing it will be close to impossible, but do try, I challenge you.
Consider the settings we have now: NIP (Near Intelligent Parsing), ‘AI’ advertisements by Facebook (or Meta), and that is merely the start. We have woke ‘idiots’ and we have religious nuts; take in measure the settings of a political administration that shoots itself in the foot, the inability to act against Russia, and everyone considering the yellow peril (aka China) to be setting the new frontier. All elements that can make a massive impact in a storyline.
So consider the IP that is starting to make waves (Hyperloop, AI (aka NIP)) and how it is intertwining in western, eastern and Arabic settings. If that doesn’t make for a compelling story, it is out of my hands. Oh, and before you think it is merely governments, consider the settings that Google, Microsoft, Meta, TikTok and Huawei take on the global stage. And they all want the same thing whilst aiming for similar goals.
I think there is enough space for a rewrite of The 39 Steps; politics, business and technology are setting the stage, and all want to ‘enable’ empowering that setting, even as the 1935 original was merely implying it. Consider that we were given this month “Chinese theft of American IP currently costs between $225 billion and $600 billion annually.” Yet that is a two-way stream. As I see it, the west has close to nothing to counter the innovations of Huawei, and there is more. So what happens when a ‘dedicated’ corporation merely sets its goals towards profits and becomes the axis of all this? A sort of SMERSH in real life (like the one Bond faced in the ’60s), but there is no need for Mr Memory. So what happens when data sets are given to OpenAI (or ChatGPT) and that system links (falsely or not) the parts that matter? What happens to the overseers of such a system? I am merely opening doors for someone to pick up the quill and parchment (a laptop is so passé), but the idea comes across, I hope. And considering last week’s news of the alleged hacks by Violet Typhoon, this movie plot could thicken.
So what happens when the conclusion is the opening setting, the middle is the start, and during the movie you get start-to-middle in segments that move towards the conclusion of someone getting to the end of the story? My idea is that this could make a magnificent movie with a woman in the lead. Perhaps Florence Pugh, Jenna Ortega, Sydney Sweeney, Anya Taylor-Joy, Saoirse Ronan, or Elle Fanning could be cast. You see, this needs to be a ‘younger’ actress, as I see it under 30. A side story would be that she is into climbing, a loner driven to succeed in her IT/consultancy job. Oh, and these are merely actresses I know of; there are plenty of actresses that could apply. I am merely thinking of the type, not the exact character, and I think that this is meant for the one taking up the baton. It would be great if a Canadian or Nordic picks up that challenge. And I have a few more ideas. It could be set against actual political events of 2025. As I see it, this movie makes a massive impact if the movie starts with:
This movie is entirely fictional, any resemblance to actual persons or events is purely coincidental
A wink to the 1932 claim against MGM is entertaining enough, but when you base this on 2025 events it will gain traction with millions of conspiracy theorists who will drive the movie along, making a lot more people interested in seeing this work.
A simple setting that should make Alfred Hitchcock wink at the writer and director; in equal measure Orson Welles would applaud the setting, as it is a wink towards The Night That Panicked America and the October 30, 1938 broadcast it was based upon. It was one hell of a peekaboo.
I reckon this can be done again, and nicely, on the big screen. So if you reckon yourself to be a scriptwriter, here is your chance.
Have a great day. Another fine idea released before Monday morning.
That is at times the issue. I would add to this “especially when we consider corporations the size of Microsoft”, but this is nothing directly on Microsoft (I emphasise this as I have been dead set against some ‘issues’ Microsoft dealt us). This is different, and I have two articles that (to some aspect) overlap, but they are not the same, and the overlap should be seen subjectively.
The first one is BBC (at https://www.bbc.com/news/articles/c4gdnz1nlgyo) where we see ‘Microsoft servers hacked by Chinese groups, says tech giant’, where the first thought that overwhelmed me was “Didn’t you get Azure support arranged through China?” But that is in the back of my mind. We are given “Chinese “threat actors” have hacked some Microsoft SharePoint servers and targeted the data of the businesses using them, the firm has said. China state-backed Linen Typhoon and Violet Typhoon as well as China-based Storm-2603 were said to have “exploited vulnerabilities” in on-premises SharePoint servers, the kind used by firms, but not in its cloud-based service.” I am wondering about the quote “not in its cloud-based service”. I have questions, but I am not doubting the quote. To doubt it, one needs to have in-depth knowledge and be deeply versed in Azure, and I am not one of those people. As I personally see it, if one is transgressed upon, the opportunity arises to ‘infect’ both, but that might be my wrong look on this. So as we are given ““China firmly opposes and combats all forms of cyber attacks and cyber crime,” China’s US embassy spokesman said in a statement. “At the same time, we also firmly oppose smearing others without solid evidence,” continued Liu Pengyu in the statement posted on X. Microsoft said it had “high confidence” the hackers would continue to target systems which have not installed its security updates.” This makes me think about the UN/USA attack on Saudi Arabia regarding that columnist no one cares about, giving us the ‘high confidence’ from the CIA. It sounds like the start of a smear campaign. If you have evidence, present the evidence. If not, be quiet (to some extent).
We then get someone who knows what he is talking about: “Charles Carmakal, chief technology officer at Mandiant Consulting firm, a division of Google Cloud, told BBC News it was “aware of several victims in several different sectors across a number of global geographies”. Carmakal said it appeared that governments and businesses that use SharePoint on their sites were the primary target.” This is where I got to thinking: what is the problem with SharePoint? And then we consider the quote “Microsoft said Linen Typhoon had “focused on stealing intellectual property, primarily targeting organizations related to government, defence, strategic planning, and human rights” for 13 years. It added that Violet Typhoon had been “dedicated to espionage”, primarily targeting former government and military staff, non-governmental organizations, think tanks, higher education, the media, the financial sector and the health sector in the US, Europe, and East Asia.”
It sounds ‘nice’, but it flows towards thoughts like “related to government, defence, strategic planning, and human rights” for 13 years. So where was the diligence in preventing issues with SharePoint and cyber crime? Consider that we are given “SharePoint hosts OneDrive for Business, which allows storage and synchronization of an individual’s personal work documents, as well as public/private file sharing of those documents.” That quote alone should have driven the need for much higher cyber checks. And perhaps they were done, but as I see it, the result has been unsuccessful. It made me (perhaps incorrectly) think that with so many programs covering desktops, laptops, tablets and mobiles over different systems, a lot more cyber requirements should have been in place; perhaps they are, but it is not working. And as this solution has been in place for close to two decades, with 13 years of attempted transgression, the solution does not seem to be safe.
And then the end quote “Meanwhile, Storm-2603 was “assessed with medium confidence to be a China-based threat actor””; as such, we stepped away from ‘high confidence’, making this setting a larger issue. And my largest issue is that when you look to find “Linen Typhoon” you get loads of links, most of them no older than 5 days. If they have been active for 13 years, I should have found a collection of articles close to a decade old, but I never found them. Not in over a dozen pages of links. Weird, isn’t it?
The next part is one that comes from TechCrunch (at https://techcrunch.com/2025/07/22/google-microsoft-say-chinese-hackers-are-exploiting-sharepoint-zero-day/) where we are given ‘Google, Microsoft say Chinese hackers are exploiting SharePoint zero-day’, and this is important, as a zero-day means “The term “zero-day” originally referred to the number of days since a new piece of software was released to the public, so “zero-day software” was obtained by hacking into a developer’s computer before release. Eventually the term was applied to the vulnerabilities that allowed this hacking, and to the number of days that the vendor has had to fix them.” This implies that the underlying software has been in circulation for over two decades, and that there is a much larger issue, as the software solution is set over iOS, Android and Windows Server. Microsoft was eager to divulge that this solution is ‘available’ to over 200 million users as of December 2020. As I see it, the danger and damage might be spread over a much larger population.
Part of the issue is that there is no clear path of the vulnerability, when you consider the image below (based on a few speculations on how the interactions go).
I get at least 5 danger points, and if there are multiple servers involved, there will be more. And as we are given “According to Microsoft, the three hacking groups were observed exploiting the zero-day vulnerability to break into vulnerable SharePoint servers as far back as July 7. Charles Carmakal, the chief technology officer at Google’s incident response unit Mandiant, told TechCrunch in an email that “at least one of the actors responsible” was a China-nexus hacking group, but noted that “multiple actors are now actively exploiting this vulnerability.”” I am left with questions. You see, when was this ‘zero day’ exploit introduced? If it was ‘seen’ as per July 7, how long had the danger been in this system solution? There is also a lack in the BBC article as to properly informing people. You cannot hit Microsoft with a limited information setting when the stakes are this high. Then there is the setting of what makes the Typhoon sheets (Linen) and the purple storm (Violet Typhoon) guilty as charged (charged might be the wrong word), and what makes the March 26th heavy weather (Storm-2603) guilty?
I am not saying they cannot be guilty; I am seeing a lack of evidence. I am not saying that the people making these connections should ‘divulge’ all, but more details might not be the worst idea. And I am not blaming Microsoft here. I get that there is (a lot) more than meets the eye (making Microsoft a Constructicon). But the lack of information makes the setting one of misinformation, and that needs to be said. The optional zero-day bug is one that is riddled with missing information.
So then we get to the second article, which also comes from the BBC (at https://www.bbc.com/news/articles/czdv68gejm7o), giving us ‘OpenAI and UK sign deal to use AI in public services’ where we get “OpenAI, the firm behind ChatGPT, has signed a deal to use artificial intelligence (AI) to increase productivity in the UK’s public services, the government has announced. The agreement signed by the firm and the science department could give OpenAI access to government data and see its software used in education, defence, security, and the justice system.” Microsoft put billions into this, and this is a connected setting. How long until the personal data of millions of people is out in the open for all kinds of settings?
So as we are given “But digital privacy campaigners said the partnership showed “this government’s credulous approach to big tech’s increasingly dodgy sales pitch”. The agreement says the UK and OpenAI may develop an “information sharing programme” and will “develop safeguards that protect the public and uphold democratic values”.” So, data sharing? Why not get another server setting, with the software solution also set on the government server? When you see some sales person tell you that there will be ‘additional safeties installed’, know that you are being bullshitted. Microsoft made similar promises in 2001 (Code Red) and even today the systems are still being transgressed upon, and those are merely the hackers. The NSA and other American government agencies get near clean access to all of it, and that is a problem with American-based servers. And still, there is only so much that the GDPR (General Data Protection Regulation) allows for, and I reckon that there are loopholes for training data. As such I reckon that the people in the UK will have to set a name-and-shame setting with mandatory prosecution for anyone involved with this caper, going all the way up to Prime Minister Keir Starmer. So when you see mentions like the ““treasure trove of public data” the government holds “would be of enormous commercial value to OpenAI in helping to train the next incarnation of ChatGPT””, I would be mindful about handing over or giving access to this data; do not let it out of your hands.
The link between the two is now clear. Data and transgressions have been going on since before 2001, and when data gets ‘trained’ we are likely to see more issues; when Prime Minister Keir Starmer goes ‘we’re sorry’, you better believe that the time has come to close the tap and throw Microsoft out of the windows in every governmental building in the Commonwealth. I doubt this will be done, as some sales person will heel over like a little bitch and your personal data will become the data of everyone who is mentionable. They will then select the population that has value for commercial corporations, and the rest? The rest will become redundant by natural selection according to the value base of corporations.
I get that you think these are now becoming ‘conspiracy based’ settings and you resent them. I get that, I honestly do. But do you really trust UK Labour after they wasted 23 billion pounds on an NHS system that went awry (several years ago)? I have a lot of problems showing trust in any of this. I do not blame Microsoft, but the overlap is concerning, because at some point it will involve servers and transfers of data. And it is clear there are conflicting settings; when someone learns to aggregate data and connect it to a mobile number, your value will be determined. And as these systems interconnect more and more, you will find that you face identity theft and value assessment not a set number of times in total, but once per X days, and as X decreases, you pretty much can rely on the fact that your value becomes debatable. I reckon this setting shows the larger danger, where one side sees your data as a treasure trove and the other claims to “deliver prosperity for all”. That, and the diminished setting of “really be done transparently and ethically, with minimal data drawn from the public”, is a foundation of nightmares, mainly as the setting of “minimal data drawn from the public” tends to have a larger stage: it is set to what is needed to aggregate to other sources, which lacks protection at that larger stage. And when we consider that any actor could get these two connected (and sell them on), that should be considered a new kind of national security risk. America (and the UK) are already facing this, as these people left for the Emirates with their billions. Do you really think that this was the setting? It will get worse as America needs to hang on to any capital leaving America; do you think that this is different for the UK? Now, you need to consider what makes a person wealthy. This is not a simple question, as it is not the bank balance but an overlap of factors.
Consider that you have 2,000 people who enjoy life and 2,000 who are health nuts. Who do you think is set at a higher value? The insurance person says the health nut (insurance without claims); the retailer says the people who spend and live life. And the (so-called) AI system has to filter this down to 3,000 people. So, who gets disregarded from the equation? This cannot be done until you have more data, and that is the issue. And the equation is never this simple; it will be set to thousands of elements, and these firms should not have access. As such I fear for the data making it outside UK grounds.
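The conflict above can be sketched in a few lines. This is a purely hypothetical scoring example, with invented weights and figures, showing how the same profile data yields opposite "value" depending on who is asking.

```python
# Hypothetical sketch: two buyers of the same profile data score
# people with opposite weightings. All figures are invented.

people = [
    {"id": 1, "spend": 900, "claims": 2},   # enjoys life
    {"id": 2, "spend": 200, "claims": 0},   # health nut
]

def insurer_value(p):
    # The insurer prizes an absence of claims.
    return 100 - 50 * p["claims"]

def retailer_value(p):
    # The retailer prizes spending.
    return p["spend"] / 10

for p in people:
    print(p["id"], insurer_value(p), retailer_value(p))
# The same person is "high value" on one scale and disposable on the
# other - so which 1,000 of the 4,000 get disregarded depends entirely
# on who is asking, and on data neither party has verified.
```

One person scores 0 with the insurer and 90 with the retailer; the other scores the reverse. Neither scale is wrong on its own terms, which is exactly why the filtered-out group is arbitrary.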
A setting coming from overlaps, and none of this is the fault of Microsoft, but they will be connected to (and optionally blamed for) all this. As I personally see it, the two elements that matter in this case are “Digital rights campaign group Foxglove called the agreement “hopelessly vague”” and “Co-executive Director Martha Dark said the “treasure trove of public data” the government holds” will be of significant danger to public data, because greed-driven people tend to lose their heads over words like ‘treasure trove’ and that is where ‘errors are made’. I reckon it will not take long before the BBC or another media outlet trips up over the settings, making the optional claim that ‘glitches were found in the current system’ and no one was to blame. Yet that will not be the whole truth, will it?
So have a great day, and consider the porky pies you are told and who is telling them to you. Should you consider that it is me, make sure that you realise that I am merely telling you what is out in the open and what you need to consider. Have a great day.