Tag Archives: Dario Amodei

What is real?

That is at times the question, the setting where someone is trying to sell us something fake. Now, I am a most outspoken person in regards to AI: it doesn’t exist (yet), and whilst the media is all about AI (for their digital dollars), the real setting is when it will arrive. No matter how clever programmers become, it is still a programmer’s Wild Wild West. So when I took notice of the BBC (at https://www.bbc.com/audio/play/w3ct8mf3) I had different questions. We are given “Anthropic – one of Silicon Valley’s leading AI firms – recently announced that they have built a model which is too dangerous to be released to the public. Instead, they are only giving access to the model to a handful of big companies, to help them find security vulnerabilities. The company says the model has already found weak spots in “every major operating system and web browser”. Is this a genuine example of a company acting responsibly, or more of a carefully calibrated publicity move?” OK, the premise seems clear: whatever they call AI (let’s call it Fake AI) might have become a tad more potent, and giving it to a chosen few might be the way to go. I personally would advise Dario Amodei to talk to IBM; this is not some prearranged setting. As far as I know IBM is the most advanced player for shallow circuits, and that is one of the thresholds to get to Real AI; until that moment comes, all AI is fake. Optionally he should talk to Google too, as I have no idea how far their shallow circuits are. But it is one of the three remaining thresholds before we can get to a Real AI setting. The other two are the Trinary Operating System and decent weeding (like removing arranged data from verifiable data). We already have quantum technology, so that is on par. The weeding part comes, I reckon, when shallow circuits are done, because when we combine this with the TOS (my personal gag here and I am giggling) we have the makings of perfect data dirt weeding. But the setting also evokes other thoughts.
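The “weeding” idea, separating arranged data from verifiable data, can be pictured as a simple provenance filter. This is purely my sketch of the concept, not anything IBM or Anthropic has published; the records, the signature field and the checksum scheme are all hypothetical, with a hash standing in for whatever real verification would look like:

```python
import hashlib

def checksum(text):
    """Stand-in for a provenance check: a SHA-256 digest of the content."""
    return hashlib.sha256(text.encode()).hexdigest()

# Hypothetical records: each claims a signature proving where it came from.
records = [
    {"content": "sensor reading 42", "sig": None},
    {"content": "eyewitness report", "sig": None},
    {"content": "arranged story", "sig": "deadbeef"},  # forged signature
]
# Stamp the first two as if they came from a verifiable source.
for r in records[:2]:
    r["sig"] = checksum(r["content"])

def weed(rows):
    """Drop any row whose signature does not match its content."""
    return [r for r in rows if r["sig"] == checksum(r["content"])]

clean = weed(records)
print(len(clean))  # 2 of the 3 records survive the weeding
```

Real data weeding would of course need far more than a checksum, but the shape of the operation is the same: keep what can be verified, discard what was merely arranged.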
If Anthropic is this far ahead, what the hell is Sam Altman doing with all the billions he is seemingly squandering? You see ‘OpenAI to spend over $20 bln on Cerebras chips’. I am not debating the setting, it might be the strongest there is (for now), but if this market is thrown upside down in less than a decade, it implies that Sam Altman just wasted billions on chips that are basically obsolete by the end of the year. And in that same setting the quote “OpenAI is valued at approximately $852 billion”; what will be left of that when 2027 comes calling? I have supporting ideas. If Anthropic is ahead of OpenAI, as I reckon is Google, who will pay $852 billion for a third place setting? And in addition we know that DeepSeek is out there, but no one knows how far ahead or lagging it is. Whatever the others can do, it can seemingly do at a much lower cost, and when did business walk away from cost reductions?

All thoughts that come to mind, and the media is weirdly unaware of them, so who are they working for? Not the audience, that is seemingly clear. But if you want to dismiss my calling, that is fair. So feel free to investigate your own data, and don’t use one source, use at least half a dozen sources; when you do, you will figure out that the equations and the money drop are not evening out. It is all reminiscent of the 90’s, where people would pay mountains for mere concepts. I thought we had done away with those settings?

Still, the current call is with Anthropic and Dario Amodei. I wonder how quickly we will see an update on how that is going. I am sure it might take several weeks, but in the meantime we can consider: did OpenAI overtake Google Gemini yet? If so, by how much, and if not, what are these headlines of chips for billions, when Lays has them for $3.99 (ketchup taste optional)?

And yes, 20,000,000,000 is a real number, but so is the return on investment, and where is that number with OpenAI? What is the return on investment? As such, have a lovely day, and if you are not investing in Fake AI, try enjoying your coins in acquiring some coffee or tea; they both tend to wake up the senses.
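To make the return-on-investment question concrete: a simple back-of-the-envelope, with the $20 billion chip figure from the headline and everything else (revenue, margin) an illustrative assumption of mine, not a published OpenAI number:

```python
# Hypothetical figures only: the chip spend is the reported headline
# number; the revenue and margin are illustrative assumptions.
chip_spend = 20_000_000_000      # reported Cerebras commitment
annual_revenue = 4_000_000_000   # assumed annual revenue
operating_margin = 0.10          # assumed margin

annual_return = annual_revenue * operating_margin

# Simple ROI: net gain per year divided by the outlay,
# and the years needed to earn the outlay back.
roi = annual_return / chip_spend
payback_years = chip_spend / annual_return

print(f"ROI per year: {roi:.2%}")               # 2.00%
print(f"Years to recoup: {payback_years:.0f}")  # 50
```

Under those assumed numbers the chips pay for themselves in half a century, which is the arithmetic gap the paragraph is pointing at: the spend is public, the return side of the equation is not.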

Leave a comment

Filed under Finance, IT, Media, Science

Confusion speaks its mind

So here I was, one day in the past, and I see a BBC article. I saw the headline, I saw the ‘bully approach’ and initially I ignored it. It was not the BBC; there was no setting that seemingly truly interested me. I was thinking of a few settings towards IP that could give Apple (and optionally Meta) a nice boost. As I was mulling over the ideas I was having, in comes the CBC about 10 hours ago, or better stated, I noticed their article, and now something clicks in my mind. I started rereading the two articles. The BBC (at https://www.bbc.com/news/articles/cn48jj3y8ezo) gives us ‘Trump orders government to stop using Anthropic in battle over AI use’ with ““We don’t need it, we don’t want it, and will not do business with them again!” Trump wrote in a Truth Social post on Friday.” Of course, if he doesn’t want it, there must be a good reason why people might want to use it, and we are given “Anthropic is mired in a row with the White House after refusing demands that it agree to give the US military unfettered access to its AI tools. The refusal led US Defence Secretary Pete Hegseth to say he’s deemed Anthropic a “supply chain risk”.” And we are given the quandary that there should be some clarity. The idea that the US Military has unrestrained or uninhibited access to any AI is dangerous, and that is merely to look at it from THEIR point of view. Over the last 5 years we saw a few examples where Pentagon staff used whatever USB key they had, optionally opening their systems to backdoors. This can affect the Pentagon in several ways, including Human Interface Device (HID) spoofing, malware infection via social engineering, exploiting OS vulnerabilities, or juice jacking (compromised public ports/cables), and a few other ways. Even in this decade more than one system seemingly ended up on the danger list.
So, ‘someone’ now wants to grant AI unfettered access, which opens the doors to AI data access that involves sophisticated, automated and often continuous interaction between intelligent systems and vast data sources, including internal corporate databases, cloud storage, and public web content. It constitutes a critical, high-speed, and high-stakes component of the modern AI ecosystem that raises significant security and privacy challenges. And this is not some ‘fear mongering’. There is a lot of AI work that is still to be considered, and because AI doesn’t exist and this is all DML on several layers that interact, there are dangers to be seen. As we saw a mere week ago, Microsoft had to ‘confess’ that it had accessed confidential emails of Microsoft users. Now consider this happening on a serious level in the Pentagon. It has well over 50,000 desktop computers within its building, with reports from 2014 indicating at least 18,000 were part of specific virtualized infrastructure. Now consider that we have seen the accusation of “Based on reports in early 2025 and 2026, OpenAI has accused Chinese AI startup DeepSeek of “inappropriately” distilling, or copying, the capabilities of OpenAI’s models (specifically ChatGPT and its reasoning models like o1) to train its own competing, low-cost models (such as DeepSeek-R1)”. As such, the dangers of unfettered access can go in two directions, and that sets the bar of distilling from the Pentagon a lot lower than anyone could find acceptable. As such, there is every chance that Russia is already considering the massive win it could gain once that unfettered access merely hits one system that was transgressed upon. Because the greedy and the stupid will do anything to propel the setting of self, whilst not caring what others could gain in that setting as well.
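For readers unfamiliar with the term: distillation means training a “student” model to imitate a “teacher” model by matching the teacher’s output probabilities rather than raw labels. A minimal sketch of the core calculation, with made-up logits for a single prompt (no real model or API involved):

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw logits into a probability distribution; a temperature
    above 1 'softens' it, exposing more of the teacher's preferences."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q): how far the student distribution q strays
    from the teacher distribution p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical logits for one prompt: the teacher is the model being
# queried, the student is the model being trained on its answers.
teacher_logits = [4.0, 1.5, 0.2]
student_logits = [2.0, 1.8, 1.0]

T = 2.0  # softening temperature
teacher_soft = softmax(teacher_logits, T)
student_soft = softmax(student_logits, T)

# A distillation run would minimise this loss over many prompts.
loss = kl_divergence(teacher_soft, student_soft)
print(loss > 0)  # True: the student has not yet matched the teacher
```

The point that matters for the Pentagon scenario: all the teacher has to do is answer queries. Anyone who can ask enough questions of a system with unfettered access can start pulling its behaviour, and the data behind it, into a model of their own.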

So whilst some will consider the dangers of “The company said that “designating Anthropic as a supply chain risk would be an unprecedented action — one historically reserved for US adversaries, never before publicly applied to an American company.” Anthropic said the “designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.””, no one seems to be considering that the opposite is a lot more dangerous. So whilst some focus on the stage of “Anthropic had said it sought narrow assurances from the Pentagon that its AI chatbot Claude would not be used for mass surveillance of Americans or in fully autonomous weapons. The Pentagon said it was not interested in such uses and would only deploy the technology in legal ways, but it also insisted on access without any limitations. The government’s effort to assert dominance over the internal decision-making of the company comes amid a wider clash over AI’s role in national security and concerns about how increasingly capable machines could be used in high-stakes situations involving lethal force, sensitive information or government surveillance. Trump said Anthropic made a mistake trying to strong-arm the Pentagon. He wrote on Truth Social that most agencies must immediately stop using Anthropic’s AI but gave the Pentagon a six-month period to phase out the technology that is already embedded in military platforms.”, as I personally see it, it is the accumulation of stupid and technologically ignorant all combined in one package. And that is before we get to mass surveillance. You see, combine mass surveillance with data distilling and the United States of America will be handing the data on 349 million Americans straight to China and Russia. This is not AI, this is DML. That means it comes with the hang-ups and limitations of a programmer. So when this goes wrong, it goes wrong in a massive way.

As such what will people like President Trump and Pete Hegseth say? Do they think that the response ‘Oops’ will cover it?

So whilst CBC (at https://www.cbc.ca/news/business/trump-anthropic-feud-ai-9.7109006) gives us “U.S. President Donald Trump, U.S. Defence Secretary Pete Hegseth and other officials took to social media to chastise Anthropic for failing to allow the military unrestricted use of its AI technology by a Friday deadline, accusing it of endangering national security after CEO Dario Amodei refused to back down over concerns the company’s products could be used in ways that would violate its safeguards.” And this is the setting we expect to see, and it will be the undoing of several people, because as I see it, “U.S. President Donald Trump, U.S. Defence Secretary Pete Hegseth and other officials” is the start of what comes next. You see, the internet doesn’t forget, and these ‘other officials’ have sealed their fate with this action; there is no ‘He told me to do that’. They were instrumental in assisting to hand over the data of the population of the United States of America to optionally both China and Russia. Do you feel safe now?

And in response to this setting we see “The dispute stunned AI developers in Silicon Valley, where venture capitalists, prominent AI scientists and a large number of workers from Anthropic’s top rivals — OpenAI and Google — voiced support for Amodei’s stand in open letters and other forums.” And that should have been a clear message that the competition was on the side of Amodei, so why would that be? Whilst people in the Pentagon (seemingly) forgot about that router with password ‘Cisco123’, there is every chance that these DML engines will be cleverly distilled by people controlling systems like DeepSeek and whatever the Russians have. I should buy another egg timer, because this is a setting that might gain me a few coins, especially as several people are blind to the danger that is coming for them. And consider one additional setting.

So what happens when distilling comes with an additional insertion of data? I can’t wait for that setting to lose balance and the training data in American data centers starts losing authentication and reliability markers. But that is likely a story for another day.
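That “insertion of data” scenario is what security people call data poisoning: if an attacker can inject records into the pool a model distils from, the distilled answers drift. A toy sketch, with entirely hypothetical records and a deliberately naive student that just adopts the majority answer:

```python
from collections import Counter

def distilled_answer(corpus, question):
    """Toy 'distillation': the student adopts the majority answer
    found in the (question, answer) records it was trained on."""
    answers = [a for q, a in corpus if q == question]
    return Counter(answers).most_common(1)[0][0]

# Hypothetical training pool scraped from a 'teacher' system.
corpus = [("capital_of_x", "Alpha"), ("capital_of_x", "Alpha"),
          ("capital_of_x", "Alpha")]

print(distilled_answer(corpus, "capital_of_x"))  # Alpha

# An attacker inserts forged records into the same pool.
corpus += [("capital_of_x", "Beta")] * 5

print(distilled_answer(corpus, "capital_of_x"))  # Beta
```

A real model fails far less crisply than a majority vote, but the mechanism is the same: without authentication and reliability markers on the training data, whoever can write into the pool gets a say in what the model believes.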

Have a great day today.

Leave a comment

Filed under IT, Law, Media, Military, Politics, Science