Tag Archives: true AI

Ignoring the centre of the pie

That is the setting I saw when I took notice of ‘Will quantum be bigger than AI?’ (at https://www.bbc.com/news/articles/c04gvx7egw5o). Now, there is no real blame to hand out here, and certainly none on Zoe Kleinman (she is an editor). As I personally see it, we have no AI. What we have is DML and LLM (and combinations of the two); they are great tools and they can get a whole lot done, but they are not AI. Why do I feel this way? The only real version of AI was the one Alan Turing introduced us to, and we are not there yet. Three components are missing. The first is Quantum Processing. We have that, but it is still in its infancy. The few true Quantum systems there are sit in the hands of Google, IBM and, I reckon, Microsoft. I have no idea who leads this field, but these are the players. Still, they need a few things. In the first setting, Shallow Circuits need to evolve further. As far as I know (which is not much), they are still evolving. So what is a shallow circuit? Think of a computation as a number of steps that have to happen one after the other; normally, the larger the problem, the longer that chain of steps becomes. A shallow circuit keeps that chain short. To put it in layman’s terms: the problem grows, but the number of steps in a row does not.
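To make that notion of a short chain of steps a little more concrete, here is a minimal sketch in plain Python (nothing quantum about it, and the function names are mine, purely for illustration): combining inputs one after another gives a chain that grows with the problem, while combining them pairwise in a tree keeps the number of layers small.

```python
# Toy illustration of circuit depth: how many steps must happen one after the other.
import math

def chain_depth(n: int) -> int:
    """Depth when each new input has to wait for the previous result."""
    return max(n - 1, 0)

def tree_depth(n: int) -> int:
    """Depth when inputs are combined pairwise, layer by layer."""
    return math.ceil(math.log2(n)) if n > 1 else 0

for n in (8, 1_000, 1_000_000):
    print(f"{n:>9} inputs: chain depth {chain_depth(n):>9}, tree depth {tree_depth(n):>2}")
```

A million inputs need a million-step chain but only about twenty tree layers; ‘shallow’ is the same idea pushed further still, where the depth stops growing at all.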

To put this in perspective, let’s take another look. In the 90s we had B-tree (and B+ tree) indexes. In that setting, let’s say we have a file with a million entries. A B-tree search jumps to the 50% marker and asks: is the record we need above or below that point? Then it takes half of the remaining range and asks the same question. So where a flat system (like dBase III+) scans from start to finish, the B-tree goes 0, then 500,000, then 750,000, then 625,000; if the record we need happens to be number 625,000, it is found in a handful of steps where a sequential scan would have had to plough through 624,999 records first (there is a small code sketch of that halving below). It is by far the speediest setting; it is not foolproof, and that index is a monster to maintain, but it has clear benefits. Shallow Circuits have roughly the same kind of benefit (if you want to read up on this, there is something at https://qutech.nl/wp-content/uploads/2018/02/m1-koenig.pdf); it was a collaboration of Robert König with Sergey Bravyi and David Gosset in 2018. The gist of it is given through “Many locality constraints on 2D HLF-solving circuits”, where “A classical circuit which solves the 2D HLF must satisfy all such cycle relations” and the stage becomes “We show that constant-depth locality is incompatible with these constraints”.

And that is the first setting showing that the AIs we see out there are not real AIs, which, as I personally see it, will be the start of several class actions in 2026. As far as I can tell, large law firms are suiting up for this, as these are potentially trillion dollar money makers (see this as 5 times $200B), so law firms are on board for defence and for prosecution.

You see, there is another step missing, two steps actually. The first is that this requires a new operating system, one that enables the use of the Epsilon Particle. It will be the end of Binary computation and the beginning of Trinary computation, which is essential to True AI (I am adopting this phrase to stop confusion). The world is not really Yes/No (or True/False); that is not how True AI or nature works. We merely adopted this setting decades ago, because that was what there was and IBM got us there. The missing step is seen in the setting NULL, TRUE, FALSE, BOTH. NULL means there is no interaction at all; beyond that, the outcome is FALSE, TRUE or BOTH, and BOTH is a valid outcome (a second sketch below shows the idea). The people who bravely (it might be stupidly) claim they can do this today are the first to fall into these losing class actions. The quantum chip can deal with the premise, but the OS it deals with needs to have a trinary setting to handle the BOTH option, and that is where the horse is currently absent. As I see it, that stage is likely a decade away (but I could be wrong, and I have no idea where IBM is in that setting, as the paper is almost a decade old).
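For those who want to see that halving in action, here is a minimal sketch in Python: a plain binary search over a sorted list of a million record numbers rather than a real B-tree, with function names of my own, purely for illustration.

```python
# A plain binary search standing in for the B-tree comparison above.
def lookup(records: list[int], target: int) -> tuple[int, int]:
    """Return (index, probes): halve the remaining range until the target is found."""
    lo, hi, probes = 0, len(records) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1
        if records[mid] == target:
            return mid, probes
        if records[mid] < target:
            lo = mid + 1          # the record sits in the upper half
        else:
            hi = mid - 1          # the record sits in the lower half
    return -1, probes             # not found

records = list(range(1, 1_000_001))            # one million entries
index, probes = lookup(records, 625_000)
print(f"found record 625,000 at index {index} after {probes} probes")
print("a sequential scan would have read 624,999 records before reaching it")
```

Run it and the probes land on records 500,000, 750,000 and 625,000, exactly the jumps described above: three probes against 624,999 sequential reads.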
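And for the NULL, TRUE, FALSE, BOTH setting, here is a minimal sketch of what such a four-state value could look like in today’s software, loosely in the spirit of Belnap’s four-valued logic. This is my own illustration, not a description of any trinary OS or of how the Epsilon Particle would be handled.

```python
# A four-state value: NULL (no interaction), FALSE, TRUE, or BOTH at once.
from enum import Flag

class State(Flag):
    NULL  = 0                  # no interaction, no information at all
    FALSE = 1
    TRUE  = 2
    BOTH  = FALSE | TRUE       # evidence for both outcomes at the same time

def combine(a: State, b: State) -> State:
    """Merge two observations: NULL adds nothing, TRUE plus FALSE gives BOTH."""
    return a | b

print(combine(State.NULL, State.TRUE))    # State.TRUE
print(combine(State.TRUE, State.FALSE))   # State.BOTH
print(combine(State.NULL, State.NULL))    # State.NULL
```

The only point of the sketch is that BOTH is a first-class outcome rather than an error, which is exactly the option a binary True/False setting cannot express.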

But that is the setting I see. So when we go back to the BBC with “AI’s value is forecast in the trillions. But they both live under the shadow of hype and the bursting of bubbles. “I used to believe that quantum computing was the most-hyped technology until the AI craze emerged,” jokes Mr Hopkins.” Fair view, but as I see it the AI bubble is a real one with all the dangers it holds, as AI isn’t real (at present). Quantum is the real deal and only a few can afford it (hence IBM, Google, Microsoft), and the people who can afford such a system (apart from these companies) are Mark Zuckerberg, Elon Musk, Sergey Brin and Larry Ellison (as far as I know), because a real quantum computer takes a truckload of energy and the processor (and storage) are massively expensive. How expensive? Well, I don’t think Aramco could afford it, not without dropping a few projects along the way. So you need to be THAT rich, to say the least.

To give another frame of reference: “Google unveiled a new quantum chip called Willow, which it claimed could take five minutes to solve a problem that would currently take the world’s fastest super computers 10 septillion years – or 10,000,000,000,000,000,000,000,000 years – to complete.” And that is the setting for True AI, but the programming isn’t even close to ready, because this is all problem by problem, whilst a True AI (like V.I.K.I. in I, Robot) can juggle all these problems in an instant. As I personally see it, that setting is decades away, and that is if the previous steps are dealt with. Even so, I oppose the thought “Analysts warned some key quantum stocks could fall by up to 62%”, as there is nothing wrong with Quantum computing; as I see it, it is the expectations of the shareholders that are likely wrong. Quantum is solid, but it is a niche without a paddock. Still, whoever holds the Quantum reins will be the first one to hold a true AI, and that is worth the worries and the profits that follow.

So whilst I see this article as an eye opener, I don’t entirely see eye to eye with it. The writer did nothing wrong. So whilst we might see that Elon Musk has a point, as the article states: “This week Elon Musk suggested on X that quantum computing would run best on the “permanently shadowed craters of the moon”.” That might work with super magnet drives, quantum locking and a few other settings on the edge of the dark side of the moon. I see some ‘play’ in this, but I have no idea how far along it is, or what the data storage systems are (at present), and that is the larger equation here. Because as I see it, trinary data cannot be stored on binary data carriers, no matter how cool it is kept with liquid nitrogen. And that is at the centre of the pie: how to store it all. Because just like the energy constraints and the processing constraints, the tech firms did not really elaborate on this, did they? So how far along that is, is anyone’s guess, but I personally would consider (at present, and uneducated) IBM to be the ruling king of the storage systems. But that might be wrong.
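To put a rough number on that storage question, here is a back-of-the-envelope sketch in Python (my own arithmetic, nothing vendor-specific): a binary carrier can at best emulate trinary data, and it pays roughly 1.58 bits for every trit it pretends to hold, which is why native trinary storage sits at the centre of the pie.

```python
# How many bits does it cost to emulate base-3 data on a binary carrier?
import math

BITS_PER_TRIT = math.log2(3)                 # about 1.585 bits per trit

def bits_needed(trits: int) -> int:
    """Minimum whole number of bits to emulate this many base-3 symbols."""
    return math.ceil(trits * BITS_PER_TRIT)

for trits in (1, 5, 1_000_000):
    bits = bits_needed(trits)
    print(f"{trits:>9} trits need at least {bits:>9} bits ({bits / trits:.3f} bits per trit)")
```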

So have a great day and consider where your money is, because when these class actions hit, someone wins, and it is most likely the lawyer that collects the fees; the rest will lose just like any other player in that town. So how do you like your coffee at present, and do you want a normal cup or a quantum thermal?


Filed under Finance, IT, Law, Media, Politics, Science

Microsoft in the middle

Well, that is the setting we are given; however, it is time to give them some relief. It isn’t just Microsoft; Google and all the other peddlers handing over AI like it is a decent brand are involved too. So the BBC article (at https://www.bbc.com/news/articles/c24zdel5j18o) giving us ‘Microsoft boss troubled by rise in reports of ‘AI psychosis’’ is a little warped. First things first: what is psychosis? Psychosis is a setting where we are given “Psychosis refers to a collection of symptoms that affect the mind, where there has been some loss of contact with reality. During an episode of psychosis, a person’s thoughts and perceptions are disrupted and they may have difficulty recognizing what is real and what is not.” Basically the settings most influencers like to live by. Many do this already, for the record. The media does this too.

As such, people are losing their grip on reality. So as we see the malleable setting that what we see is not real, we get the next setting. As people lived by the rule of “I’ll believe it when I see it” for decades, this becomes a shifty setting. So whilst people want to ‘blame’ Microsoft for this, as I see it, the use of NIP (Near Intelligent Parsing) is getting a larger footing. Adobe, Google, Amazon: they are all equally guilty.

So how far does the media take this?

I’ll say, this far.

But back to the article. The article also gives us “In a series of posts on X, he wrote that “seemingly conscious AI” – AI tools which give the appearance of being sentient – are keeping him “awake at night” and said they have societal impact even though the technology is not conscious in any human definition of the term.” I respond that when you give any IT technology a level 8 question (user level) and it responds as if the answer is casually true, it isn’t. It comes from my mindset that states: if sarcasm bounces back, it becomes irony.

So whilst we see that setting in ““There’s zero evidence of AI consciousness today. But if people just perceive it as conscious, they will believe that perception as reality,” he wrote. Related to this is the rise of a new condition called “AI psychosis”: a non-clinical term describing incidents where people increasingly rely on AI chatbots such as ChatGPT, Claude and Grok and then become convinced that something imaginary has become real.”, it is kinda true, but the most imaginative setting of the use of Grok tends to be

I reckon we are safe for a few more years. And whilst we pore over the essentials of TRUE AI, we tend to have at least two decades, and even then only the really big players can afford it. As such there is a chance the first REAL AI will respond with “我們可以為您提供什麼協助?” (“How may we assist you?”). As I see it, we are safe for the rest of my life.

So whilst we consider “Hugh, from Scotland, says he became convinced that he was about to become a multi-millionaire after turning to ChatGPT to help him prepare for what he felt was wrongful dismissal by a former employer.”, consider that law firms and most advocacy services give free initial advice; they want to ascertain whether it pays for them to take the case on. So when the answer is that it doesn’t pay, a real barrister will see that the case is either without legal merit, trivial, or too hard to prove. And he will give you that answer. And that is the reality of things. Considering ChatGPT to be any kind of solution makes you eligible for the Darwin award. It is harsh, but that is the setting we are now in. It is the reality of things that matter, and that is not on any of these handlers of AI (as they call it). And I have written about AI several times, so if it didn’t stick, it’s on you.

Have a great day and don’t let the rain bother you; just fire whoever in the media told you it was gonna rain and get a better result.


Filed under IT, Media, Science