Tag Archives: Shallow Circuits

Ignoring the centre of the pie

That is the setting I saw when I took notice of ‘Will quantum be bigger than AI?’ (at https://www.bbc.com/news/articles/c04gvx7egw5o). Now, there is no real blame to hand out here. There is no blame on Zoe Kleinman (she is an editor). As I personally see it, we have no AI. What we have is DML and LLM (and combinations of the two); they are great tools and they can get a whole lot done, but it is not AI. Why do I feel this way? The only real version of AI was the one Alan Turing introduced us to and we are not there yet. Three components are missing. The first is Quantum Processing. We have that, but it is still in its infancy. The few true Quantum systems there are sit in the hands of Google, IBM and, I reckon, Microsoft. I have no idea who leads this field, but these are the players. Still, they need a few things. In the first setting, Shallow Circuits need to evolve. As far as I know (which is not much), they are still evolving. So what is a shallow circuit? Well, normally the number of steps grows with the size of the process: the larger the process, the more steps. A shallow circuit keeps the number of sequential steps (the depth) constant. To put it in layman’s terms: the process doesn’t grow, it is simplified.
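
To make that ‘the process doesn’t grow’ idea concrete, here is a minimal sketch in Python using Qiskit. This is purely my own illustration (it is not the circuit from the paper I mention below); the only point is that the depth stays fixed while the input grows.

```python
# A minimal sketch of what "constant depth" means, using Qiskit purely as
# an illustration (this is not the 2D HLF circuit from the paper).
from qiskit import QuantumCircuit

def shallow_circuit(n: int) -> QuantumCircuit:
    """One layer of Hadamards, one layer of CZ gates, one layer of Hadamards."""
    qc = QuantumCircuit(n)
    qc.h(range(n))                 # layer 1: put every qubit in superposition
    for i in range(0, n - 1, 2):   # layer 2: entangle disjoint neighbouring pairs
        qc.cz(i, i + 1)
    qc.h(range(n))                 # layer 3: another round of Hadamards
    return qc

# The depth stays the same no matter how many qubits we add.
for n in (4, 40, 400):
    print(n, shallow_circuit(n).depth())   # depth is 3 every time
```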

To put this in perspective, let’s take another look. In the 90s we had B+ trees. In that setting, let’s say we have a register with a million entries. A B-tree search goes to the 50% marker and asks: is the record we need above or below that marker? Then it takes half of that and asks the same question. So where one system (like dBase III+) scans from start to finish, the B-tree goes 0, to 500,000, to 750,000, to 625,000. In a handful of steps it has already ruled out hundreds of thousands of records. It is the speediest setting and it is not foolproof, that index is a monster to maintain, but it had benefits. Shallow Circuits have roughly the same kind of benefit (if you want to read up on this, there is something at https://qutech.nl/wp-content/uploads/2018/02/m1-koenig.pdf); it was a collaboration of Robert König with Sergey Bravyi and David Gosset in 2018. The gist of it is given through “Many locality constraints on 2D HLF-solving circuits”, where “A classical circuit which solves the 2D HLF must satisfy all such cycle relations” and the stage becomes “We show that constant-depth locality is incompatible with these constraints”.

And now you get the first setting: these AI’s we see out there aren’t real AI’s, and that will be the start of several class actions in 2026 (as I personally see it). As far as I can tell, large law firms are suiting up for this, as these are potentially trillion dollar money makers (see this as 5 times $200B), so law firms are on board, for defence and for prosecution. You see, there is another step missing, two steps actually. The first is that this requires a new operating system, one that enables the use of the Majorana particle. You see, it will be the end of binary computation and the beginning of trinary computation, which is essential to True AI (I am adopting this phrase to stop confusion). You see, the world is not really Yes/No (or True/False), that is not how True AI or nature works. We merely adopted this setting decades ago, because that was what there was and IBM got us there. There is one step missing and it is seen in the setting NULL, TRUE, FALSE, BOTH. NULL means there is no interaction at all; an action is FALSE, TRUE or BOTH, and BOTH is a valid setting. The people who claim bravely (might be stupidly) that they can do this today are the first to fall in those losing class actions. The quantum chip can deal with the premise, but the OS it runs needs a trinary setting to deal with the BOTH option and that is where the horse is currently absent. As I see it, that stage is likely a decade away (but I could be wrong, and I have no idea where IBM is in that setting as the paper is almost a decade old).
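
For the curious, here is the halving idea from that B-tree example as a minimal Python sketch (illustrative only; a real B+ tree applies the same principle per node, over disk pages):

```python
# A minimal sketch of the halving idea behind the B-tree example above
# (illustrative only; a real B+ tree applies this per node, over disk pages).
def binary_search(records, target):
    low, high = 0, len(records) - 1
    steps = 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2      # the "50% marker"
        if records[mid] == target:
            return mid, steps
        if records[mid] < target:
            low = mid + 1            # the record lies above the marker
        else:
            high = mid - 1           # the record lies below the marker
    return -1, steps

records = list(range(1_000_000))
index, steps = binary_search(records, 625_000)
print(index, steps)   # found in at most ~20 comparisons, not a million
```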

But that is the setting I see, so when we go back to the BBC with “AI’s value is forecast in the trillions. But they both live under the shadow of hype and the bursting of bubbles. “I used to believe that quantum computing was the most-hyped technology until the AI craze emerged,” jokes Mr Hopkins.” Fair view, but as I see it the AI hype is a real bubble with all the dangers it holds, as AI isn’t real (at present). Quantum is a real deal and only a few can afford it (hence IBM, Google, Microsoft), and the people who can afford such a system (apart from these companies) are Mark Zuckerberg, Elon Musk, Sergey Brin and Larry Ellison (as far as I know), because a real quantum computer takes a truckload of energy and the processor (and storage) are massively expensive. How expensive? Well, I don’t think Aramco could afford it, not without dropping a few projects along the way. So you need to be THAT rich, to say the least. To give another frame of reference: “Google unveiled a new quantum chip called Willow, which it claimed could take five minutes to solve a problem that would currently take the world’s fastest super computers 10 septillion years – or 10,000,000,000,000,000,000,000,000 years – to complete.” And that is the setting for True AI, but the programming isn’t even close to ready, because this is all problem by problem, whilst a True AI (like V.I.K.I. in I, Robot) can juggle all these problems in an instant. As I personally see it, that setting is decades away, and that is if the previous steps are dealt with. Even so, I oppose the thought “Analysts warned some key quantum stocks could fall by up to 62%”, as there is nothing wrong with quantum computing; as I see it, it is the expectations of the shareholders that are likely wrong. Quantum is solid, but it is a niche without a paddock. Still, whoever holds the Quantum reins will be the first to hold a true AI and that is worth the worries and the profits that follow.

So whilst I see this article as an eye opener, I don’t see eye to eye with every side of it. The writer did nothing wrong. And we might see that Elon Musk was onto something: “This week Elon Musk suggested on X that quantum computing would run best on the “permanently shadowed craters of the moon”.” That might work with super magnet drives, quantum locking and a few other settings on the edge of the dark side of the moon. I see some ‘play’ on this, but I have no idea how far this is set and what the data storage systems are (at present), and that is the larger equation here. Because as I see it, trinary data cannot be stored on binary data carriers, no matter how cool it is with liquid nitrogen. And that is at the centre of the pie: how to store it all. Because like the energy constraints and the processing constraints, the tech firms did not really elaborate on this, did they? So how far that is, is anyone’s guess, but I personally would consider (at present, and uneducated) IBM to be the ruling king of the storage systems. But that might be wrong.

So have a great day and consider where your money is, because when these class actions hit, someone wins, and it is most likely the lawyer who collects the fees; the rest will lose, just like any other player in that town. So how do you like your coffee at present, and do you want a normal cup or a quantum thermal?


Filed under Finance, IT, Law, Media, Politics, Science

Is it a public service

There is a saying (that some adhere to): how often can you slap a big-tech company around before it is regarded as personal pleasure instead of a public service? There is an answer, but I am not the proper source of it (and I partially disagree). Slapping Microsoft around tends to be a public service no matter how you slice it. Perhaps some people at 92, NE 36th St, Redmond, WA 98052 might start seeing this as their moment to clean up that soiled behemoth. Anyway, this all actually started yesterday. I saw an article and set it aside. I had other ideas (like actual new IP ideas), but the article was still there this morning and I gave it another look.

The article (at https://www.computerweekly.com/news/366615892/Microsoft-UAE-power-deal-at-centre-of-US-plan-for-AI-supremacy), which gives us ‘Microsoft UAE power deal at centre of US plan for AI supremacy’, was hilarious for two reasons. The first is one that academics can agree on: there is not (yet) any such thing as AI (Artificial Intelligence), and personally I am smirking at the idea that Microsoft can actually spell the word correctly (howl of derisive laughter by silly old me). The start of the article gives us “Microsoft has struck an artificial intelligence (AI) energy deal with United Arab Emirates (UAE) oil giant ADNOC after a year of extraordinary diplomacy in which it was the vehicle for a US strategy to prevent a Chinese military tech grab in the Gulf region.” In this I am grinning at the thought that this is one way to give oil supremacy to Aramco, and that is merely the beginning of it. The second was the line “a US strategy to prevent a Chinese military tech grab in the Gulf region”, and it is my insight that this is a ticking clock. One tick, one tock, leading to one mishap, and Microsoft pretty much gives the store to China. And with that, Aramco laughingly watches from the sidelines. There is no ‘if’ in question; this is a mere shifting timeline, and with every day that timeline becomes a lot more worrying. Now, the first question you should ask is “Could he be wrong?” And the answer is yes, I could be wrong. However, the past settings of Microsoft show me to be correct. In all this, the funny part is that with the absence of AI, the line “a plan to become an AI superpower” becomes folly (at the very least). There are all kinds of spins out there and most are ludicrous. But several sources state “There are several reasons why General AI is not yet a reality. However, there are various theories as to why: The required processing power doesn’t exist yet. As soon as we have more powerful machines (or quantum computing), our current algorithms will help us create a General AI”, or words to that effect. Marketing the spin of AI does not make it so. And quantum computing is merely the start. Then we get the shallow circuit setting and, as I personally call it, the trinary operating system. You see, all computing is binary and the start of trinary is there. A Dutch scientist was able to prove the trinary particle (the Majorana particle). Getting that set in a real computing environment is the goal (for some). The trinary system creates the setting of an achievable real AI. The trinary system has four phases: NULL, TRUE, FALSE and BOTH. It is the BOTH part that binary systems cannot do yet; as such, any deeper machine learning system is flawed by human interference (aka programming, and data errors because of it). This is the timeline moment where we see the folly of Microsoft (et al).
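
To show what that fourth phase means in practice, here is a minimal sketch of a four-valued logic in Python. This is entirely my own illustration (no existing trinary hardware or API is being quoted); the point is that BOTH is a first-class value that binary logic simply has no slot for.

```python
# A minimal sketch of four-valued logic (my own illustration, not any
# existing trinary/quantum API). BOTH is the state binary logic lacks.
from enum import Enum

class T4(Enum):
    NULL = "null"    # no interaction at all
    FALSE = "false"
    TRUE = "true"
    BOTH = "both"    # simultaneously true and false

def t4_and(a: T4, b: T4) -> T4:
    """AND over the four values: NULL dominates, then FALSE, then BOTH."""
    if T4.NULL in (a, b):
        return T4.NULL
    if T4.FALSE in (a, b):
        return T4.FALSE
    if T4.BOTH in (a, b):
        return T4.BOTH
    return T4.TRUE

print(t4_and(T4.TRUE, T4.BOTH))   # T4.BOTH - a binary AND has no answer here
```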

So then we get to “It also entrenches Microsoft’s place at the crux of the environmental crisis, pledging to help one of the world’s largest oil firms use AI to become a net-zero producer of carbon emissions, while getting help in return in building renewable energy sources to feed the unprecedented demand that the data-centres powering its AI services have for electricity.” OK, not much to say against that. This is a business opportunity nicely worded by Microsoft. These are realistic goals that Deeper Machine Learning could meet, but that pesky setting has the novel hitch that people (programmers) need to make calls, and a call made in the name of AI still doesn’t make it so. As such, when that data error is found, the learning algorithms will need to be retrained. How much time lag does that give? And make no mistake, ADNOC will not tolerate that level of errors. It amounts to billions a day and the oil business is cut-throat. So when I state that Aramco is sitting on the sideline howling, I was not kidding. That is how I see this develop. Then we get “The same paradox was played out at the COP 28 climate conference in Dubai last December, while Microsoft prepared to ink a $1.5bn investment in UAE state-owned AI and data-centre conglomerate G42, where Sultan Ahmed Al Jaber, ADNOC oil chief, chaired a global agreement to ditch fossil fuels.” This is harder to oppose; it is pretty much an agreement between two parties. However, I wonder how the responsibilities of Microsoft are voiced, because it will hang on that. Perhaps Microsoft slipped one by ADNOC, but that is neither here nor there. You don’t become chief of ADNOC without protecting that company, so without the papers I cannot state this will get Microsoft in hot water. However, I am certain that any boast towards ‘miscommunication’ will hand the stables, the farm and the livestock (aka oil) right into the hands of China. You see, people will focus on the $1.5 billion investment by Microsoft, yet I wonder how long the errors go unspotted. That would be an error that could result in billions a day lost, and that is something Microsoft is unlikely to survive. Then there is the third player. You see, America angered China with the steps it has taken in the past. And I have no doubt that China will be keeping an eye on all this, and whilst some might want to ‘hide’ mishaps, China will be at the forefront of illuminating these mistakes. And these mistakes will rear their ugly heads. They always do, and the track record of Microsoft is not that great (especially when millions scrutinise your acts). It is like standing on a hill where the sand is kept stable on a blob of oil: it merely seems stable until someone walks on it, and the person walking there becomes the instability of it all. Not the most eloquent expression, but I think it works. Microsoft has been treading too much already, and now China feels aggrieved (not sure it is a valid feeling), but for China it matters and getting Microsoft to fail will be their only target. Well, that is all from me; looking at how this will go, I have a nice amount of popcorn ready to watch two players slug it out. In the meantime, Sultan Ahmed Al Jaber has merely one thought: “Did I deserve what is about to unfold?” And I can’t answer that, because it depends on the papers he co-signed and I never saw those papers, so I cannot give an honest response.

Let’s see how this fight unfolds in the media. Enjoy your day wherever you are (it is still Friday west of Ireland).


Filed under Finance, IT, Politics, Science

Is it more than buggy?

Very early this morning I noticed something. Apple had made a booboo. Now, this isn’t a massive booboo and many will hide behind the ‘glitch’ sentiment. But it happened just as I was reading some reports on AI (what they perceive to be AI) and things started to click into place. You see, AI (as I have said several times before) does not yet exist. We are short on several parts. Yes, machine learning and deeper machine learning exist and they are awesome, but there is an extremely dangerous hitch there: it is up to the programmer, and programmers are people. They will fail, and with that any data model connected to their work will fail. It always will.

So what set this off?
To see this, we need to see the image below.

It was 01:07 in the morning, just after one o’clock. The Apple widget gives us, on all 4 time zones, that it was today. Vancouver, minus 19 hours, making it 06:07 in the morning. Toronto, minus 16 hours, making it 09:07 in the morning. Amsterdam, minus 10 hours, making it 15:07 in the afternoon. And Riyadh, with its minus 8 hours, making it 17:07 in the afternoon. And all of them YESTERDAY. Now, we might look at this and think, no biggie, and I would agree. But the setting does not end there.
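
For those who want to check the arithmetic, here is a quick Python sketch. I am assuming the local clock was Sydney time (which is what the minus 19/16/10/8 offsets suggest) and the date here is just an example:

```python
# A quick check of the offsets above (assuming the local clock was Sydney
# time, which is what the minus-19/-16/-10/-8 hours suggest).
from datetime import datetime
from zoneinfo import ZoneInfo

# Example date only: 01:07 local, Sydney summer time (UTC+11).
local = datetime(2024, 1, 6, 1, 7, tzinfo=ZoneInfo("Australia/Sydney"))

for city, tz in [("Vancouver", "America/Vancouver"),
                 ("Toronto", "America/Toronto"),
                 ("Amsterdam", "Europe/Amsterdam"),
                 ("Riyadh", "Asia/Riyadh")]:
    there = local.astimezone(ZoneInfo(tz))
    print(f"{city}: {there:%H:%M} on {there:%d %b}")   # each lands on the previous day
```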

Now we get to the other part. Hungry as they are, all these firms are trying to get you into what they call ‘the AI field’, and their sales people are all pushing that stage as much as they can, because greed is never-ending and most sales people live off their commission.


Then there is Forbes giving us (at https://www.forbes.com/sites/joemckendrick/2024/01/04/not-data-driven-enough-ai-may-change-that/) ‘Not Data-Driven Enough? AI May Change That’, where we see “Eighty-eight percent of executives said that investments in data and analytics are a top priority, along with 63% for investments in generative AI.” To see my issue, we need to take a step back.

On May 27th 2023 the BBC reported (at https://www.bbc.com/news/world-us-canada-65735769) that Peter LoDuca, the lawyer for the plaintiff, got his material from a colleague at the same law firm. They relied on ChatGPT to get the brief ready. As such we get: “Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” Judge Castel wrote in an order demanding the man’s legal team explain itself. Now consider the first part. An affidavit is prepared by the current levels of machine learning and they get the date wrong (see the Apple example above). A potential mass murderer now gets off on a technicality because the levels of scrutiny are lacking. The last part of the case in court gives us “After “double checking”, ChatGPT responds again that the case is real and can be found on legal reference databases such as LexisNexis and Westlaw.” A court case for naught, and why? Because the technology isn’t ready yet. It is that simple.

The problem is a little bit more complex. You see, forecasting exists and it is decently mature, but it is used in the same breath as AI, which does not yet exist. There are (as I personally see it) no checks and balances. Scrutiny of the programmer seemingly goes away when AI is mentioned, and that is perhaps the largest flaw of all.

There is a start, but we are in its infancy. IBM created the quantum computer. It is still early days, but it exists. Let’s just say that in quantum computing they created the IBM XT of Quantum, with its version of the Intel 8088 processor. Compared to 1981 it was a huge step forward. What is currently still missing, due to infancy, are the shallow circuits; they are nowhere near ready yet. The other part missing is the Majorana particle, not yet ready for IT. The concept comes from a Dutch physicist (I forgot the name, but I mentioned it in previous blogs). I wrote about it on August 8th 2022, in a story called ‘Altering Image’. You see, that will change the field and it makes AI possible. In that setting the Dutch physicist sets the start differently. The new particle will allow for No, Yes, Both and None. It is the ‘Both’ setting of the particle that changes things. It will allow for gradual assumptions and gradual stage settings. Then we will have a new field, one that (together with quantum computing) allows an AI to grow on its data, not hindered (or at least a lot less hindered) by programmers and their programming. When these elements are there and completed to their first stage, an AI becomes a possibility. Not the one that sales people say it is, but what the forefather of AI (Alan Turing) said it would be, and then we will be there. IBM has the home field advantage, but until that happens it is anyone’s guess who gets there first.

So enjoy your day, and when you are personally hurt by an ‘AI’, don’t forget there is a programmer and their firm you could optionally sue for that part. Just a thought.

Enjoy THIS day.


Filed under IT, Law, Science

Pondering a path

It just hit me, I have no idea why, and I cannot vouch for the thought or prove any of it. I cannot say what happened. One moment I am contemplating the corrupting levels of the media, then my mind flashes towards an AI presentation by Robert Downey Junior, then this happens.

Consider the information we have in our heads. It does not matter what it is, it does not matter whose mind it is. It is information, yet the brain is a curious thing and I believe there is a path in our brains that is not really mapped. It is there, we merely haven’t found it yet. Perhaps it is stronger in some; perhaps the autistic have an answer, or at least some form of answer.

These paths are not set in any normal way; it is like our intuition. Take the definition “Intuition is a form of knowledge that appears in consciousness without obvious deliberation”. What if that is not complete, or perhaps an incorrect view? What if intuition is guided, yet guided by the autonomous part of our brain? What if it adheres to some form of fractal approach to data?

Consider the image. One part is actually a distorted image of paths, our normal thought processes based on available data, whatever data it might be. But the brain is taking a larger step to make sense of it. Almost like a whale, where “the clicking sequences have been suggested to be individualised rhythmic sequences that communicate the identity of a single whale to other whales in its group. This clicking sequences reportedly allow the groups to coordinate foraging activities”, yet what if it is more? Almost like a multi-dimensional organ? In physical modelling synthesis, the waveform of the sound to be generated is computed using a mathematical model, yet what if that goes further than the mere approach of ADSR? When we consider attack, decay, sustain and release in sound, we have the ability to reproduce any instrument with precision; what if the brain has its own form of that? It would not be sound based, but some form of chemical foundation, one that offers paths and choices, but only the brain can make them and it is much faster than our own train of thought. Consider the image:

The black background is our mind and the data it holds: the paths, the connections, a mere representation of what might be. But consider the amount of information we hold; over time it becomes a mess, it tends to. So what if the brain has another system, a more fractal approach to the amount of data (the red lines and points), and it connects to all that information in other ways? It is how our intuition connects to all that data of sounds, smells, images and feels, and it makes leaps. The red paths make for that: part of intuition, an unwavering set of paths that is controlled, not by us, but by the brain, its own shortcuts to all the mess we remember, and that is how it gains the upper hand (at times). That is what AI does not have, at least not yet, because we haven’t been able to map subconscious thinking for now; the brain is chemical-electrical, and only while alive is that system aware. I reckon that when we solve that one puzzle, AI becomes a reality really fast. IBM has the hardware (the quantum computer) and it is making strides into making shallow circuits a much larger part of it soon enough, but no matter how we slice it, no AI can self-determine, not without the one part that is missing and that I am representing as red lines and dots. But it is mere speculation, so when we consider a fractal approach, my representation is inadequate and faltering. For some reason the image broke through; I merely wonder why. Perhaps it was because my mind considered the contemplation that people like Aleksander Ceferin and Gianni Infantino were swines and the members of the Suidae family took offence. When I see

‘UEFA President Čeferin: ‘Spirit of solidarity’ makes football stronger than ever’, all whilst it was fear of losing income, and when someone told the media that they would lose billions, they all revolted, like pigs seeing their trough removed. And the media was ALL over that, were they not? What a waste of space. So my mind came up with the part I wrote about, and I have absolutely no scientific or other evidence that there is ANY validity in the thoughts I was having, but this is the place where I give light to these thoughts. Feel free to wave them away; I might have done the same thing, but there is something nagging in my brain and this is how it started. Perhaps there is more to come, I cannot tell.

But there you have it, time to saw another log, at least there is that and I can snore the day away today. 


Filed under Science

Is it real?

Yes, that is the question we all ask at times. In my case it is something my mind is working out, or at least trying to work out. The idea my mind is forming is “Is it the image of a vision, or is it a vision of an image?” One is highly useful, the other a little less so. The mind is using all kinds of ideas to corroborate this, and as such I wonder which it is. The first is a jigsaw. Consider a jigsaw: even when the images differ, the pieces are often far less different. One could argue that hundreds of jigsaws have interchangeable pieces; we merely do not consider them because the image is different, and for the most part, how many jigsaws have you ever owned? With this in the back of the mind, what happens when we have data snippets: a data template with several connectors, the specific id of the data, and the connector which indicates where the data comes from, both with date and time stamps? But like any jigsaw, what if we have hundreds of jigsaws and the pieces are interchangeable? What if the data system is a loom that holds all the data, but the loom reflects the image of the tapestry? What happens when we see all the looms, all the tapestries, and we identify the fibres as the individual users? What happens when we create new tapestries that are founded on the users? We think it is meaningless and useless, but is it? What if data centres have the ability to make new frameworks, to stage a setting that identifies the user and their actions? We talk about doing this, we claim to make such efforts, but are we? You see, as IBM completed its first quantum computer, and it now has a grasp on shallow circuits, the stage comes closer to having an actual AI in play. Not the one that IT marketing claims to have and salespeople say is in play, but an actual AI that can look into the matter. As this comes into play we will need a new foundation of data and a new setting to store and retrieve data. Everything that exists now is done for the convenience of revenue, a hierarchic system decades old, even if the carriers of such systems are in denial. The thinking requires us to thwart their silliness and think of the data of tomorrow, because the data of today will not suffice. No matter how blue Microsoft Italy claims it is, it just won’t do. We need tomorrow’s thinking cap on and we need to start considering that an actual new data system requires us to go back to square one and throw out all we have. It is the only way.

In this, we need to see data as blood cells: billions of individual snippets of data, each with a shell, connectors and a core. All that data moves in veins (computers) and it needs to be able to move from place to place, to be used by the body where the specific need is. And if biotech goes to places we have not considered, data will move too, and for now the systems are not ready. They are nowhere near ready, and as such my mind was spinning in silence as it considered a new data setup. A stage we will all need to address in the next 3-5 years, and if the energy stage evolves we need to set a different path on a few levels, and there we will need a new data setup as well. It is merely part of a larger system and data is at the centre of that. As such, if we want smaller systems, some might listen to Microsoft and their blue (Azure) system, but a smurf like that will only serve what Microsoft wants it to smurf. We need to look beyond that, beyond what makers consider of use, and consider what the user actually needs.
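
As a thought experiment only, here is a minimal Python sketch of what such a ‘blood cell’ snippet could look like. Every name in it is my own invention for illustration; no existing data platform is being described.

```python
# A minimal sketch of the "blood cell" snippet idea (my own illustration,
# not any existing data platform): a core of data, a shell of metadata,
# and connectors that say where it came from and where it may attach.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Connector:
    source: str          # where the data comes from
    created: datetime    # date and time stamp of the link

@dataclass
class Snippet:
    snippet_id: str      # the specific id of the data
    core: dict           # the payload itself
    shell: dict = field(default_factory=dict)   # context: owner, consent, locale
    connectors: list[Connector] = field(default_factory=list)

    def attach(self, source: str) -> None:
        """Add a new connector, stamped with the current time."""
        self.connectors.append(Connector(source, datetime.now(timezone.utc)))

cell = Snippet("a1", {"listing": "2-bed apartment"})
cell.attach("trulia")        # the same piece can later attach to other looms
cell.attach("local-news")
print(len(cell.connectors))  # 2
```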

Consider an app, a really useful app when you are in real estate: there is Trulia, and it is great for all the right reasons, but it relies on the connections it has made. So what happens when the user of this app wants another view around the apartment or house, one that is not defined by Yelp? What happens when we want another voice? For now we need to take a collection of steps, hoping that it will show results. But in the new setting, with the new snippets, there is a larger option to see a loom of connections in that location, around the place we investigate. More importantly, there is a lot more than Trulia envisioned. Why? Because it was not their mission statement to look at sports bars, grocery stores and so on; they rely on the Yelp link, and some want a local link, some want the local link that the local newspapers give. That level of freedom requires new thinking on data. It requires a completely new form of data model, and in 5G and later in 6G it will be everything, because in 4G it was ‘Wherever I am’, in 5G it becomes ‘Whenever I want it’, and the user always wants it now. In that place, some blue data system by laundry detergent Soft with Micro just does not cut it. It needs actual nextgen data, and such a system is not here yet. So if I speculate on 6G (pure speculation, mind you), it will become ‘However I need it’, and when you consider that, the data systems of today, and those claiming to have the data system of tomorrow, are nowhere near ready, and that is fine. It is not their fault (optionally we can blame their boards of directors), but we are looking at a new edge of technology and that is not always a clear stage. As such my mind was mulling a few things over, and this is the initial setting my mind is looking at.

So, as such, we need to think about what we actually need in 5 years, because if the apps we create are our future, the need to ponder what data we embrace matters for whether we have any future at all.

Well, have a great Easter and plenty of chocolate eggs.


Filed under IT, Science

News, fake news, or else?

Yup, that is the statement I am going for today. You see, at times we cannot tell one from the other, and the news is making it happen. OK, that seems rough, but it is not, and in this particular case it is not an attack on the news or the media; as I see it they are suckered into this false sense of security, mainly because the tech hype creators are part of the problem. As I personally see it, this came to light when I saw the BBC article ‘Facebook’s Instagram ‘failed self-harm responsibilities’’. The article (at https://www.bbc.com/news/technology-55004693) was released 9 hours ago and my blinkers went red when I noticed “This warning preceded distressing images that Facebook’s AI tools did not catch”. You see, there is no AI. It is a hype, a ruse, a figment of greedy industrialists, and to give you more than merely my point of view, let me introduce you to ‘AI Doesn’t Actually Exist Yet’ (at https://blogs.scientificamerican.com/observations/ai-doesnt-actually-exist-yet/). Here we see some parts written by Max Simkoff and Andy Mahdavi. Here we see “They highlight a problem facing any discussion about AI: Few people agree on what it is. Working in this space, we believe all such discussions are premature. In fact, artificial intelligence for business doesn’t really exist yet”. They also go with a paraphrased version of Mark Twain: “reports of AI’s birth have been greatly exaggerated”. I gave my version in a few blogs before: the need for shallow circuits, the need for a powerful quantum computer. IBM have a few in development and they are far along, but they are not there yet, and that is merely the top of the cream, the icing on the cake.

Yet these two give the goods in a more eloquent way than I ever did: “Organisations are using processes that have existed for decades but have been carried out by people in longhand (such as entering information into books) or in spreadsheets. Now these same processes are being translated into code for machines to do. The machines are like player pianos, mindlessly executing actions they don’t understand”, and that is the crux: understanding and comprehension. It is required in an AI, and that level of computing does not now exist, nor will it for at least a decade. Then they give us “Some businesses today are using machine learning, though just a few. It involves a set of computational techniques that have come of age since the 2000s. With these tools, machines figure out how to improve their own results over time”. It is part of AI, but merely a part, and it seems the wielders of the AI term are unwilling to learn, possibly because they can charge more, a setting we have never seen before, right? And after that we get “AI determines an optimal solution to a problem by using intelligence similar to that of a human being. In addition to looking for trends in data, it also takes in and combines information from other sources to come up with a logical answer”, which as I see it is not wrong, but not entirely correct either (from my personal point of view). I see it as: “an AI has the ability to correctly analyse, combine and weigh information, coming up with a logical or pragmatic solution towards the question asked”. This is important: the question asked is the larger problem. The human mind has this auto-assumption mode, a computer does not. There is the old joke that an AI cannot weigh data as it does not own a scale. You think it is funny, and it is, but it is the foundation of the issue.
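
To make the ‘weigh information’ part a little more tangible, here is a toy Python sketch of a weighted combination of sources. It is purely my own illustration (the source names and reliability numbers are made up); the point is that weighting is arithmetic, not comprehension.

```python
# A toy sketch of "weighing" information from several sources (my own
# illustration of the distinction, not anyone's product): each source has a
# reliability, and the combined answer is just a weighted vote.
def weigh(claims: dict[str, tuple[bool, float]]) -> float:
    """claims maps source -> (says_true, reliability in 0..1).
    Returns a score in -1..1: positive leans true, negative leans false."""
    total = sum(rel for _, rel in claims.values())
    score = sum((1 if val else -1) * rel for val, rel in claims.values())
    return score / total if total else 0.0

claims = {
    "newswire": (True, 0.9),
    "forum":    (False, 0.2),
    "archive":  (True, 0.7),
}
print(weigh(claims))   # ~0.778 - leans true, but it is weighting, not comprehension
```
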
The fun part is that we saw this played out by Stanley Kubrick in his version of Arthur C Clarke’s 2001: A Space Odyssey. It is the conflicting instruction that HAL 9000 had received; the crew was unaware of a larger stage of the process, and then comes the need to “resolve a conflict between his general mission to relay information accurately and orders specific to the mission requiring that he withhold from Bowman and Poole the true purpose of the mission”, which has the unfortunate result that astronaut Poole goes the way of the Dodo. It matters because there are levels of data that we have yet to categorise, and in this the AI becomes as useful as a shovel at sea. This coincides with my hero the Cheshire Cat: ‘When is a billy club like a mallet?’ The AI cannot fathom it because it does not know the Cheshire Cat or the thoughts of Lewis Carroll, and the less said to the AI about Alice Kingsleigh the better. Yet that also gives us the part we need to see: dimensionality, weighing data from different sources and knowing the multiple uses of a specific tool.

You see, a tradie knows that a monkey wrench is optionally also useful as a hammer; an AI will not comprehend this, because the data is unlikely to be there, the AI programmer is lacking knowledge and skills, and the optional metrics and size of the monkey wrench are missing. All elements that a true AI can adapt to: it can weigh data, it can surmise additional data, and it can aggregate and dimensionalise data. Automation cannot, and when you see this little side quest you start to consider “I don’t think the social media companies set up their platforms to be purveyors of dangerous, harmful content but we know that they are and so there’s a responsibility at that level for the tech companies to do what they can to make sure their platforms are as safe as is possible”. As I see it, this is only part of the problem. The larger issue is that there are no actions against the poster of the materials; that is where politics falls short. This is not about freedom of speech and freedom of expression. This is a stage where (optionally with intent) people are placed in danger and the law is falling short (and has been falling short for well over a decade). Until that is resolved, people like Molly Russell will just have to die. If that offends you? Good! Perhaps that makes you ready to start holding the right transgressors to account. Places like Facebook might not be innocent, yet they are not the real guilty parties here, are they? Tech companies can only do so much, and that failing has been seen by plenty for a long time. So why is Molly Russell dead? Finding the posters of this material and making sure that they are publicly put to shame is the larger need; their mommy and daddy can cry ‘foul play’ all they like, but the other parents are still left with the grief of losing Molly. I think it is time we do something real about it and stop wasting time blaming automation for something it is not. It is not an AI. Automation is a useful tool, no one denies this, but it is not some life-altering reality. It really is not.


Filed under IT, Law, Media, Politics, Science

About lights and tunnels

If we take on the change of new technology (like 5G), we need to feel we are in charge. We tend to forget that part (I surely did at some point), and whilst I was considering a different form of new IP, I recognised that the thought came from a direction where my knowledge is not that great. I am no expert on the technology of 5G, I never claimed to be. So when my mind grew towards a new form of mobile security for 5G+ or even 6G, my mind set an image, yet the stages of routing, ciphering and deciphering waves are not where my expertise lies; forms of the solution simply come to me. I am not a mathematician, so I see images: images of clockworks, clockworks of gun cylinders, and they intersect. 7, 9 and 11 shooters, cylinders of different properties intersecting. What do you get when there are n 7-cylinders, all with different time settings, n 9-cylinders and n 11-cylinders? A larger stage of frequencies and cut stages that are linked, all set in an algorithm via a new form of routing. The result is a new stage of mobile communication that cannot be hacked; until true AI and true quantum computing are a fact, shallow circuits cannot cut through the mesh. A new stage of true privacy, and at present Google and Huawei are the only ones even close to setting this up. Even though they have the juice, they will need someone like Cisco to pull some of the weight.

It would also mean a different stage for the mobile phone. I remember the old walkie-talkies in the 60s. The more advanced models had several crystals so that there was a unique signal. I wondered what we could do to emphasise privacy in today’s mobile setting. Instead of crystals, we have a mobile phone. It is a transmitter, but what happens when it is not set to one band, but can be set to 7, 9, or 11 separate frequencies? A sort of time slice, and that is the beginning. The carrier will give you the connection with the slices, their routers will set the connection, and unless the hacker has the set, they can never get the entire conversation. Unless they have every connection, and then they would need to unscramble thousands of phones, depending on whether the hardware used 7, 9 or 11 parts. If I get it to work in my mind, it could signal a new age of real privacy for people with a mobile phone.
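
Here is a toy sketch of that time-slice idea in Python. It is my own illustration only; real frequency-hopping schemes (Bluetooth uses one, for instance) are far more involved, and the secret and channel counts here are made up.

```python
# A toy sketch of the 7/9/11 time-slice idea (my own illustration; real
# frequency-hopping schemes are far more involved). Both ends derive the
# same hop sequence from a shared secret; an eavesdropper parked on one
# frequency only ever hears a fraction of the slices.
import hashlib

def hop_sequence(secret: bytes, n_channels: int, slots: int) -> list[int]:
    """Deterministic pseudo-random channel per time slot, from a shared secret."""
    seq = []
    for slot in range(slots):
        digest = hashlib.sha256(secret + slot.to_bytes(4, "big")).digest()
        seq.append(digest[0] % n_channels)
    return seq

secret = b"shared-between-phone-and-carrier"
print(hop_sequence(secret, 7, 10))    # identical on both ends of the call
print(hop_sequence(secret, 11, 10))   # an 11-channel handset hops differently
```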

But in the end, it is merely a sideline towards more interesting IP. The idea hit me when I was looking at a real estate site; which one does not matter. I was merely curious. It all started with a special by Piers Morgan. He made a special on Monte Carlo and I was curious, as I had never been there. So as I got curious, I took a look and I noticed that speed was an interesting flaw. Even on a mobile, the place where well over 50% of all searches are done, it took nearly forever. Yet when I used the Google tester (at https://search.google.com/test/mobile-friendly) the site passed the test. It made perfect sense, yet the delay was real. I do not think it was them, or me. But it got me thinking of a different approach.
Google has had a setting for this for a long time; they call it the Lightbox ad. I had another use for the ad, or as I would call it, another media container. But the media container would require a different use; it would require the user to take a different approach. Not that this would be bad, but it would optionally reduce the bandwidth they use. The app links to the listing on the site, yet when we look, the app gets the link to the media container on the Google server, so the real estate data needs are not going via the offerer, they go via the seeker, and they are either really seeking or merely browsing. The browsers will no longer impede on the business, the seekers will not notice, and these media containers can all be used for advertising all over the place; it is up to the realtor which ones are ready for that. And there is the larger kicker: it is a setting that (as far as I can tell) no realtor has considered, and that is where the larger stage comes. Because when 5G hits, the realtor will see a much larger benefit. They would not need to update (other than optionally an app), they will be ready, and they can push their needs via their site, an app and Google Ads: three directions instead of one, and it will be a larger stage when no one else was thinking ahead.

There is light at the end of the tunnel. I switched on the lights, and no one cares who switched them on, and that is OK. It is just that no one realised the lights were not on, and that should leave you with the consideration of why no one realised that.

 


Filed under IT, Science

The Lie of AI

The UK Home Office has just announced plans to protect paedophiles for well over a decade, and they are paying millions to make it happen. Are you offended yet? You should be. The article (at https://www.theguardian.com/technology/2019/sep/17/home-office-artificial-intelligence-ai-dark-web-child-sexual-exploitation) is giving you that, yet you do not realise that they are doing it. The first part is ‘Money will go towards testing tools including voice analysis on child abuse image database‘, the second part is “Artificial intelligence could be used to help catch paedophiles operating on the dark web, the Home Office has announced”. These two are the guiding parts in this, and you did not even know it. To be able to understand this there are two parts. The first is an excellent article in the Verge (at https://www.theverge.com/2019/1/28/18197520/ai-artificial-intelligence-machine-learning-computational-science); the second part is: ‘AI does not exist!’

The important fact is that AI will become a reality at some point, in perhaps a decade, yet the two elements essential to AI have not been completed. The first is quantum computing. IBM is working on it, and they admit: “For problems above a certain size and complexity, we don’t have enough computational power on Earth to tackle them.” This is true enough and fair enough. They also give us: “it was only a few decades ago that quantum computing was a purely theoretical subject“. Two years ago (yes, only two years ago) IBM gave us a new state, a new stage in quantum computing, where we see a “necessary brick in the foundation of quantum computing. The formula stands apart because unlike Shor’s algorithm, it proves that a quantum computer can always solve certain problems in a fixed number of steps, no matter the increased input. While on a classical computer, these same problems would require an increased number of steps as the input increases”. This is the first true step towards creating AI. As what you think is AI grows, the data alone creates an increased number of steps down the line; coherency and comprehension become floating and flexible terms, whilst comprehension is not flexible, comprehension is a set stage, and without ‘Quantum Advantage with Shallow Circuits‘ it basically cannot exist. In addition, this year we get the IBM Q System One, the world’s first integrated quantum computing system for commercial use. We could state this is the first truly innovative computer acceleration in decades and it has arrived in a first version, yet there is something missing, and we get to stage two later.

Now we get to the Verge.

‘The State of AI in 2019‘, published in January this year, gives us the goods, and it is an amazing article to read. The first truth is “the phrase “artificial intelligence” is unquestionably, undoubtedly misused, the technology is doing more than ever — for both good and bad“. The media is all about hype, and the added stupidity given to us by politicians connects the worst of both worlds: they are clueless, and they are trying their dumb and clueless approach on the worst group of people, the paedophiles, and they are paying millions to do what cannot be accomplished at present.

Consider a computer or a terminator that is super smart, like in the movies, and consider “a sci-vision of a conscious computer many times smarter than a human. Experts refer to this specific instance of AI as artificial general intelligence, and if we do ever create something like this, it’ll likely to be a long way in the future”. That is the direct situation, yet there is more.

The quote “Talk about “machine learning” rather than AI. This is a subfield of artificial intelligence, and one that encompasses pretty much all the methods having the biggest impact on the world right now (including what’s called deep learning)” is very much at the core of it all. Machine learning exists, it is valid, and it is the part actually happening. Yet without quantum computing we are confronted with the earlier stage, ‘on a classical computer, these same problems would require an increased number of steps as the input increases‘, so all that data delays and delays and stops progress. This is a direct issue. Then we also need to consider “you want to create a program that can recognize cats. You could try and do this the old-fashioned way by programming in explicit rules like “cats have pointy ears” and “cats are furry.” But what would the program do when you show it a picture of a tiger? Programming in every rule needed would be time-consuming, and you’d have to define all sorts of difficult concepts along the way, like “furriness” and “pointiness.” Better to let the machine teach itself. So you give it a huge collection of cat photos, and it looks through those to find its own patterns in what it sees”. This learning stage takes time, yet down the track it becomes awfully decent at recognising what a cat is and what is not a cat. That takes time, yet the difference is that we are seeking paedophiles, so that same algorithm is used not to find a cat, but to find a very specific cat. Yet we cannot tell it the colour of its pelt (because we do not know), we cannot tell it the size, shape or age of that specific cat. Now you see the direct impact of how delusional the idea from the Home Office is. Indirectly we also get the larger flaw. Learning for computers comes in a direct version and an indirect version, and we can put both in the same book: Programming for Dummies! You see, we feed the computer facts, but as it is unable to distinguish true facts from false facts, we see a larger failing. The computer might start to look in the wrong direction, pointing out the wrong cat, making the police chase and grab the wrong cat, and when that happens, the real paedophile has already hidden himself again. Deep Learning can raise flags all over the place and it will do a lot of good, but in the end a system like that will be horribly expensive, and paying 100 police officers for 20 years to hunt paedophiles might cost the same and will yield better results.
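
To show the contrast in that Verge quote in the smallest possible form, here is a toy Python sketch. The features and numbers are entirely made up by me for illustration; this is not a real vision system.

```python
# A toy sketch of the rules-versus-learning contrast in the quote above
# (my own illustration with made-up features, not a real vision system).
from sklearn.linear_model import LogisticRegression

# Hand-written rule: "cats have pointy ears and are furry".
def rule_based_is_cat(pointy_ears: int, furry: int) -> bool:
    return bool(pointy_ears and furry)

print(rule_based_is_cat(1, 1))   # True - but a tiger scores True as well

# Learned version: features are [pointy_ears, furry, weight_kg].
X = [[1, 1, 4], [1, 1, 5], [0, 1, 30], [1, 1, 190], [0, 0, 2]]
y = [1, 1, 0, 0, 0]              # 1 = cat; the 190 kg row is the tiger
model = LogisticRegression().fit(X, y)
print(model.predict([[1, 1, 4.5]]))   # [1] - weight separates cat from tiger
```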

All of that is contained in the quote: “Machine learning systems can’t explain their thinking, and that means your algorithm could be performing well for the wrong reasons”. More importantly, it will be performing for the wrong reasons on wrong data, making the learning process faulty and flawed to a larger degree.

The article ends with “In the here and now, artificial intelligence — machine learning — is still something new that often goes unexplained or under-examined”, which is true. More importantly, it is not AI, a fact we were not really informed about. There is no AI at present, nor for some time to come, and it makes us wonder about the Guardian headline ‘Home Office to fund use of AI to help catch dark web paedophiles‘: how much funding? And the term ‘use of AI‘ requires it to exist, which it does not.

The second missing item.

You think that I was kidding, but I was not. Even as the quantum phase is seemingly here, its upgrade does not exist yet, and that is where true AI becomes an optional futuristic reality. This stage is called the Majorana particle, a particle that is both matter and antimatter (the ability to be both positive and negative), and one of the leading scientists in this field is Dutch physicist Leo Kouwenhoven. Once his particle becomes a reality in quantum computing, we get a new stage of shallow circuits; we get a stage where fake news, real news, positives and false positives are treated in the same breath and the AI can distinguish between them. That stage is decades away. At that point the paedophile can create whatever paper trail he likes; the AI will be worse than the most ferocious bloodhound imaginable and will see the fake trails faster than a paedophile can create them. It will merely get the little pervert caught faster.

The problem is that this is decades away, so someone should really get some clarification from the Home Office on how AI will help, because there is no way that it will actually do so before the government budget of 2030. What will we do in the meantime, and what funds were spent to get nothing done? When we see: “pledged to spend more money on the child abuse image database, which since 2014 has allowed police and other law enforcement agencies to search seized computers and other devices for indecent images of children quickly, against a record of 14m images, to help identify victims“, in this we also get “used to trial aspects of AI including voice analysis and age estimation to see whether they would help track down child abusers“. So when we see ‘whether they would help‘, we see a shallow case, so shallow that the article in the Verge, well over half a year old, should indicate that this is all water down the drain. And the amount (according to Sajid Javid) is set at “£30m would be set aside to tackle online child sexual exploitation“. I am all for the goal and the funds, yet when we realise that AI is not getting us anywhere and Deep Learning only gets us so far, and we also consider “trial aspects of AI including voice analysis and age estimation”, we see a much larger failing. How can voice analysis help and how is this automated? And as for the term ‘trial aspects of AI‘, something that does not exist, I wonder who did the critical read on a paper allowing £30 million to be spent on a stage that is not relevant. Getting 150 detectives for 5 years to hunt down these bastards might be cheaper and, in the end, a lot more results-driven.

At the end of the article we see the larger danger that is not part of AI, when we see: “A paper by the security think-tank Rusi, which focused on predictive crime mapping and individual risk assessment, found algorithms that are trained on police data may replicate – and in some cases amplify – the existing biases inherent in the dataset“. In this Rusi is right. It is about data, and the data cannot be staged or set against anything, which makes for a flaw in deep learning as well. We can teach a system what a cat is by showing it 1,000 images, yet how are the false images recognised (panther, leopard, or possum)? That stage seems simple with cats; with criminals it is another matter. Comprehension and looking past data (showing insight and wisdom) is a far stretch for AI (when it is there), and machine learning and deeper learning are not ready to this degree at present. We are nowhere near ready, and the first commercial quantum computer was only released this year. I reckon that whenever a politician uses AI as a term, he is either stupid, uninformed, or he wants you to look somewhere else (avoiding actual real issues).

For now, the hypes we see are more often than not the lie of AI: something that will come, but unlikely to be seen before the PS7 sets new sales records, which is still many years away.

 


Filed under Finance, IT, Media, Politics, Science