When one door closes

Yes, that is the stage I find myself in. However, I could say that when one door closes, someone gets to open a window. Even though I am eager to give you that story now, I will await the outcome from Twitter (which blocked my account); that outcome will support the article, which is nice because it makes for an entertaining story. It did, however, make me wonder about a few things. You see, AI does not exist. It is machine learning and deeper learning, and that is an issue for the following reasons.

Deep learning requires large amounts of data. Furthermore, the more powerful and accurate models will need more parameters, which, in turn, require more data. Once trained, deep learning models become inflexible and cannot handle multitasking.

This leads to: 

Massive Data Requirement. As deep learning systems learn gradually, massive volumes of data are necessary to train them. This gives us a rather large setting: people are more complex, so it will require more data to model them, and the result is, as many say, an inflexible setting. I personally blame the absence of shallow circuits, but what do I know? There is also the larger issue of paraphrasing. There is an old joke: “Why can a program like SAP never succeed?” “Because it is about a stupid person with stress, anxiety and pain.” Until someone teaches the system that SAP is also medical shorthand for Stress, Anxiety and Pain, and that ‘sap’ in the urban dictionary means a stupid person, or a foolish and gullible person, the joke falls flat.
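
To make that ambiguity concrete, here is a minimal toy sketch of my own (no real system’s code, just an illustration): a lookup that only knows one sense of ‘SAP’ can never land the joke, because the joke lives in the overlap of all three senses.

```python
# Toy illustration: the joke only works when every sense of the word is known.
SENSES = {
    "sap": [
        ("software", "SAP, the enterprise software suite"),
        ("medical", "Stress, Anxiety and Pain"),
        ("urban", "a stupid, foolish or gullible person"),
    ],
}

def interpret(word, known_domains):
    """Return every sense of `word` that this particular system was taught."""
    return [gloss for domain, gloss in SENSES.get(word.lower(), [])
            if domain in known_domains]

# A system trained only on business data sees a single meaning...
print(interpret("SAP", {"software"}))
# ...while the joke needs the medical and the urban senses as well.
print(interpret("SAP", {"software", "medical", "urban"}))
```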

And that gets me to my setting (I could not wait that long). The actor John Barrowman hinted that he will be in the new Game of Thrones series (House of the Dragon); he did this by showing an image of the flag of House Stark.

I could not resist and asked him whether we will see his head on a pike, and THAT got me thrown from Twitter (or taken from the throne of Twitter). Yet ANYONE who followed Game of Thrones will know that Sean Bean’s head was placed on a pike at the end of season 1, so I thought it was funny, and when you think of it, it is. But that got me banned. So was it John Barrowman who felt threatened? I doubt that, but I cannot tell, because the reason this tweet caused the block is currently unknown. If it was machine learning and deeper learning, we see its failure. Putting one’s head on a pike could be threatening behaviour, but it referred to a previous tweet, and either the investigator didn’t get it, the system didn’t get it, or the actor didn’t do his homework. I leave it up to you to figure it out. Optionally my sense of humour sucks; that too is an option. But if you saw the emojis after the text, you could figure it out.

High Processing Power. Another issue with deep learning is that it demands a lot of computational power. This is another side: with each iteration of data the demand increases. If you did statistics in the 90’s, you would know that CLUSTER analyses had a few setbacks, the memory needs being one of them; it resulted in the creation of QUICKCLUSTER, something that could manage a lot more data. So why use the cluster example?
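
Before I answer that, a small aside to make the memory point concrete. This is only a rough sketch in Python (not the old SPSS procedures themselves, and the numbers are arbitrary): hierarchical clustering, the CLUSTER approach, has to hold every pairwise dissimilarity in memory, which grows with the square of the number of cases, while a QUICKCLUSTER-style k-means only carries the cases and a handful of cluster centres.

```python
# Rough sketch: why hierarchical clustering hits memory walls that
# k-means (the QUICKCLUSTER approach) largely avoids.
import numpy as np
from scipy.cluster.hierarchy import linkage   # hierarchical: needs all pairwise distances
from scipy.cluster.vq import kmeans2          # k-means: only needs cases + centres

n_cases, n_vars = 2_000, 10
data = np.random.rand(n_cases, n_vars)

# Hierarchical clustering works from a condensed dissimilarity matrix:
# n*(n-1)/2 distances, so memory grows quadratically with the number of cases.
pairwise_distances = n_cases * (n_cases - 1) // 2
print(f"hierarchical: {pairwise_distances:,} pairwise distances to hold")

clusters_hier = linkage(data, method="average")   # fine at 2,000 cases, painful at 2,000,000

# k-means only carries the data and k centres, so it scales to far more cases.
centres, labels = kmeans2(data, k=5, seed=42)
print(f"k-means: {centres.shape[0]} centres for {n_cases:,} cases")
```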

Cluster analysis is a way of grouping cases of data based on the similarity of responses to several variables. There are two types of measure: similarity coefficients and dissimilarity coefficients. And especially in the old days, memory was hard to get, and the work needs to be done in memory. And here we see the first issue: ‘the similarity of responses to several variables’, where we determine which variables count as a response. But in the SAP example, the response depends on someone with medical knowledge and someone with urban knowledge of English, and if these are two different people the joke quickly falls flat, especially when these two elements do not exchange information. In my example of John Barrowman, WE ALL assume that he does his homework (he has done this in so many instances, so why not now), so we are willing to blame the algorithm. But did that algorithm see the image John Barrowman gave us all? Does the algorithm know the ins and outs of Game of Thrones? All elements, and I would jest (yes, I cannot stop) that these are all elements of dissimilarity; as such, 50% of the cluster fails right off the bat, and that gets us to…

Struggles With Real-Life Data. Yes, deeper learning struggles with real-life data because the data is given in the width of the field of observation. For example, if we were to ask a plumber, a butcher and a veterinarian to describe the uterus of any animal, we get three very different answers, and there is every chance that the three people do not understand the explanations of the other two. A real-life example of real-life settings, and that is before paraphrasing comes into play; it merely makes the water a lot muddier.

Black Box Problems. And here the plot thickens. You see, at the most basic level, “black box” just means that, for deep neural networks, we don’t know how all the individual neurons work together to arrive at the final output. A lot of the time it isn’t even clear what any particular neuron is doing on its own. I tend to call this “a precise form of fuzzy logic”, and I could be wrong on many counts, but that is how I see it. You see, why did deeper learning learn it like this? It is an answer we will never get. It becomes too complex, and now consider: “a black box exists due to bizarre decisions made by intermediate neurons on the way to making the network’s final decision. It’s not just complex, high-dimensional non-linear mathematics; the black box is intrinsically due to non-intuitive intermediate decisions.” There is no right, no wrong. It is how it is, and that is how I see what I now face: the person or system just doesn’t get it for whatever reason. A real AI could have seen a few more angles, and as it grows it will see all the angles and get to the right conclusion faster and faster. A system built on machine learning or deeper learning will never get it; it will get more and more wrong, because it is adjusted by a person, and if that person misses the point the system will miss the point too. Like a place like Gamespot, all flawed because a conclusion was reached on flawed information.

This is why we have no AI: the elements of shallow circuits and quantum computing are still in their infancy. But salespeople do not care, the term AI sells and they need sales. This is why things go wrong; no one will muzzle the salespeople.
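
Coming back to the “black box” part for a moment, here is a minimal toy network of my own (nothing to do with Twitter’s moderation systems) to show what I mean: every intermediate neuron can be read out, yet the numbers still do not tell you why the network decided what it decided.

```python
# Toy "black box": every intermediate activation is visible,
# yet none of the numbers explain the final decision on their own.
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network with arbitrary weights (stand-ins for a trained model).
W1, b1 = rng.normal(size=(4, 8)), rng.normal(size=8)
W2, b2 = rng.normal(size=8), rng.normal()

def forward(x):
    hidden = np.maximum(0, x @ W1 + b1)              # 8 intermediate "neurons" (ReLU)
    output = 1 / (1 + np.exp(-(hidden @ W2 + b2)))   # sigmoid: the final decision
    return hidden, output

x = np.array([0.2, -1.3, 0.7, 0.05])                 # say, features of a tweet
hidden, output = forward(x)

print("intermediate neurons:", np.round(hidden, 2))
print("decision:", "block" if output > 0.5 else "allow", float(output))
# We can inspect every number above, but nothing in them says which
# combination of inputs tipped the decision, or why. That is the black box.
```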

In the end shit happens, that is the setting, but the truth of the matter is that too many people embrace AI, a technology that does not exist. They call it AI, but it is a fraction of AI and as such it is flawed, but that is a side they do not want to hear. It is a technology in development. This is what you get when the ‘fake it until you make it’ crowd is in charge: a flaw that evolves into a larger flaw until that system buckles.

But it gave me something to write about, so it is not all a loss, merely that my Twitter peeps will have to do without me for a little while. 
