Tag Archives: Machine learning

When one door closes

Yes, that is the stage I find myself in. However, I could say that when one door closes, someone gets to open a window. Yet even as I am eager to give you that story now, I will await the outcome with Twitter (which blocked my account), and that outcome will support the article. Which is nice, because it makes for an entertaining story. It did, however, make me wonder about a few parts. You see, AI does not exist. It is machine learning and deeper learning, and that is an issue for the following reasons.

Deep learning requires large amounts of data. Furthermore, the more powerful and accurate models will need more parameters, which, in turn, require more data. Once trained, deep learning models become inflexible and cannot handle multitasking.

This leads to: 

Massive Data Requirement. As deep learning systems learn gradually, massive volumes of data are necessary to train them. This gives us a rather large setting: as people are more complex, it will require more data to train on them, and the educational result is, as many say, an inflexible one. I personally blame the absence of shallow circuits, but what do I know? There is also the larger issue of paraphrasing. There is an old joke: “Why can a program like SAP never succeed?” “Because it is about a stupid person with stress, anxiety and pain.” Until someone teaches that system that SAP is also a medical abbreviation for Stress, Anxiety and Pain, and until it understands that ‘sap’ in the urban dictionary is a stupid person, or a foolish and gullible person, the joke falls flat.
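To make that concrete, here is a minimal Python sketch of my own (the sense inventory is made up for illustration, it is not from any dictionary or model mentioned here) of why the SAP joke needs data for every sense of a word: a naive lookup only knows the senses it was given, so the joke falls flat until both the medical and the urban meaning are in its training data.

```python
# Hypothetical toy word-sense lookup; the sense entries are invented for illustration.
SENSES = {
    "sap": ["software company (SAP SE)"],   # the only sense the system was trained on
}

def interpret(term: str) -> list[str]:
    """Return every sense the system knows; missing senses mean the pun is lost."""
    return SENSES.get(term.lower(), ["<unknown term>"])

print(interpret("SAP"))   # ['software company (SAP SE)'] — medical and urban senses missing

# Only after feeding it more data does the pun become resolvable.
SENSES["sap"] += ["Stress, Anxiety and Pain (medical)", "a foolish or gullible person (urban)"]
print(interpret("SAP"))   # now all three senses are available, and the joke can land
```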

And that gets me to my setting (I could not wait that long). The actor John Barrowman hinted that he will be in the new Game of Thrones series (House of the Dragon); he did this by showing an image of the flag of House Stark.

I could not resist and asked him whether we will see his head on a pike, and THAT got me thrown from Twitter (or taken from the throne of Twitter). Yet ANYONE who followed Game of Thrones will know that Sean Bean’s head was placed on a pike at the end of season 1, as such I thought it was funny, and when you think of it, it is. But that got me banned. So was it John Barrowman who felt threatened? I doubt that, but I cannot tell, because the reason why this tweet caused the block is currently unknown. If it is machine learning and deeper learning, we see its failure. Putting one’s head on a pike could be threatening behaviour, but it came from a previous tweet and the investigator didn’t get it, the system didn’t get it, or the actor didn’t do his homework. I leave it up to you to figure it out. Optionally my sense of humour sucks, that too is an option. But if you see the emojis after the text you could figure it out.

High Processing Power. Another issue with deep learning is that it demands a lot of computational power. This is another side. With each iteration of data the demand increases. If you did statistics in the 90s you would know that CLUSTER analysis had a few setbacks, the memory requirement being one of them; it resulted in the creation of QUICKCLUSTER, something that could manage a lot more data. So why use the cluster example?

Cluster analysis is a way of grouping cases of data based on the similarity of responses to several variables. There are two types of measure: similarity coefficients and dissimilarity coefficients. And especially in the old days memory was hard to get, and the work needs to be done in memory. And here we see the first issue: ‘the similarity of responses to several variables’, because here we determine the variables of response. But in the SAP example the response depends on someone with medical knowledge and someone with urban knowledge of English, and if these are two different people the joke quickly falls flat, especially when these two elements do not exchange information. In my example of John Barrowman WE ALL assume that he does his homework (he has done so in so many instances, so why not now), so we are willing to blame the algorithm. But did that algorithm see the image John Barrowman gave us all, and does the algorithm know the ins and outs of Game of Thrones? All elements, and I would jest (yes, I cannot stop) that these are all elements of dissimilarity; as such 50% of the cluster fails right off the bat, and that gets us to…
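For readers who never touched statistics software in the 90s, the memory problem is easy to show. A hierarchical CLUSTER run needs the full case-by-case distance matrix in memory, which grows with the square of the number of cases, while a QUICKCLUSTER-style (k-means) pass only keeps the k cluster centres. The Python sketch below is my own back-of-the-envelope illustration, not any vendor's implementation, and the byte counts ignore all overhead.

```python
# Rough memory comparison: hierarchical clustering vs. a k-means style pass.
# Assumes 8-byte doubles; the numbers are illustrative only.

def hierarchical_bytes(n_cases: int) -> int:
    # Full pairwise distance matrix: n * (n - 1) / 2 distances must sit in memory.
    return 8 * n_cases * (n_cases - 1) // 2

def quickcluster_bytes(n_variables: int, k: int) -> int:
    # Only the k cluster centres (one mean per variable) stay in memory.
    return 8 * n_variables * k

for n in (1_000, 100_000):
    print(f"{n:>7} cases: hierarchical ~{hierarchical_bytes(n)/1e6:10.1f} MB, "
          f"k-means (k=10, 20 vars) ~{quickcluster_bytes(20, 10)/1e3:.1f} KB")
```

That quadratic growth is why the hierarchical approach choked on the hardware of the day, while the k-means variant scaled to far more cases.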

Struggles With Real-Life Data. Yes, deeper learning struggles with real-life data because it is given in the width of the field of observation. For example, if we were to ask a plumber, a butcher and a veterinarian to describe the uterus of any animal, we get three very different answers, and there is every chance that the three people do not understand the explanation of the other two. A real-life example of real-life settings, and that is before paraphrasing comes into play; it merely muddies the water a lot more.

Black Box Problems. And here the plot thickens. You see, at the most basic level “black box” just means that, for deep neural networks, we don’t know how all the individual neurons work together to arrive at the final output. A lot of the time it isn’t even clear what any particular neuron is doing on its own. Now I tend to call this “a precise form of fuzzy logic”, and I could be wrong on many counts, but that is how I see it. Why did deeper learning learn it like this? It is an answer we will never get. It becomes too complex, and now consider “a black box exists due to bizarre decisions made by intermediate neurons on the way to making the network’s final decision. It’s not just complex, high-dimensional non-linear mathematics; the black box is intrinsically due to non-intuitive intermediate decisions.” There is no right, no wrong. It is how it is, and that is how I see what I now face: the person or system just doesn’t get it for whatever reason. A real AI could have seen a few more angles and, as it grows, it would see all the angles and get to the right conclusion faster and faster. A system built on machine learning or deeper learning will never get it; it will get more and more wrong because it is adjusted by a person, and if that person misses the point the system will miss the point too, like a place like Gamespot, all flawed because a conclusion came based on flawed information. This is why we have no AI: because the elements of shallow circuits and quantum computing are still in their infancy. But salespeople do not care, the term AI sells and they need sales. This is why things go wrong, no one will muzzle the salespeople.
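To show what ‘non-intuitive intermediate decisions’ look like in practice, here is a tiny, self-contained sketch of my own (deliberately oversimplified, with random weights standing in for a trained model): a small network classifies an input, and the hidden-layer activations that drive the answer have no obvious human meaning, which is the whole black-box complaint.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 2-layer network with fixed random weights, standing in for a trained model.
W1, b1 = rng.normal(size=(4, 8)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 2)), rng.normal(size=2)

def forward(x):
    hidden = np.tanh(x @ W1 + b1)   # the intermediate "decisions"
    output = hidden @ W2 + b2       # the final decision
    return hidden, output

hidden, output = forward(np.array([1.0, 0.5, -0.3, 2.0]))
print("hidden activations:", np.round(hidden, 2))   # 8 numbers with no human-readable meaning
print("predicted class   :", int(output.argmax()))  # yet they fully determine the answer
```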

In the end shit happens, that is the setting, but the truth of the matter is that too many people embrace AI, a technology that does not exist. They call it AI, but it is a fraction of AI and as such it is flawed, but that is a side they do not want to hear. It is a technology in development. This is what you get when the ‘fake it until you make it’ crowd is in charge. A flaw that evolves into a larger flaw until that system buckles.

But it gave me something to write about, so it is not all a loss, merely that my Twitter peeps will have to do without me for a little while. 


Filed under IT, movies, Science

Altering Image

This happens; sometimes it is within oneself that change is pushed, in other cases it is outside information or interference. In my case it is outside information. Now, let’s be clear. This is based on personal feelings; apart from the article, not a lot is set on paper. But it is also in part my experience with data, and there is a hidden flaw. There is a lot of media that I do not trust and I have always been clear about that. So you might have issues with this article.

It all started when I saw yesterday’s article called ‘‘Risks posed by AI are real’: EU moves to beat the algorithms that ruin lives’ (at https://www.theguardian.com/technology/2022/aug/07/ai-eu-moves-to-beat-the-algorithms-that-ruin-lives). There we see: “David Heinemeier Hansson, a high-profile tech entrepreneur, lashed out at Apple’s newly launched credit card, calling it “sexist” for offering his wife a credit limit 20 times lower than his own.” In this my first question becomes: ‘Based on what data?’ You see, Apple is (in part) greed driven, as such if she has a credit history and a good credit score, she would get the same credit. But the article gives us nothing of that; it goes quickly towards “artificial intelligence – now widely used to make lending decisions – was to blame. “It does not matter what the intent of individual Apple reps are, it matters what THE ALGORITHM they’ve placed their complete faith in does. And what it does is discriminate. This is fucked up.””

You see, the very first issue is that AI does not (yet) exist. We might see all the people scream AI, but there is no such thing as AI, not yet. There is machine learning, there is deeper machine learning and they are AWESOME! But the algorithm is not AI, it is a human equation, made by people, supported by predictive analytics (another program in place) and that too is made by people. Let’s be clear, this predictive analytics can be as good as it is, but it relies on the data it has access to. To give a simple example: in that same setting, in a place like Saudi Arabia, Scandinavians would be discriminated against as well, no matter what gender. The reason? The Saudi system will not have the data on Scandinavians compared to Saudis requesting the same options. It all requires data and that too is under scrutiny; especially in the era 1998-2015, too much data was missing on gender, race, religion and a few other matters. You might state that this is unfair, but remember, it comes from programs made by people addressing the needs of bosses in Fintech. So a lot will not add up, and whilst everyone screams AI, these bosses laugh, because there is no AI.

And the sentence “While Apple and its underwriters Goldman Sachs were ultimately cleared by US regulators of violating fair lending rules last year, it rekindled a wider debate around AI use across public and private industries” does not help. What legal setting was in play? What was submitted to the court? What decided on “violating fair lending rules last year”? No one has any clear answers and they are not addressed in this article either. So when we get to “Part of the problem is that most AI models can only learn from historical data they have been fed, meaning they will learn which kind of customer has previously been lent to and which customers have been marked as unreliable. “There is a danger that they will be biased in terms of what a ‘good’ borrower looks like,” Kocianski said. “Notably, gender and ethnicity are often found to play a part in the AI’s decision-making processes based on the data it has been taught on: factors that are in no way relevant to a person’s ability to repay a loan.”” we have two defining problems. In the first, there is no AI. In the second, “AI models can only learn from historical data they have been fed”, I believe that there is a much bigger problem. There is a stage of predictive analytics, and there is a setting of (deeper) machine learning, and they both need data; that part is correct, no data, no predictions. But how did I get there?
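To make the ‘it relies on the data it has access to’ point tangible, here is a small Python sketch of my own, with invented numbers that have nothing to do with Apple’s or Goldman Sachs’ actual model: a scoring rule built purely from historical approvals gives a thin-file group systematically worse limits, not because of any ‘intelligence’, but because the history it was fed is lopsided.

```python
# Hypothetical historical lending records: (group, income, approved_limit).
# Group "B" is under-represented and was historically given lower limits.
history = [
    ("A", 90, 20000), ("A", 80, 18000), ("A", 85, 19000), ("A", 70, 15000),
    ("B", 85, 4000),  ("B", 90, 5000),
]

def average_limit(group: str) -> float:
    limits = [limit for g, _, limit in history if g == group]
    return sum(limits) / len(limits)

def predicted_limit(group: str, income: float) -> float:
    # A crude "predictive analytics" rule: anchor on the group's historical average.
    return average_limit(group) * (income / 80)

# Two applicants with identical income get very different limits,
# purely because the historical data for their groups differs.
print(predicted_limit("A", 85))   # ~19,125
print(predicted_limit("B", 85))   # ~4,781
```

Nothing in that rule thinks; it merely echoes the history it was given, which is the point about blaming ‘AI’ for what is really data and programming.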

That is seen in the image above. I did not make it, I found it, and it shows a lot more clearly what is in play. In most Fintech cases it is all about the Sage (funny moment): predictive inference, explanatory inference, and decision making. A lot of it is covered in machine learning, but it goes deeper. The black elements as well as control and manipulation (blue) are connected. You see, an actual AI can combine predictive analytics and extrapolation, and do that for each category (races, gender, religion), all elements that make the setting, but data is still a part of that trajectory, at least until shallow circuits are more perfect than they are now (due to the Ypsilon particle, I believe). You see, a Dutch physicist found the Ypsilon particle (if I word this correctly); it changes our binary system into something more. These particles can be nought, one or both, and that setting is not ready; it allows the interactions of a much better process that will lead to an actual AI. When the IBM quantum systems get these two parts in order they become a true quantum behemoth, and they are on track, but it is a decade away. It does not hurt to set a larger AI setting sooner rather than too late, but at present it is founded on a lot of faulty assumptions.

And it might be me, but look around at all these people throwing AI around. What is actual AI? And perhaps it is also me; the image I showed you is optionally inaccurate and lacks certain parts, I accept that, but it drives me insane when we see more and more AI talk whilst it does not exist. I saw one decent example: “For example, to master a relatively simple computer game, which could take an average person 15 minutes to learn, AI systems need up to 924 hours. As for adaptability, if just one rule is altered, the AI system has to learn the entire game from scratch.” That time is not spent learning; it is basically staging EVERY MOVE in that game, like learning chess. We learn the rules; the so-called AI will learn all of the 10^111 to 10^123 positions (including illegal moves) in chess. A computer can remember them all, but if one move was incorrectly programmed (like the knight), the program needs to relearn all the moves from the start. When the Ypsilon particle and shallow circuits are added, the equation changes a lot. But that time is not now, not for at least a decade (speculated time). So in all this the AI gets blamed for predictive analytics and machine learning, and that is where the problem starts: the equation was never correct or fair, and the human element in all this is ‘ignored’ because we see the label AI, but the programmer is part of the problem, and that is a larger setting than we realise.
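The ‘relearn the entire game from scratch’ claim is easiest to see with a toy sketch of my own (a made-up one-dimensional board game, deliberately crude): one player memorises state-by-state responses from exhaustive play, the other keeps the rules; change a single rule and the memorised table is worthless, while the rule-based player adapts immediately.

```python
# A toy 1-D "board game": a piece on squares 0..9 may move a fixed step left or right.

BOARD = range(10)

def legal_moves(pos: int, step: int) -> set[int]:
    """Rule-based player: derives its moves from the rule itself."""
    return {p for p in (pos - step, pos + step) if p in BOARD}

# "Learning by memorisation": tabulate every position under the original rule (step = 1).
memorised = {pos: legal_moves(pos, step=1) for pos in BOARD}

# Now alter one rule: the piece suddenly moves two squares (like re-defining the knight).
new_step = 2
stale = sum(memorised[pos] != legal_moves(pos, new_step) for pos in BOARD)
print(f"{stale} of {len(memorised)} memorised entries are now wrong")  # every single one
# The rule-based player needed no retraining; the memoriser must rebuild its whole table.
```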

Merely my view on the setting.

 


Filed under Finance, IT, Media, Science

IP intoxication

Yup, this just happened to me. I will try to be as clear as possible, yet I cannot say too much. It all started as I was contemplating new RPG IP, not entirely new, it was to be added to the RPG game that I have been giving visibility to on this blog. As I was considering parts of the economy to interact with the play world, my thoughts skipped to Brendan Fraser. I was rethinking some parts of Encino Man (with Sean Astin aka Rudy), as well as The Mummy (with Rachel Weisz as Evelyn Carnahan). At some point the mind was drawing a line and even as additional IP came to mind, I ignored it as this would be Ubisoft territory. But the line took shape, and as such my mind saw an interaction that has NEVER EVER been done in RPG gaming before. It would optionally be a stage for Sony, but it seems that streamers (Amazon Luna) had a much better grasp of the option. To get this added in a game would imply that the game would require a module of machine learning and deeper learning. Now that is not so odd. A multitude of RPG games have some kind of NPC AI in play (to coin a phrase), but to add this to the character as a side setting has to my knowledge never been done before, and the added options would give it more traction towards gamers.

There are a few more sides; I discussed that in part in ‘Mummy and Daddy’ (at https://lawlordtobe.com/2021/03/19/mummy-and-daddy/), so well over a year ago (March 19th 2021). There I made mention of “it is basically, to some degree the end of the linear quest person, it is a stage never seen before and I believe that whomever makes that game to the degree we see will make that developer a nice future stage as a new larger development house, and as Micro$oft learns that they lost out again, perhaps they will take the word of gamers against that of business analyst claiming to be gamers.” Additional sides that connect, and in this not only has it never been done before, it seems that whoever adds this to their RPG will have additional sides that Bethesda (the company that Microsoft paid $7,500,000,000 for) comes up short on. It feels intoxicating. To have several options in a game that none of the others ever did or contemplated. And now I see that there is more to it all. There is in part a side that touches on IP Bundle 3 that I have, something that could bring Amazon billions (but with a small amount of risk). Yet I never considered it as a side of a game, well, to some degree.

So as the mind is connecting idea to idea, evolving IP into IP+ and a multitude of IPs, I merely wonder why the others (Google and Amazon) are not on this page already. Google seems too driven to advertise its Nest security, Amazon is doing whatever (clothing stores and trying to buy EA), but as I watch the news, and the deeper news that the news will not give us, I see an absence of true innovation in games. In a sense I wonder what is wrong with me; you see, I have never been this far ahead of any envelope before.

I tried to explain it in the past. You see, there is a side where gaming is; most games are in that ‘light’ circle and the bar is set to the edge. Now, there is an area outside the gaming area and that is the area of what is possible; this is where innovation is. The really good games (like Horizon: Forbidden West) are in part there, and they are not alone. The real AAA games are in part there; they are coding there now because it is what will be possible tomorrow. The darker circle is what future games will see as ‘current technology’, that is how games have evolved and that has not changed. I went a step beyond that, I went where tomorrow’s games currently are not, and I set out a slice of gaming heaven and decided to add this to the upcoming technology. There are two dangers. The first is that it has a danger of being delusional. The second is that not all technology can get there. The second one is simple. I see the streamers as a stepping stone to what will be possible in, for example, the PlayStation 6. A (for the lack of a better term) hybrid streamer. A fat-client client/server application in gaming. One that needs a real power player, but that is not possible UNTIL there is a nationally deployed 5G network. I believe Amazon Luna and Google Stadia need to get to that point, it is what is required in the evolution of gaming. So there are these two dangers, but is the first danger mine? I do not believe it to be, as my mind can clearly see the parts required, but that is the hidden danger of a delusional mind. In my defence, I have been involved in gaming since 1984 (connections to Mirrorsoft and Virgin Interactive Entertainment, Virgin Games at the time), so I have been around since the very beginning of games. My mind has seen a mountain of true innovators and innovations. As such I feel I am awake and on top of it. But the hidden trap is there and as such, one can never stop questioning one’s own abilities, to avoid falling into the first trap.

But for now I feel intoxicated, and not a drop of alcohol in me; innovation can be that overwhelming. This is why the previous article remains under construction. It has a lot to do with Texas, the ATF and the NRA. I wrote about that before as well, and interestingly enough the media seems to avoid that side to a much larger degree, with the one or two exceptions I mentioned in a previous article. I wonder why that is. Do you not? Well, time to sign off, snore like a sawmill and get ready for the new day which is already here.


Filed under Gaming, IT, Science

Looky looky

It is always nice to go to bed, listen to music and dream away. That is, until this flipping brain of mine gets a new idea. In this case it is not new IP, but a new setting for a group of people. You see, during lockdown I got hooked on walk videos. It was a way to see places I had never visited before, it is one way to get around and, weirdly enough, these walk videos are cool. You see more than you usually do (especially in London); most of them are actually quite good, a few need tinkering (like music not so loud), but for the most part they are a decent experience.

Then I thought: what if GoPro makes a change, offering a new stage? That got me going. You see, most walks are filmed on a stick, decent but intense for the filming party. So we can shoot the film from a shoulder mount, a chest mount, or a helmet mount. Yet what is filmed? So what happens if we have something like Google Glasses and the left (or right) eye shows what we see in the film? We get all kinds of degrees of filming. And if we want to ignore it, we merely close that eye for a moment. I am surprised that GoPro has not considered it, or perhaps they did. Consider that the filmer now has BOTH hands free and can hold something towards the camera; the filming agent can do more and move more freely. Consider that it works with a holder, but there is a need (in many cases) to have both hands available. And perhaps there is a need for both: the need to use one hand for precision and a gooseneck mount to keep both hands free. The interesting part is that there is no setting to get the image on something like Google Glasses, and that is a shame. Was I the first to think of it? It seems weird with all the city walks out there on YouTube, but there you have it. In that light, I was considering revisiting the IP I had for a next Watchdogs, one with a difference (every IP creator will tell you that part), but I reckon that is a stage we will visit again soon enough; it involves Google Glasses and another setting that I will revisit.

Just like the stage of combining deeper machine learning with a lens (or Google Glasses): a camera lens that offers direct translations, and the fun part is we can select whether that is pushed through to the film, or merely seen by us. Now consider filming in Japan with machine learning and deeper machine learning auto-translating ANY sign it sees. Languages that we do not know will no longer stop us, it will tell the filmmaker where they are, and consider linking that to one lens in Google Glasses that overlays the map? Is that out yet? I never saw it and there are all kinds of needs for that part. What you see is what you know, if you know the language. Just a thought at 01:17. I need a hobby, I really do!
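As a rough idea of how the auto-translating lens could be wired together today, here is a hedged Python sketch of the pipeline: grab a frame, OCR the Japanese text, translate it, and overlay the result. OpenCV and Tesseract (via pytesseract) are real libraries used as I believe they work; the translate_text() helper is a placeholder I made up, since the choice of translation service or on-device model is open, and on Google Glasses the overlay would go to the eyepiece rather than a saved file.

```python
import cv2               # OpenCV for camera capture and drawing the overlay
import pytesseract       # Tesseract OCR wrapper (needs the 'jpn' language pack installed)

def translate_text(text: str) -> str:
    # Placeholder: swap in whatever translation service or local model you prefer.
    return f"[EN] {text}"

cap = cv2.VideoCapture(0)                     # the camera feeding the "lens"
ok, frame = cap.read()
if ok:
    japanese = pytesseract.image_to_string(frame, lang="jpn").strip()
    if japanese:
        overlay = translate_text(japanese)
        # Draw the translation onto the frame; in the Google Glasses idea this would
        # go to the eyepiece instead of (or as well as) the recorded video.
        cv2.putText(frame, overlay, (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imwrite("translated_frame.png", frame)
cap.release()
```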


Filed under IT, Media, Science

Lying through Hypes

I was thinking about a Huawei claim that I saw (in the image); the headline ‘AI’s growing influence on the economy’ sounds nice, yet AI does not exist at present, not true AI, or perhaps better stated, real AI. At the very least two elements of AI are missing, so whatever it is, it is not AI. Is that an indication of just how bad the economy is? Well, that is up for debate, but what is more pressing is that the industry is proclaiming something to be AI and cashing in on something that is not AI at all.

Yet when we look at the media, we are almost literally bombarded to death with AI statements. So what is going on? Am I wrong?

No! 

Or at least that is my take on the matter. I believe that we are getting close to near-AI, but what the hype and what marketing proclaim to be AI is not AI. You see, if there was real AI we would not see articles like ‘This AI is a perpetual loser at Othello, and players love it’. We are handed “The free game, aptly called “The weakest AI Othello,” was released four months ago and has faced off against more than 400,000 humans, racking up a paltry 4,000 wins and staggering 1.29 million losses as of late November”. This is weird, because when we look at SAS (a data firm) we see: “Artificial intelligence (AI) makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks”, which is an actual part of an actual AI. So the earlier mentioned 400,000 players racking up 1.29 million wins whilst the system merely won 4,000 times shows that it is not learning; as such it cannot be an AI. A slightly altered SAS statement would be “Most AI examples rely heavily on deep learning and natural language processing. Using these technologies, computers can be trained to accomplish specific tasks by processing large amounts of data and recognizing patterns in the data”. The SAS page (at https://www.sas.com/en_au/insights/analytics/what-is-artificial-intelligence.html) also gives us the image where they state that today AI is seen as ‘Deep Learning’, which is not the same.
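The ‘learn from experience, adjust to new inputs’ test is easy to put into code. Below is a small sketch of my own (a two-option toy with invented win rates, nothing to do with the actual Othello game): a scripted policy keeps picking the same losing option forever, while even a crude learning policy shifts towards what wins. A system that racks up 1.29 million losses without changing its play is, by this test, the scripted kind.

```python
import random

random.seed(1)
WIN_RATE = {"option_a": 0.1, "option_b": 0.9}   # hidden truth: option_b is far better

def play(choice: str) -> bool:
    return random.random() < WIN_RATE[choice]

# Scripted policy: a hard-coded preference, never updated by outcomes.
scripted_choice = "option_a"

# Learning policy: keeps win counts and picks the empirically better option.
stats = {"option_a": [0, 0], "option_b": [0, 0]}   # [wins, plays]

def learning_choice() -> str:
    rates = {k: (w + 1) / (n + 2) for k, (w, n) in stats.items()}   # smoothed win rates
    return max(rates, key=rates.get)

scripted_wins = learner_wins = 0
for _ in range(10_000):
    scripted_wins += play(scripted_choice)          # never adjusts, keeps losing
    choice = learning_choice()
    won = play(choice)
    stats[choice][0] += won
    stats[choice][1] += 1
    learner_wins += won

print("scripted wins:", scripted_wins)   # roughly 1,000 of 10,000
print("learning wins:", learner_wins)    # close to 9,000 once it settles on option_b
```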

It is a fraught and dangerous situation: the so-called AI depends on human programming and cannot really learn, merely adapt to its programming. SAS itself actually acknowledges this with the statement “Quick, watch this video to understand the relationship between AI and machine learning. You’ll see how these two technologies work, with examples”. They are optionally two sides of a coin, but not the same coin, if that makes sense. So in that view the statement of Huawei makes no sense at all; how can an option influence an economy when it does not exist? Well, we could hide behind the lack of growth because it does not exist. Yet that is also the stage that planes find themselves in, as they are not equipped with advanced fusion drives; it comes down to the same problem (one element is most likely on Jupiter and the other one is not in our solar system). When we realise that we can seek advanced fusion as much as we want, but the elements required for it are not in our grasp, just like AI, which is shy a few elements, then whatever we call AI is merely something that is not really AI. It is cheap marketing for a generation that did not look beyond the term.

The Verge (at https://www.theverge.com/2019/1/28/18197520/ai-artificial-intelligence-machine-learning-computational-science) had a nice summary. I particularly liked (slightly altered) “the Oral-B’s Genius X toothbrush that touted supposed “AI” abilities. But dig past the top line of the press release, and all this means is that it gives pretty simple feedback about whether you’re brushing your teeth for the right amount of time and in the right places. There are some clever sensors involved to work out where in your mouth the brush is, but calling it artificial intelligence is gibberish, nothing more”. We can see this as the misuse of the term AI, and we are handed thousands of terms every day that misuse AI, most of it via short messages on social media. A few lines later we see the Verge giving us “It’s better, then, to talk about “machine learning” rather than AI”, and it is followed by perhaps one of the most brilliant statements: “Machine learning systems can’t explain their thinking”. It is perhaps the clearest night-versus-day issue that any AI system would face, and all these AI systems that are supposedly dependably growing any economy aren’t, and the world (more likely the greed-driven entities) cannot grow in any direction in this. They are all hindered by what marketing states it needs to be, whilst marketing is clueless about what they face, or perhaps they are hoping that the people remain clueless about what they present.

So as the Verge ends with “In the here and now, artificial intelligence — machine learning — is still something new that often goes unexplained or under-examined”, we see the nucleus of the matter: we are not asking questions and we are all accepting what the media and its connected marketing outlets are giving us, even when we make the noticeable jump that there is no AI and it is merely machine learning and deeper learning, whilst we entertain the Verge examples “How clever is a book?” and “What expertise is encoded in a frying pan?”

We need to think things through (the current proclaimed AI systems certainly won’t). We are back in the 90s, where concept sellers are trying to fill their pockets all whilst we all know perfectly well (through applied common sense) that what they are selling is a concept, and no concept will fuel an economy. That is a truth that came and stood up when a certain Barnum had his circus and hid behind well-chosen marketing. So whenever you get some implementation of AI on LinkedIn or Facebook, you are being lied to (basically you are being marketed to), or pushed into some direction that such articles attempt to push you in.

That is merely my view on the matter and you are very welcome to form your own view on the matter as well; I merely hope that you will look at the right academic papers to show you what is real and what is a figment of someone’s imagination.

 


Filed under IT, Media, Science

The Lie of AI

The UK Home Office has just announced plans to protect paedophiles for well over a decade, and they are paying millions to make it happen. Are you offended yet? You should be. The article (at https://www.theguardian.com/technology/2019/sep/17/home-office-artificial-intelligence-ai-dark-web-child-sexual-exploitation) is giving you that, yet you do not realise that they are doing that. The first part is ‘Money will go towards testing tools including voice analysis on child abuse image database’, the second part is “Artificial intelligence could be used to help catch paedophiles operating on the dark web, the Home Office has announced”; these two are the guiding parts in this, and you did not even know it. To be able to understand this there are two parts. The first is an excellent article in the Verge (at https://www.theverge.com/2019/1/28/18197520/ai-artificial-intelligence-machine-learning-computational-science), the second part is: ‘AI does not exist!’

An important fact is that AI will become a reality at some point, in perhaps a decade, yet the two elements essential to AI have not been completed. The first is quantum computing; IBM is working on it, and they admit: “For problems above a certain size and complexity, we don’t have enough computational power on Earth to tackle them.” This is true enough and fair enough. They also give us: “it was only a few decades ago that quantum computing was a purely theoretical subject”. Two years ago (yes, only two years ago) IBM gave us a new state, a new stage in quantum computing, where we see a “necessary brick in the foundation of quantum computing. The formula stands apart because unlike Shor’s algorithm, it proves that a quantum computer can always solve certain problems in a fixed number of steps, no matter the increased input. While on a classical computer, these same problems would require an increased number of steps as the input increases”. This is the first true step towards creating AI, as what you think is AI grows, the data alone creates an increased number of steps down the line; coherency and comprehension become floating and flexible terms, whilst comprehension is not flexible, comprehension is a set stage, and without ‘Quantum Advantage with Shallow Circuits’ it basically cannot exist. In addition, this year we got the IBM Q System One, the world’s first integrated quantum computing system for commercial use. We could state this is the first truly innovative computer acceleration in decades and it has arrived in a first version, yet there is something missing, and we get to stage two later.

Now we get to the Verge.

‘The State of AI in 2019’, published in January this year, gives us the goods, and it is an amazing article to read. The first truth is “the phrase “artificial intelligence” is unquestionably, undoubtedly misused, the technology is doing more than ever — for both good and bad”. The media is all about hype, and the added stupidity given to us by politicians connects the worst of both worlds: they are clueless, they are applying that cluelessness to the worst group of people, the paedophiles, and they are paying millions to do what cannot be accomplished at present.

Consider a computer or a terminator that is super smart, like in the movies, and consider “a sci-vision of a conscious computer many times smarter than a human. Experts refer to this specific instance of AI as artificial general intelligence, and if we do ever create something like this, it’ll likely to be a long way in the future” and that is the direct situation, yet there is more.

The quote “Talk about “machine learning” rather than AI. This is a subfield of artificial intelligence, and one that encompasses pretty much all the methods having the biggest impact on the world right now (including what’s called deep learning)” is very much at the core of it all; it exists, it is valid and it is the point of what is actually happening. Yet without quantum computing we are confronted with the earlier stage ‘on a classical computer, these same problems would require an increased number of steps as the input increases’, so now all that data delays and delays and stops progress; this is the stage that is a direct issue.

Then we also need to consider “you want to create a program that can recognize cats. You could try and do this the old-fashioned way by programming in explicit rules like “cats have pointy ears” and “cats are furry.” But what would the program do when you show it a picture of a tiger? Programming in every rule needed would be time-consuming, and you’d have to define all sorts of difficult concepts along the way, like “furriness” and “pointiness.” Better to let the machine teach itself. So you give it a huge collection of cat photos, and it looks through those to find its own patterns in what it sees”. This learning stage takes time, yet down the track it becomes awfully decent at recognising what a cat is and what is not a cat. That takes time, yet the difference is that we are seeking paedophiles, so that same algorithm is used not to find a cat, but to find a very specific cat. Yet we cannot tell it the colour of its pelt (because we do not know), we cannot tell the size, shape or age of that specific cat. Now you see the direct impact of how delusional the idea from the Home Office is.

Indirectly we also get the larger flaw. Learning for computers comes in a direct version and an indirect version, and we can put both in the same book: Programming for Dummies! You see, we feed the computer facts, but as it is unable to distinguish true facts from false facts we see a larger failing: the computer might start to look in the wrong direction, pointing out the wrong cat, making the police chase and grab the wrong cat, and when that happens, the real paedophile has already hidden himself again. Deep learning can raise flags all over the place and it will do a lot of good, but in the end a system like that will be horribly expensive, and paying 100 police officers for 20 years to hunt paedophiles might cost the same and will yield better results.
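The Verge’s cat example maps onto a few lines of code. This is my own toy illustration with made-up feature values, not the Verge’s or anyone’s real system: a hand-written rule fires on anything furry with pointy ears (so a tiger passes), while a data-driven nearest-centroid classifier only knows what its training examples covered, which is exactly the ‘very specific cat’ problem.

```python
# Toy feature vectors: (ear_pointiness, furriness, body_size) on a 0..1 scale. Invented data.
cats   = [(0.9, 0.9, 0.2), (0.8, 0.8, 0.25), (0.85, 0.95, 0.15)]
tigers = [(0.8, 0.9, 0.9), (0.75, 0.85, 0.95)]

def rule_based(animal) -> str:
    ears, fur, _size = animal
    # The "old-fashioned" explicit rule: pointy ears + furry = cat.
    return "cat" if ears > 0.7 and fur > 0.7 else "not a cat"

def centroid(samples):
    return tuple(sum(values) / len(samples) for values in zip(*samples))

def nearest_centroid(animal) -> str:
    # The data-driven way: label by whichever class centre is closer.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return "cat" if dist(animal, centroid(cats)) < dist(animal, centroid(tigers)) else "not a cat"

tiger = (0.8, 0.9, 0.92)
print(rule_based(tiger))        # "cat" — the explicit rule is fooled
print(nearest_centroid(tiger))  # "not a cat" — but only because tigers were in the training data
```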

All that is contained in the quote: “Machine learning systems can’t explain their thinking, and that means your algorithm could be performing well for the wrong reasons”; more importantly, it will be performing for the wrong reasons on wrong data, making the learning process faulty and flawed to a larger degree.

The article ends with “In the here and now, artificial intelligence — machine learning — is still something new that often goes unexplained or under-examined”, which is true and, more importantly, it is not AI, a fact that we were not really informed about. There is no AI at present, not for some time to come, and it makes us wonder about the Guardian headline ‘Home Office to fund use of AI to help catch dark web paedophiles’: how much in funds, and the term ‘use of AI’ requires AI to exist, which it does not.

The second missing item.

You think that I was kidding, but I was not. Even as the quantum phase is seemingly here, its upgrade does not exist yet, and that is where true AI becomes an optional futuristic reality. This stage is called the Majorana particle; it is a particle that is both matter and antimatter (the ability to be both positive and negative), and one of the leading scientists in this field is the Dutch physicist Leo Kouwenhoven. Once his particle becomes a reality in quantum computing, we get a new stage of shallow circuits, we get a stage where fake news, real news, positives and false positives are treated in the same breath and the AI can distinguish between them. That stage is decades away. At that point the paedophile can create whatever paper trail he likes; the AI will be worse than the most ferocious bloodhound imaginable and will see the fake trails faster than a paedophile can create them. It will merely get the little pervert caught faster.

The problem is that this is decades away, so someone should really get some clarification from the Home Office on how AI will help, because there is no way that it will actually do so before the government budget of 2030. What will we do in the meantime, and what funds were spent to get nothing done? When we see: “pledged to spend more money on the child abuse image database, which since 2014 has allowed police and other law enforcement agencies to search seized computers and other devices for indecent images of children quickly, against a record of 14m images, to help identify victims”, in this we also get “used to trial aspects of AI including voice analysis and age estimation to see whether they would help track down child abusers”. So when we see ‘whether they would help’, we see a shallow case, so shallow that the article in the Verge well over half a year ago should indicate that this is all water down the drain. And the amount (according to Sajid Javid) is set at “£30m would be set aside to tackle online child sexual exploitation”. I am all for the goal and the funds. Yet when we realise that AI is not getting us anywhere and deep learning only gets us so far, and we also now consider “trial aspects of AI including voice analysis and age estimation”, we see a much larger failing. How can voice analysis help, and how is this automated? And as for the term ‘trial aspects of AI’, something that does not exist, I wonder who did the critical read on a paper allowing for £30 million to be spent on a stage that is not relevant. Getting 150 detectives for 5 years to hunt down these bastards might be cheaper and in the end a lot more results-driven.

At the end of the article we see the larger danger that is not part of AI, when we see: “A paper by the security think-tank Rusi, which focused on predictive crime mapping and individual risk assessment, found algorithms that are trained on police data may replicate – and in some cases amplify – the existing biases inherent in the dataset”. In this Rusi is right: it is about data, and the data cannot be staged or set against anything, which makes for a flaw in deep learning as well. We can teach a system what a cat is by showing it 1,000 images, yet how are the false images recognised (panther, leopard, or possum)? That stage seems simple with cats; with criminals it is another matter. Comprehension and looking past data (showing insight and wisdom) is a far stretch for AI (when it gets here), and machine learning and deeper learning are not ready to this degree at present. We are nowhere near ready, and the first commercial quantum computer was only released this year. I reckon that whenever a politician uses AI as a term, he is either stupid, uninformed, or he wants you to look somewhere else (avoiding actual real issues).

For now the hypes we see are more often than not the lie of AI, something that will come, but unlikely to be seen before the PS7 is setting new sales records, which is still many years away.

 


Filed under Finance, IT, Media, Politics, Science

Iranian puppets

Saudi Arabia has been under attack for a while, yet the latest one has been the hardest hit for now. 26 people were injured in a drone attack on Abha Airport. The fact that it is 107 km away from the border gives rise to the thought that this is not the end. Even as we see: “a late-night cruise missile attack by Houthi rebel fighters”, I wonder if they were really Houthi, or members of Hezbollah calling themselves Houthi. In addition, when we see: “the missile directed at the airport had been supplied by Iran, even claiming Iranian experts were present at the missile’s launch”, as the Saudi government stated this, I am not 100% convinced. The supply yes, the presence is another matter. There is pretty hard evidence that Iran has been supplying drone technology to Lebanon and that they have been training Hezbollah forces. I think this is the first of several operations where we see Hezbollah paying the invoice from Iran by being operationally active as a proxy for Iran. It does not make Iran innocent, it does change the picture. The claim by Washington that “Iran is directing the increasingly sophisticated Houthi attacks deep into Saudi territory” is more accurate as I see it. It changes the premise as well as the actions required. From my point of view, we merely need to be able to strike at one team; if anyone is found to be Lebanese, Saudi Arabia can change the premise by using Hezbollah goods and strike Beirut – Rafic Hariri International Airport with alternative hardware. Lebanon stops being the least volatile country in the Middle East and it would stop commerce and a few other options at the same time. I wonder how much support they get from Iran at that point. I believe in the old operational premise to victory:

Segregation, isolation, and assassination, the tactical premise in three parts that is nice and all-solving; it can be directed at a person, a location, or even an infrastructure, the premise matters. It is time to stop Hezbollah; that part is essential as it does more than merely slow down the Houthi rebels, it pushes Iran to go all in whilst being the visible transgressor, or it forces them to back off completely; that is how I personally see it.

So as we see the Pentagon rally behind diplomatic forces, I cannot help but wonder how it is possible for 15 dicks to be pussies. For the non-insiders, the group comprises the 7 joint chiefs of staff, the septet of intelligence (Army, Navy, Air Force, Marines, FBI, CIA and NSA) and of course the National Security Advisor. It is time to change the premise, it really is. It is also a must to proclaim ourselves for either the Kingdom of Saudi Arabia or Iran, and I will never proclaim myself towards Iran (a man must keep some principles).

We can all be angry and find a solution to erase them. As I see it, my version is more productive in the end. They are targeting close to the border as much as possible, which implies that their hardware has limitations. Even so, relying merely on anti-drone measures and some version of an Aveillant system is economically not too viable; it will merely make some places (like airports) more secure. When we look around we see that there are six ways to take care of drones.

  1. Guns, which require precision and manpower.
  2. Nets, same as the first, yet a net covers an area, giving a better chance of results and a chance to recover the drone decently unharmed, or retrieve enough evidence to consider a counter-offensive.
  3. Jammers, a two-pronged option: as the connection fails, most drones go back to their point of origin, giving the option of finding out who was behind it.
  4. Hacking, a drone can be used for hacking, but the other way around is also an option if the drone lacks certain security measures, optionally getting access to logs and other information.
  5. Birds of prey (eagle, falcon), a Dutch solution to use a bird of prey to hunt a drone; an eagle will be ten times more deadly than a drone, as eagles are a lot more agile and remain fast all the time.
  6. Drones, fighting drones with drones is not the most viable option; however, these drones carry paint guns, which would hinder rotor function and speed, forcing gravity and drag to become the main issues for the target drone.

The issue is not merely how to do it; the specifics of the drone become a larger issue. An eagle and most solutions will not work against the MQ-9 Reaper drone (to name but an example), yet Hezbollah and Iran rely on the Qods Mohajer (optionally the Raad 85), which, when considering the range, is the more likely suspect. What is important to know is that these devices require a certain skill level, hence there is no way that Houthi forces could have done this by themselves. It required Hezbollah/Iranian supervision. There the option of jamming and of drones with a paint gun would work; if a jammer gets shot onto the drone, it will give them a way to follow it, and paint can have the same effect whilst at the same time limiting its capabilities. If the drone is loaded with explosives and set for a one-way trip there is a lot less to do, yet the paint could still impact its ability if there is enough space left, and if the paint is loaded with metal it could light the drone up, making it a much better target. All options that have been considered in the last few years in anti-drone activities; the question is how to proceed now.

I believe that inaction will no longer get us anywhere, especially when Hezbollah is involved. That is the one speculative part. There is no way that Houthi rebel forces have the skills; I believe that Iran is too focussed on having some level of deniability, hence the Hezbollah part. It is entirely probable that Iranian forces are involved, yet that would be the pilot, and with the range, that pilot would have been really close to the Yemeni border, making Abha airport a target, yet making it unlikely that another target further inland would be available to them.

Knowing that gives more options, but also makes it harder to proceed. The six methods mentioned earlier are direct; there is one other option, but I am not discussing it here at present as it optionally involves DoD classified materials (and involves DARPA’s project on machine learning applied to the radio spectrum), and let’s not put that part out in the open. It is actually a clever program conceived by Paul Tilghman, a graduate from RIT (Rochester Institute of Technology), an excellent school that is slightly below MIT and on par with UTS (my creative stomping grounds).

It is a roadmap that needs to be followed. I am all for bombing Hezbollah sites, but unlike the earlier mentioned group of 15, I prefer my level of evidence to be a little higher; as such the Tilghman solution is called for. After that, we can address the viability of Beirut and Tripoli with 2,500 lbs hardware donations, depending on the evidence found, mind you; we can make adjustments, as some materials would have needed to be shipped to Yemen either directly or via Lebanon, and in all honesty, I am of the mind that Iran would not have done this directly. Proxy wars require a higher level of deniability to remain proxy wars; as such we need the hardware as evidence.

And even as we see: “Mohamed Abdel Salam, said the attack was in response to Saudi Arabia’s “continued aggression and blockade on Yemen”. Earlier in the week, he said attacks on Saudi airports were “the best way to break the blockade”” (at https://www.theguardian.com/world/2019/jun/12/yemen-houthi-rebel-missile-attack-injures-26-saudi-airport), we need to realise that this is growing and potentially a lot larger than before. Even as we acknowledge that the forces have withdrawn from the harbour, we have no insight into where they went; there is no indication that they have stopped fighting, merely that they are at the moment inactive, a status that can change at any given moment.

Add to that the threat (or is that the promise) by Tehran, who decided to “threaten to resume enriching uranium towards weapons-grade level on 7 July if US sanctions are not lifted or its European allies fail to offer new terms for the nuclear deal”. Here my answer is ‘What deal?’; there is enough indication that enriching never stopped, but was merely scaled down to 95% of the previous effort, as such there is no need to offer more incentives that will only be broken. As such, my strategy is to seek out Houthi (and optionally Hezbollah) forces to take away the proxy options of Iran; they must either commit 100% or back down. At present their fear is having to commit fully to this and change the stage of proxy war to actual war, and as such my strategy makes sense. They have no hope of winning, as too many governments would be willing to align with Saudi Arabia (that might make them surprised and happy as well), and a united front against Iran is what Iran fears, because Turkey would have no option but to cut ties out of fear of what happens when we are done with the other Iranian puppets.

It is perhaps the only side where I disagree with James Jeffrey (US special representative for Syria engagement). I do not believe that it is a “hegemonic quest to dominate the Middle East”; I believe that Iran knows that this is no longer an option, yet bolstering the foundations of a growing alliance is the best that they hope for, and here Iran merely facilitates in the urge to state to Syria (the government and its current president), in the voice of ‘You owe us, we helped you’. It is slightly pathetic and merely the voice of a used car salesman at present. As more of the proxy war becomes open and proven, Iran is backed into a corner; it makes Iran more dangerous, but it also forces them to act, not through a proxy, and I am decently certain that Iran has too much to lose at present, especially as Russia denied them the S-400 solution.

Even as Gevorg Mirzayan (an expert on the Middle East and a leading analyst at the agency Foreign Policy) is getting headlines with ‘‘Dumping’ Iran Would Be Mistaken, Since Russia Doesn’t Know What The US Will Offer In Return’, we see that the stage is a valid question, but there we also see the answer. The direct (and somewhat less diplomatic) answer is “Never set a stage where a rabid dog can call the shots”; the more diplomatic answer (by Russian Deputy Prime Minister Yury Borisov) was “Russia has not received any requests from Iran for delivering its S-400 air defense systems”, which is nice, and it puts Iran in a space where they need to admit to needing this kind of hardware. Yet on the other side, Russia realises that Iran is driven to inflame the Middle East, and down the track, if its alliance is too strong, it takes Saudi Arabia out of consideration for several lucrative Russian ventures, and they know it.

All these elements are in play and in place, so segregating and isolating Hezbollah limits the options of Iran, making it an essential step to pursue. Interestingly, these steps were firmly visible as early as August last year, and that group of 15 did little to bolster solutions towards truly isolating Iran; that Miaow division was optionally seeking milk and cream and finding not much of either.

So the time is now moving from essential to critical to take the options away from Iran. We let Lebanon decide whether they want to get caught in a room, painted into a corner with no directions remaining; at that point they become a really easy target.

That was not hard was it?

Happy Friday and remember, it will be Monday morning in 60 hours, so make the most of it.

 


Filed under IT, Military, Politics, Science

Ghost in the Deus Ex Machina

James Bridle is treating the readers of the Guardian to a spotlight event. It is a fantastic article that you must read (at https://www.theguardian.com/books/2018/jun/15/rise-of-the-machines-has-technology-evolved-beyond-our-control-?). Even as it starts with “Technology is starting to behave in intelligent and unpredictable ways that even its creators don’t understand. As machines increasingly shape global events, how can we regain control?”, I am not certain that it is correct; it is merely a very valid point of view. This setting is being pushed even further by places like Microsoft Azure, Google Cloud and AWS; we are moving into new territories and the experts required have not been schooled yet. It is (as I personally see it) the consequence of next-generation programming on the framework of cloud systems that have thousands of additional unused, or un-monitored, parameters (read: some of them mere properties), and the scope of these systems is growing. Each developer is making their own app-box and they are working together, yet in many cases hundreds of properties are ignored, giving us weird results. There is actually (from the description James Bridle gives) an early 90s example, which is not the same, but it illustrates the event.

A program had window settings and sometimes there would be a ghost window. There was no explanation, and no one could figure out why it happened, because it did not always happen, but it could be replicated. In the end, the programmer was lazy and had created a global variable that had the identical name as a visibility property, and due to a glitch that setting got copied. When the system did a reset on the window, all but very specific properties were reset. You see, those elements were not ‘true’; they should be either ‘true’ or ‘false’ and that was not the case, those elements had the initial value of ‘null’, yet the reset would not allow for that, so once given a reset they would not return to the ‘null’ setting but remain holding the value they last had. It was fixed at some point, but the logic remains: a value could not return to ‘null’ unless specifically programmed. Over time these systems got to be more intelligent and that issue has not returned; such is the evolution of systems. Now it becomes a larger issue, now we have systems that are better, larger and in some cases isolated. Yet, is that always the issue? What happens when an error level surpasses two systems? Is that even possible?

Now, most people will state that I do not know what I am talking about. Yet they forget that any system is merely as stupid as the maker allows it to be, so in 2010 Sha Li and Xiaoming Li from the Dept. of Electrical and Computer Engineering at the University of Delaware gave us ‘Soft error propagation in floating-point programs’, which gives us exactly that. You see, the abstract gives us “Recent studies have tried to address soft errors with error detection and correction techniques such as error correcting codes and redundant execution. However, these techniques come at a cost of additional storage or lower performance. In this paper, we present a different approach to address soft errors. We start from building a quantitative understanding of the error propagation in software and propose a systematic evaluation of the impact of bit flip caused by soft errors on floating-point operations”. We can translate this into ‘an option to deal with shoddy programming’, which is not entirely wrong, but the essential truth is that hardware makers, OS designers and application makers all have their own error system, each of them has a much larger system than any requires, and some overlap and some do not. The issue is optionally and speculatively seen in ‘these techniques come at a cost of additional storage or lower performance’. Now consider the greed-driven makers that do not want to sacrifice storage and will not hand over performance, not one way, not the other way, but a system that tolerates either way. Yet this still has a level one setting (Cisco joke) where hardware is ruler, so the settings will remain, and it merely takes one third-party developer to use some specific uncontrolled error hit with automated, assumption-driven slicing and dicing to avoid storage as well as performance, yet once given to the hardware, it will not forget. So now we have some speculative ‘ghost in the machine’, a mere collection of error settings and properties waiting to be interacted with. Don’t think that this is not in existence; the paper sheds light on this in part with: “some soft errors can be tolerated if the error in results is smaller than the intrinsic inaccuracy of floating-point representations or within a predefined range. We focus on analysing error propagation for floating-point arithmetic operations. Our approach is motivated by interval analysis. We model the rounding effect of floating-point numbers, which enable us to simulate and predict the error propagation for single floating-point arithmetic operations for specific soft errors. In other words, we model and simulate the relation between the bit flip rate, which is determined by soft errors in hardware, and the error of floating-point arithmetic operations”.

That I can illustrate with my earliest errors in programming (decades ago). With Borland C++ I got my first taste of programming and I was in assumption mode making my first calculation, which gave in the end: 8/4=2.0000000000000003. At that point (1991) I had no clue about floating-point issues. I did not realise that this was merely the machine and me not giving it the right setting. So now we have all learned that part, but we forget that all these new systems have their own quirks and hidden settings that we basically do not comprehend, as the systems are too new.

This now all interacts with an article in the Verge from January (at https://www.theverge.com/2018/1/17/16901126/google-cloud-ai-services-automl); the title ‘Google’s new cloud service lets you train your own AI tools, no coding knowledge required’ is a bit of a giveaway. Even when we see: “Currently, only a handful of businesses in the world have access to the talent and budgets needed to fully appreciate the advancements of ML and AI. There’s a very limited number of people that can create advanced machine learning models”, it is not merely that part; behind it were the makers of the systems and the apps that allow you to interface, and that is where we see the hidden parts that will not be uncovered for perhaps years or decades. That is not a flaw from Google, or an error in their thinking. The mere realisation of ‘a long road ahead if we want to bring AI to everyone’, in light of the better programmers, the clever people and the mere wildcards who turn 180 degrees in a one-way street, cannot be predicted, and there always will be one that does so, because they figured out a shortcut. Consider a sidestep.
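Before that sidestep, and since the 8/4 = 2.0000000000000003 anecdote and the paper’s bit flips are both about floating-point representation, here is a minimal Python sketch of my own showing both effects: the representation error you get for free, and a simulated single-bit soft error propagating through ordinary arithmetic. The bit position chosen is arbitrary and purely for illustration.

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit in the IEEE 754 representation of a double (a simulated soft error)."""
    (as_int,) = struct.unpack("<Q", struct.pack("<d", x))
    return struct.unpack("<d", struct.pack("<Q", as_int ^ (1 << bit)))[0]

print(0.1 + 0.2)            # 0.30000000000000004 — plain representation error
x = 8.0 / 4.0               # exact, because powers of two divide cleanly
print(x)                    # 2.0

# Simulate a soft error: flip a low mantissa bit and watch the error travel.
corrupted = flip_bit(x, 3)
print(corrupted)            # slightly off from 2.0
print(corrupted * 1e6)      # the absolute error grows with subsequent arithmetic
```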

A small sidestep

When we consider risk-based thinking and development, we tend to think in opposition, because it is not the issue of risk, or the given of opportunity. We start with the flaw that we see differently on what constitutes risk. Even as the makers all think the same, the users do not always behave that way. For this I need to go back to the late 80s, when I discovered that certain books in the Port of Rotterdam were cooked. No one had figured it out, but I recognised one part through my Merchant Naval education. It was the one rule no one looked at in those days; programmers simply were not given that element. In a port there is one rule that computers could not comprehend in those days: the concept of ‘Idle Time’ cannot ever be a linear one. Once I saw that, I knew where to look. So when we get back to risk management issues, we see ‘An opportunity is a possible action that can be taken, we need to decide. So this opportunity requires we decide on taking action and that risk is something that actions enable to become an actual event to occur but is ultimately outside of your direct control’. Now consider that risk changes with the tide at a seaport, but we forget that in opposition to a king tide, there is also at times a neap tide. A ‘supermoon’ is an event that makes the low tide even lower. So now we see the risk of getting beached for up to six hours, because the element was forgotten. The fact that it can happen once every 18 months makes the risk low and it does not impact everyone everywhere, but that setting shows that the dangers (read: risks) of events are intensified when a clever person takes a shortcut. So when NASA gives us “The farthest point in this ellipse is called the apogee. Its closest point is the perigee. During every 27-day orbit around Earth, the Moon reaches both its apogee and perigee. Full moons can occur at any point along the Moon’s elliptical path, but when a full moon occurs at or near the perigee, it looks slightly larger and brighter than a typical full moon. That’s what the term “supermoon” refers to“, we see that the programmer needed a space monkey (or tide tables), and when we consider the shortcut, he merely needed them once every 18 months; over the life cycle of a program that means he faced that risk 2-3 times during the lifespan of the application. So tell me, how many programmers would have taken the shortcut? Now this is the setting we see in optional machine learning: that part accepted and the pragmatic ‘Let’s keep it simple for now’, which we all could have accepted in this. But the issue comes when we combine error flags with shortcuts.
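To make that shortcut tangible, a small hypothetical sketch; the berth, the depths, the tide values and the clearance threshold are all invented for illustration. The ‘shortcut’ version only knows the normal low tide, the fuller version also knows the roughly-18-monthly perigean drop, and the clearance decision changes accordingly.

# Hypothetical illustration only: all numbers are invented.
NORMAL_LOW_TIDE_M = 0.9        # assumed low water height for this invented berth
PERIGEAN_EXTRA_DROP_M = 0.4    # assumed additional drop during a perigean ("supermoon") tide

def clearance_shortcut(draught_m, berth_depth_m):
    # The pragmatic shortcut: only the normal low tide is ever considered.
    return berth_depth_m + NORMAL_LOW_TIDE_M - draught_m > 0.5

def clearance_full(draught_m, berth_depth_m, perigean):
    # The fuller check: the rare low tide, roughly once every 18 months, is included.
    low = NORMAL_LOW_TIDE_M - (PERIGEAN_EXTRA_DROP_M if perigean else 0.0)
    return berth_depth_m + low - draught_m > 0.5

print(clearance_shortcut(9.2, 9.0))             # True  - the shortcut says "fine"
print(clearance_full(9.2, 9.0, perigean=True))  # False - the vessel sits on the mud for hours

The shortcut is invisible for months at a time, which is precisely why it gets taken.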

So we get to the Guardian with two parts. The first: “Something deeply weird is occurring within these massively accelerated, opaque markets. On 6 May 2010, the Dow Jones opened lower than the previous day, falling slowly over the next few hours in response to the debt crisis in Greece. But at 2.42pm, the index started to fall rapidly. In less than five minutes, more than 600 points were wiped off the market. At its lowest point, the index was nearly 1,000 points below the previous day’s average“, the second being “In the chaos of those 25 minutes, 2bn shares, worth $56bn, changed hands. Even more worryingly, many orders were executed at what the Securities Exchange Commission called “irrational prices”: as low as a penny, or as high as $100,000. The event became known as the “flash crash”, and it is still being investigated and argued over years later“. In eight years the algorithms and the systems have advanced and the original settings no longer exist. Yet the entire setting of error flagging and the use of elements and properties is still on the board; even as they evolved and the systems became stronger, new systems interacted with much faster and stronger hardware, changing the calculating events. So when we see “While traders might have played a longer game, the machines, faced with uncertainty, got out as quickly as possible“, these were uncaught elements in a system that was truly clever (read: had more data to work with), and as we are introduced to “Among the various HFT programs, many had hard-coded sell points: prices at which they were programmed to sell their stocks immediately. As prices started to fall, groups of programs were triggered to sell at the same time. As each waypoint was passed, the subsequent price fall triggered another set of algorithms to automatically sell their stocks, producing a feedback effect“, we get the mere realisation that the machine wins every time in a man-versus-machine setting, but only in the calculations. The initial part I mentioned regarding really low tides was ignored, so where the person realises that at some point the tide goes back up, no matter what, the machine never learned that part, because the ‘supermoon cycle’ was avoided due to pragmatism, and we see that in the Guardian article with: ‘Flash crashes are now a recognised feature of augmented markets, but are still poorly understood‘. That reason remains speculative, but what if it is not the software? What if there is merely one set of definitions missing because the human factor auto-corrects for that through insight and common sense? I can relate to that by setting the ‘insight’ that a supermoon happens perhaps once every 18 months and the common sense that it returns to normal within a day. Now, are we missing out on the opportunity of using a neap tide as an opportunity? It is merely an opportunity if another person fails to act on such a neap tide. Yet in finance it is not merely a neap tide, it is an optional artificial wave that can change the waves when one system triggers another, and in nanoseconds we have no way of predicting it, merely, over time, the option to recognise it at best (speculatively speaking).
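A toy simulation of that feedback effect (the prices, thresholds and market impact are invented; this is nothing like real HFT code): each program has a hard-coded sell point, every triggered sale pushes the price down, and that fall trips the next program in line.

# Toy simulation of cascading hard-coded sell points - invented numbers only.
price = 100.0
price_impact_per_sale = 1.5            # assumed market impact of each forced sale
sell_points = sorted([99.0, 97.5, 96.0, 94.0, 92.5, 90.0], reverse=True)
triggered = set()

price -= 1.0                            # the initial, perfectly ordinary dip
while True:
    fired = [p for p in sell_points if p not in triggered and price <= p]
    if not fired:
        break
    for p in fired:                     # every triggered program dumps stock...
        triggered.add(p)
        price -= price_impact_per_sale  # ...and its sale pushes the price lower still
    print(f"{len(triggered)} programs triggered, price now {price:.2f}")

One ordinary dip, a chain of automatic sales, and the price ends far below anything the fundamentals would justify, which is the flash crash in miniature.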

We see a variation of this in the Go-game part of the article. When we see “AlphaGo played a move that stunned Sedol, placing one of its stones on the far side of the board. “That’s a very strange move,” said one commentator“, it opened us up to something else. So when we see “AlphaGo’s engineers developed its software by feeding a neural network millions of moves by expert Go players, and then getting it to play itself millions of times more, developing strategies that outstripped those of human players. But its own representation of those strategies is illegible: we can see the moves it made, but not how it decided to make them“, that is where I personally see the flaw. You see, it did not decide, it merely played every variation possible, including the ones a person would never consider; because it played millions of games, which at two games a day represents roughly 1,370 years of play, the computer ‘learned’ that the human never countered ‘a weird move’ before. Some moves can be corrected for, but that one offers opportunity whilst at the same time exposing its opponent to additional risks. Now it is merely a simple calculation and the human loses. And as every human player lacks the ability to play for a millennium, the hardware wins, always, after that. The computer never learned desire, or human time constraints; as long as it has energy it never stops.
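For scale, that ‘1,370 years’ is simple enough to check:

games = 1_000_000        # "millions of games" taken at its lower bound
games_per_day = 2        # a generous pace for a human player
print(games / (games_per_day * 365))   # roughly 1,370 years of play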

The article is amazing and showed me a few things I only partially knew, and one I never knew. It is an eye opener in many ways, because we are at the dawn of what is advanced machine learning, and as soon as quantum computing is an actual reality we will get systems with the setting that we see in the Upsilon meson (Y). Leon Lederman discovered it in 1977, so now we have a particle that is not merely off or on; it can be null, off, on or both. An essential setting for something that will be close to true AI, a new way for computers to truly surpass their makers and an optional tool to unlock the universe, or perhaps merely a clever way to integrate hardware and software on the same layer?

What I got from the article is the realisation that the entire IT industry is moving faster and faster and most people have no chance to stay up to date with it. Even when we look at publications from two years ago, those systems have already been surpassed by players like Google, reducing storage to a mere cent per gigabyte, and that is not all; media and entertainment are offered great leaps too. When we consider the partnership between Google and Teradici we see another path. When we see “By moving graphics workloads away from traditional workstations, many companies are beginning to realize that the cloud provides the security and flexibility that they’re looking for“, we might not see the scope of all this. So the article (at https://connect.teradici.com/blog/evolution-in-the-media-entertainment-industry-is-underway) gives us “Cloud Access Software allows Media and Entertainment companies to securely visualize and interact with media workloads from anywhere“, which might be the ‘big load’ but it actually is not. This approach gives light to something not seen before. When we consider makers of software like Q Research Software and Tableau (business intelligence and analytics) we see an optional shift; under these conditions there is now a setting where a clever analyst with merely a netbook and a decent connection can set up the framework for producing dashboards and result presentations, allowing that analyst to produce the results and presentations for the bulk of all Fortune 500 companies in a mere day, making 62% of that workforce obsolete. In addition we see: “As demonstrated at the event, the benefits of moving to the cloud for Media & Entertainment companies are endless (enhanced security, superior remote user experience, etc.). And with today’s ever-changing landscape, it’s imperative to keep up. Google and Teradici are offering solutions that will not only help companies keep up with the evolution, but to excel and reap the benefits that cloud computing has to offer“. I take it one step further: as the presentation to stakeholders and shareholders is about telling ‘a story’, the ability to do so and adjust the story on the go allows for a lot more. The question is no longer the setting up of such systems, nor is it reduced to correctly vetting the data used; the moment that falls away we will get a machine-driven presentation of settings the machine need no longer comprehend, and as long as the story is accepted and swallowed, we will not question the data. A mere presented grey scale with the extremes filtered out. In the end we all signed up for this and the status quo of big business remains stable and unchanging no matter what the economy does in the short run.

Cognitive thinking from the AI through the use of data, merely because we can no longer catch up, and in that we lose the reasoning and comprehension of data at the high levels we should have.

I wonder as a technocrat how many victims we will create in this way.

 

Leave a comment

Filed under Finance, IT, Media, Science

Data illusions

Yesterday was an interesting day for a few reasons; one of the primary reasons was an opinion piece in the Guardian by Jay Watts (@Shrink_at_Large). Like many articles, I considered it to be in opposition, yet when I reread it, this piece had all kinds of hidden gems and I had to ponder a few items for an hour or so. I love that! Any piece, article or opinion that makes me rethink my position is a piece well worth reading. So this piece called ‘Supermarkets spy on them now‘ (at https://www.theguardian.com/commentisfree/2018/may/31/benefits-claimants-fear-supermarkets-spy-poor-disabled) has several sides that require us to think and rethink issues. As we see a quote like “some are happy to brush this off as no big deal” we identify with too many parts; to me and to many it is just that, no big deal, but behind the issues are secondary issues that are ignored by the masses (en masse, as we might giggle), yet the truth is far from nice.

So what do we see in the first as primary and what is behind it as secondary? In the first we see the premise “if a patient with a diagnosis of paranoid schizophrenia told you that they were being watched by the Department for Work and Pensions (DWP), most mental health practitioners would presume this to be a sign of illness. This is not the case today.” It is not whether this is true or not; it is not a case of watching, being a watcher or even watching the watcher. It is what happens behind it all. So, recollect that dead-dropped donkey called Cambridge Analytica, which was all based on interacting and engaging on fear. Consider what IBM and Google are able to do now through machine learning. This we see in an addition to a book from O’Reilly called ‘The Evolution of Analytics‘ by Patrick Hall, Wen Phan, and Katie Whitson. Here we see the direct impact of programs like SAS (Statistical Analysis System) in the application of machine learning; we see this on page 3 of Machine Learning in the Analytic Landscape (not a page 3 of the Sun, by the way). Here we see for the government “Pattern recognition in images and videos enhance security and threat detection while the examination of transactions can spot healthcare fraud“, and you might think it is no big deal. Yet you are forgetting that it is more than the so-called implied ‘healthcare fraud‘. It is the abused setting of fraud in general and the eagerly awaited setting for ‘miscommunication’ whilst the people en masse are now set in a wrongly categorised world, a world where assumption takes control and scores of people are pushed into the defence of their actions, an optional change towards ‘guilty until proven innocent’ whilst those making the assumptions are clueless on many occasions, yet now believe that they know exactly what they are doing. We have seen these kinds of bungles impact thousands of people in the UK and Australia. It seems that Canada has a better system, where every letter with the content ‘I am sorry to inform you, but it seems that your system made an error‘ tends to overthrow such assumptions (yay for Canada today). So when we are confronted with: “The level of scrutiny all benefits claimants feel under is so brutal that it is no surprise that supermarket giant Sainsbury’s has a policy to share CCTV “where we are asked to do so by a public or regulatory authority such as the police or the Department for Work and Pensions”“, it is not merely the policy of Sainsbury’s, it is what places like the Department for Work and Pensions are going to do with machine learning and their version of classifications, whilst the foundation of true fraud is often not clear to them. So you want to set up a system without clarity and hope that the machine will constitute learning through machine learning? It can never work; that evidence is seen in how the initial classification of any person in a fluid setting alters under the best of conditions. Such systems are not able to deal with the chaotic life of any person not in a clear lifestyle cycle, with people on pensions (trying to merely get by) as well as those who are physically or mentally unhealthy. These are merely three categories where all kinds of cycles of chaos tend to interfere with daily life. Those people are now shown to be optionally targeted with not just a flawed system, but with a system where the transient workforce using those methods is unclear on what needs to be done, as the need changes with every political administration. 
A system under such levels of basic change is too dangerous to be linked to any kind of machine learning. I believe that Jay Watts is not misinforming us; I feel that even the writer here has not yet touched on many unspoken dangers. There is no fault here by the one who gave us the opinion piece; I personally believe that the quote “they become imprisoned in their homes or in a mental state wherein they feel they are constantly being accused of being fraudulent or worthless” is incomplete, yet the setting I refer to is mentioned at the very end. You see, I believe that such systems will push suicide rates to an all-time high. I do not agree with “be too kind a phrase to describe what the Tories have done and are doing to claimants. It is worse than that: it is the post-apocalyptic bleakness of poverty combined with the persecution and terror of constantly feeling watched and accused“. I believe it to be wrong because this is a flaw on both sides of the political aisle. Their state of inaction for decades forced the issue out and, as the NHS is out of money and is not getting any, the current administration is trying to find cash in any way that it can, because the coffers are empty, which now gets us to a BBC article from last year.

At http://www.bbc.com/news/election-2017-39980793, we saw “A survey in 2013 by Ipsos Mori suggested people believed that £24 out of every £100 spent on benefits was fraudulently claimed. What do you think – too high, too low?
Want to know the real answer? It is £1.10 for every £100
“. That is the dangerous political setting as we should see it: the assumption and belief that 24% is lost to fraud when it is more realistic that about 1% is the actual figure. Let’s not be coy about it, because out of £172.3bn a 1% amount still remains a serious amount of cash, yet when you set it against the UK population the amount becomes a mere £25 or so per person; it merely takes one prescription to get to that amount, one missed on the government side and one wrongly entered on the patient’s side, and we are there. Yet in all that, how many prescriptions did you, the reader, require in the last year alone? When we get to that nitty-gritty level we are confronted with the task where machine learning will not offer anything but additional resources to double-check every claimant and offence. Now, we should all agree that machine learning and analyses will help in many ways, yet when it comes to ‘Claimants often feel unable to go out, attempt voluntary work or enjoy time with family for fear this will be used against them‘ we are confronted with a new level of data, and when we merely look at the fear of voluntary work or being with family we need to consider what we have become. So in all this we see a rightful investment into a system that in the long run will help automate all kinds of things and help us to see where governments failed their social systems, yet we also see a system that costs hundreds of millions to look into an optional 1% loss, which at 10% of the losses might make perfect sense. These systems are flawed from the very moment they are implemented because the setting is not rational, not realistic, and in the end will bring more costs than any have considered from day one. So the setting of finding ways to justify the 2015 ‘The Tories’ £12bn of welfare cuts could come back to haunt them‘ will not merely fail, it will add £1 billion in costs of hardware, software and resources, whilst not getting the £12 billion in workable cutbacks; where exactly was the logic in that?
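The arithmetic behind those two figures is worth making explicit; the population figure is an assumption on my side and moves the per-head amount by a pound or two either way.

# Back-of-the-envelope check - the population figure is an assumption.
benefits_spend = 172.3e9     # £172.3bn, the total used above
population = 66e6            # UK population, roughly

perceived_fraud = 0.24 * benefits_spend      # what the survey respondents believed
actual_fraud = 0.011 * benefits_spend        # £1.10 in every £100

print(f"perceived: £{perceived_fraud/1e9:.1f}bn   actual: £{actual_fraud/1e9:.2f}bn")
print(f"actual fraud per head of population: £{actual_fraud/population:.0f}")

The gap between roughly £41bn of perceived fraud and under £2bn of actual fraud is exactly the political danger described above.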

So when we are looking at the George Orwell edition of ‘Twenty Eighteen‘, we all laugh and think it is no great deal, but the danger is actually twofold. The first part I used and taught to students, and it gets us the loss of choice.

The setting is that a supermarket needs to satisfy the needs of its customers, and with the survey they have they will keep items in a category (lollies for example) that are rated ‘fantastic value for money‘ and ‘great value for money‘, or the top 25th percentile of the products, whichever is the largest. So in the setting with 5,000 responses, the issue was that the 25th percentile now also included ‘decent value for money‘. We get a setting where an additional 35 articles were kept in stock for the lollies category. This was the setting where I showed the value of what is known as user missing values. There were 423 people who had no opinion on lollies, who for whatever reason never bought those articles. This led to removing them from consideration, a choice merely based on actual responses; now the same situation showed that for the 4,577 remaining people the top 25th percentile only had ‘fantastic value for money‘ and ‘great value for money‘, and within that setting 35 articles were removed from that supermarket. Here we see the danger! What about those people who really loved one of those 35 articles, yet were not interviewed? The average supermarket does not have 5,000 visitors; it has, depending on the location, up to a thousand a day. More important, what happens when we add a few elements and it is no longer about supermarkets but government institutions, and it is not about lollies but fraud classification? When we are set in a category of ‘Most likely to commit Fraud‘ and ‘Very likely to commit Fraud‘, whilst those people with a job and bankers are not included in the equation, we get a diminished setting of fraud from the very beginning.
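A minimal sketch of that lesson with invented ratings; the mechanics are the point, not the numbers. Code the ‘no opinion’ answers as a value and the cut-off, and therefore the stocking decision, shifts; declare them user missing values and the ranking is based on actual responses only.

# Invented data; the lesson is that a 'no opinion' code treated as data changes the decision.
import statistics

NO_OPINION = 0   # respondents who never buy the product - should be a user missing value

ratings = {
    "article_A": [5, 5, 4, 5, NO_OPINION, NO_OPINION],
    "article_B": [4, 4, 5, 4, 4, 4],
    "article_C": [3, 3, 4, 3, 3, NO_OPINION],
    "article_D": [5, 4, NO_OPINION, NO_OPINION, NO_OPINION, 5],
}

def mean_score(values, drop_missing):
    usable = [v for v in values if v != NO_OPINION] if drop_missing else values
    return statistics.mean(usable)

for drop in (False, True):
    scores = {a: mean_score(v, drop) for a, v in ratings.items()}
    cutoff = sorted(scores.values(), reverse=True)[0]   # keep the top quartile (1 of 4 here)
    kept = [a for a, s in scores.items() if s >= cutoff]
    print("drop 'no opinion':", drop, "-> kept:", kept)

Run it and the article that survives the cut changes; apply the same mechanics to ‘likelihood of fraud’ instead of lollies and the cut-off quietly decides who ever gets looked at in the first place.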

Hold Stop!

What did I just say? Well, there is method to my madness. Two sources: the first, called Slashdot.org (no idea who they were), gave us a reference to a 2009 book called ‘Insidious: How Trusted Employees Steal Millions and Why It’s So Hard for Banks to Stop Them‘ by B. C. Krishna and Shirley Inscoe (ISBN-13: 978-0982527207). Here we see “The financial crisis appears to be exacerbating fraud by bank employees: a new survey found that 72 percent of financial institutions say that in the last 12 months they have experienced a case of data theft by one of their workers“. Now, it is important to realise that I have no idea how reliable these numbers are, yet the book was published, so there will be a political player using this at some stage. This already tumbles the academic reliability of fraud numbers in general. Now, for an actually reliable source we see KPMG, who gave us last year “KPMG survey reveals surge in fraud in Australia“, with “For the period April 2016 to September 2016, the total value of frauds rose by 16 percent to a total of $442m, from $381m in the previous six month period”. We see numbers, yet they are based on a survey, and how reliable were those giving their view? How much was assumption, unrecognised numbers and ‘forecasted increases‘ that were not met? That issue was clearly brought to light by the Sydney Morning Herald in 2011 (at https://www.smh.com.au/technology/piracy-are-we-being-conned-20110322-1c4cs.html), where we see: “the Australian Content Industry Group (ACIG), released new statistics to The Age, which claimed piracy was costing Australian content industries $900 million a year and 8000 jobs“, yet the issue is not merely the numbers given, the larger issue is “the report, which is just 12 pages long, is fundamentally flawed. It takes a model provided by an earlier European piracy study (which itself has been thoroughly debunked) and attempts to shoe-horn in extrapolated Australian figures that are at best highly questionable and at worst just made up“, so the claim “4.7 million Australian internet users engaged in illegal downloading and this was set to increase to 8 million by 2016. By that time, the claimed losses to piracy would jump to $5.2 billion a year and 40,000 jobs” was a joke to say the least. There we see the issue of fraud in another light, based on a different setting where the same model was used, and that is whilst I am more and more convinced that the European model was likely flawed as well (a small reference to the Dutch Buma/Stemra setting of 2007-2010). So not only are the models wrong, the entire exercise gives us something that was never going to be reliable in any way, shape or form (personal speculation), and in this we now have the entire machine learning side, the political setting of fraud as well as the speculated numbers involved, and what is ‘disregarded’ as fraud. We will end up with a scenario where we get 70% false positives (a pure rough assumption on my side) in a collective where checking those numbers will never be realistic, and the moment the parameters are ‘leaked’ the actual fraudulent people will change their behaviour, making detection of fraud less and less likely.

How will this fix anything other than the revenue need of those selling machine learning? So when we look back at the chapter on Modern Applications of Machine Learning we see “Deploying machine learning models in real-time opens up opportunities to tackle safety issues, security threats, and financial risk immediately. Making these decisions usually involves embedding trained machine learning models into a streaming engine“, which is actually true, yet when we also consider “review some of the key organizational, data, infrastructure, modelling, and operational and production challenges that organizations must address to successfully incorporate machine learning into their analytic strategy“, the element of data and data quality is overlooked on several levels, making the entire setting, especially in light of the piece by Jay Watts, a very dangerous one. So the full title, which I intentionally did not use in the beginning, ‘No wonder people on benefits live in fear. Supermarkets spy on them now‘, rests wholly on the known and almost guaranteed premise that the players in this field are slightly too happy to generalise and trivialise the issue of data quality. The moment that comes to light and the implementers are held accountable for data quality is when all those now hyping machine learning will change their tune instantly and give us all kinds of ‘party line‘ issues that they are not responsible for; issues that I personally expect they did not really highlight when they were all about selling that system.
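To make the data quality point concrete, a minimal hypothetical sketch (the ‘model’, the field names, the records and the thresholds are all invented): a record is only scored when basic quality checks pass, otherwise it goes to a human, because a confident-looking flag produced from a broken record is exactly the danger described above.

# Hypothetical sketch: the 'model' is a stand-in, the records are invented.
def fraud_score(record):
    # Stand-in for a trained model embedded in a streaming engine.
    return 0.9 if record.get("claims_last_year", 0) > 10 else 0.1

def quality_gate(record):
    # Return a reason to refuse scoring, or None if the record is usable.
    if record.get("claims_last_year") is None:
        return "missing claim count"
    if record.get("address_changes", 0) < 0:
        return "impossible negative value"
    if record.get("last_updated_days", 0) > 365:
        return "stale record"
    return None

stream = [
    {"id": 1, "claims_last_year": 12, "address_changes": 1, "last_updated_days": 30},
    {"id": 2, "claims_last_year": None, "address_changes": 0, "last_updated_days": 10},
    {"id": 3, "claims_last_year": 2, "address_changes": 3, "last_updated_days": 900},
]

for record in stream:
    reason = quality_gate(record)
    if reason:
        print(f"record {record['id']}: sent to a human ({reason})")
    else:
        print(f"record {record['id']}: model score {fraud_score(record):.1f}")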

Until data cleaning and data vetting get a much higher position on the analysis ladder, we are confronted with aggregated, weighted and ‘expected likelihood‘ generalisations, and those who are ‘flagged’ via such systems will live in constant fear that their shallow way of life stops because a too highly paid analyst stuffed up a weighting factor, condemning a few thousand people to be tagged for all kinds of reasons, not merely because they could optionally be part of the 1% that the government is trying to clamp down on, or was that 24%? We can believe the BBC, but can we believe their sources?

And if there is even a partial doubt on the BBC data, how unreliable are the aggregated government numbers?

Did I oversimplify the issue a little?

 

 

Leave a comment

Filed under Finance, IT, Media, Politics, Science

The sting of history

There was an interesting article on the BBC (at http://www.bbc.com/news/business-43656378) a few days ago. I missed it initially, as I tend not to dig too deep into the BBC past the breaking news points at times. Yet there it was, staring at me, and I thought it was rather funny. You see, ‘Google should not be in business of war, say employees‘, which is fair enough. Apart from the issue of them not being too great at waging war and roughing it out, it makes perfect sense to stay away from war. Yet is that possible? You see, the quote is funny when you see ‘No military projects‘, whilst we are all aware that the internet itself is an invention of DARPA, who came up with it as a solution that addressed “A network of such [computers], connected to one another by wide-band communication lines [which provided] the functions of present-day libraries together with anticipated advances in information storage and retrieval and [other] symbiotic functions“, which led to ARPANET and became the Internet. So now that the cat is out of the bag, we can continue. The objection they give is fair enough. When you are an engineer who is destined to create a world where everyone communicates with one another, the last thing you want to see is “Project Maven involves using artificial intelligence to improve the precision of military drone strikes“. I am not sure if Google could achieve it, but the goal is clear and so is the objection. The BBC article shows merely one side; when we go to the source itself (at https://www.defense.gov/News/Article/Article/1254719/project-maven-to-deploy-computer-algorithms-to-war-zone-by-years-end/), I saw the words from Marine Corps Colonel Drew Cukor: “Cukor described an algorithm as about 75 lines of Python code “placed inside a larger software-hardware container.” He said the immediate focus is 38 classes of objects that represent the kinds of things the department needs to detect, especially in the fight against the Islamic State of Iraq and Syria“. You see, I think he has been talking to the wrong people. Perhaps you remember the SETI screensaver project. “In May 1999 the University of California launched SETI@Home. SETI stands for the ‘Search for Extraterrestrial Intelligence’. Originally thought that it could at best recruit only a thousand or so participants, more than a million people actually signed up on the day and in the process overwhelmed the meager desktop PC that was set aside for this project“. I remember it because I was one of them. It is in that trend that “SETI@Home was built around the idea that people with personal computers who often leave them to do something else and then just let the screensaver run are actually wasting good computing resources. This was a good thing, as these ‘idle’ moments can actually be used to process the large amount of data that SETI collects from the galaxy” (source: Manila Times), and they were right. The design was brilliant and simple and it worked better than even the SETI people thought it would, but here we now see the application, where any Android (OK, iOS too) device created after 2016 is pretty much a supercomputer at rest. You see, Drew Cukor is trying to look where he needs to look; it is a ‘flaw’ he has, as well as the bulk of all the military. When you look for a target that is 1 in 10,000, you need to hit the 0.01% mark. This is his choice and that is what he needs to do; I am merely stating that by figuring out where NOT to look, I am upping his chances. 
If I can set the premise of eliminating 7,500 false potentials in a few seconds, his job went from a 0.01% chance to 0.04%, making his work four times easier and optionally faster. Perhaps the change could eliminate 8,500 or even 9,000 flags. Now we are talking the chances and the time frame we need. You see, it is the memo of Bob Work that does remain an issue. I disagree with “As numerous studies have made clear, the department of defense must integrate artificial intelligence and machine learning more effectively across operations to maintain advantages over increasingly capable adversaries and competitors,“. The clear distinction is that those people tend not to rely on a smartphone; they rely on a simple Nokia 2100 burner phone and as such there will be a complete absence of data, or will there be? As I see it, to tackle that, you need to be able to engage in what might be regarded as a ‘Snippet War‘, a war based on (a lot of) ‘small pieces of data or brief extracts‘. It is in one part cell tower connection patterns, in one part tracking IMEI (International Mobile Equipment Identity) codes, and in part SIM switching. It is a jumble of patterns and normally getting anything done would be insane. Now what happens when we connect 100 supercomputers to one cell tower and mine all available tags? What happens when we can disseminate these packages and let all those supercomputers do the job? Merely 100 smartphones or even 1,000 smartphones per cell tower. At that point the war changes, because now we have an optional setting where on-the-spot data is offered in real time. Some might call it ‘the wet dream’ of Marine Corps Col. Drew Cukor, and he was not ever aware that he was allowed to adult dream to that degree on the job, was he?
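The arithmetic behind that claim, using the numbers above:

candidates = 10_000            # the 1-in-10,000 target population used above
for eliminated in (0, 7_500, 8_500, 9_000):
    remaining = candidates - eliminated
    print(f"eliminate {eliminated:>5}: 1 in {remaining:>5}  ({100 / remaining:.2f}% per look)")

Ruling candidates out is cheap, and every candidate ruled out improves the odds on everything that remains.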

Even as these people are throwing AI around like it is Steven Spielberg’s chance to make a Kubrick movie, in the end it is a new scale and new level of machine learning, a combination of clustered flags and decentralised processing on a level that is not linked to any synchronicity. Part of this solution is not in the future, it was in the past. For that we need to read the original papers by Paul Baran from the early 60s. I think we pushed forward too fast (a likely involuntary reaction). His concept of packet switching was not taken far enough, because the issues of then are nowhere near the issues of now. Consider raw data as a package where the transmission itself sets the foundation of the data path that is to be created. So basically the package becomes the entry point of raw data and the mobile phone processes this data on the fly, resetting the data parameters on the fly, giving instant rise to what is unlikely to be a threat (and optionally what is), a setting where 90% could be parsed by the time it gets to the mining point. The interesting side is that the container for processing this could be set in the memory of most mobile phones without installing anything, as it is merely processing parsed data; not a nice solution, but essentially an optional one to get a few hundred thousand mobiles to do in mere minutes what takes most data centres a day. They merely receive the first-level processed data, and now it gets a lot more interesting: as thousands are near a cell tower, that data keeps on being processed on the fly by supercomputers at rest all over the place.
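A very rough sketch of that idea; every field name, tag and threshold here is invented. Each handset parses the raw ‘snippet’ packets it can see (tower connection patterns, IMEI codes, SIM swaps) and forwards only the handful that look unusual, so the mining point receives first-level processed data instead of everything.

# Hypothetical sketch only - field names, tags and thresholds are invented.
def parse_snippet(packet):
    # First-level processing done on the handset itself.
    return {
        "imei": packet["imei"],
        "tower_hops": len(set(packet["towers_seen"])),
        "sim_swaps": packet["sim_swaps"],
    }

def worth_forwarding(parsed):
    # Keep only the unusual patterns; the rest never leaves the phone.
    return parsed["tower_hops"] >= 4 or parsed["sim_swaps"] >= 2

raw_packets = [
    {"imei": "A", "towers_seen": ["t1", "t1", "t2"], "sim_swaps": 0},
    {"imei": "B", "towers_seen": ["t1", "t2", "t3", "t4", "t5"], "sim_swaps": 0},
    {"imei": "C", "towers_seen": ["t2"], "sim_swaps": 3},
]

forwarded = [p for p in map(parse_snippet, raw_packets) if worth_forwarding(p)]
print(forwarded)   # only the unusual patterns reach the mining point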

So, we are not, as Drew states, ‘in an AI arms race‘; we are merely in a race to be clever about how we process data and we need to be clever about how to get these things done a lot faster. The fact that the foundation of that solution is 50 years old and still counts as an optional way of getting things done merely shows the brilliance of those who came before us. You see, that is where the military forgot the lessons of limitations. As we shun the old games like the CBM 64 and applaud the now of Ubisoft, we forget that Ubisoft shows itself to be graphically brilliant, having the resources of 4K cameras, whilst those on the CBM-64 (like Sid Meier) were actually brilliant for getting a workable interface that looked decent with mere resources that were 0.000076293% of the resources that Ubisoft gets to work with now. I am not here to attack Ubisoft, they are working with the resources available; I am addressing the utter brilliance of people like Sid Meier, David Braben, Richard Garriott, Peter Molyneux and a few others for being able to do what they did with the little they had. It is that simplicity, and the added SETI@Home, where we see the solutions that separate the children from the clever machine learning programmers. It is not about “an algorithm of about 75 lines of Python code “placed inside a larger software-hardware container.”“, it is about where to set the slicer and how to do it whilst no one is able to say it is happening, whilst remaining reliable in what it reports. It is not about a room or a shopping mall with 150 servers walking around the place, it is about the desktop no one notices that is able to keep tabs on those servers merely to keep the shops safe; that is the part that matters. The need for brilliance is shown again in limitations when we realise why SETI@Home was designed. It opposes in directness the quote “The colonel described the technology available commercially, the state-of-the-art in computer vision, as “frankly … stunning,” thanks to work in the area by researchers and engineers at Stanford University, the University of California-Berkeley, Carnegie Mellon University and Massachusetts Institute of Technology, and a $36 billion investment last year across commercial industry“; the people at SETI had to get clever fast because they did not get access to $36 billion. How many of these players would have remained around if it was 0.36 billion, or even 0.036 billion? Not too many, I reckon; the entire ‘technology available commercially‘ would instantly fall away the moment the optional payoff remains null, void and unavailable. A $36 billion investment implies that those ‘philanthropists’ are expecting a $360 billion payout at some point; call me a sceptic, but that is how I expect those people to roll.

The final ‘mistake’ that Marine Corps Col. Drew Cukor makes is one that he cannot be blamed for. He forgot that computers should again be taught to rough it out, just like the old computers did. The mistake I am referring to is not an actual mistake, it is more accurately the view, the missed perception, he unintentionally has. The quote I am referring to is “Before deploying algorithms to combat zones, Cukor said, “you’ve got to have your data ready and you’ve got to prepare and you need the computational infrastructure for training.”“. He is not stating anything incorrect or illogical, he is merely wrong. You see, we need to remember the old days, the days of the mainframe. I got treated in the early 80s to an ‘event’. You see, a ‘box’ was delivered. It was the size of an A3 flatbed scanner, it had the weight of a small office safe (rather weighty that fucker was) and it looked like a print board on a metal box with a starter engine on top. It was pricey like a middle-class car. It was a 100Mb Winchester Drive. Yes, 100Mb, the mere size of 4 iPhone X photographs. In those days data was super expensive, so the users and designers had to be really clever about data. This time is needed again, not because we have no storage, we have loads of it. We have to get clever again because there is too much data and we have to filter through too much of it; we need to get better fast, because 5G is less than two years away and we will drown by that time in all that raw untested data. We need to reset our views and comprehend how the old ways of data worked and prevent exabytes of junk per hour slowing us down; we need to redefine how tags can be used to set different markers, different levels of records. The old way of hierarchical data was too cumbersome, but it was fast. The same is seen with B-tree data (a really antiquated database approach), discarding 50% of the remaining data in every iteration. In this, machine learning could be the key, and the next person that comes up with that data solution would surpass the wealth of Mark Zuckerberg pretty much overnight. Data systems need to stop being ‘static’; a data system needs to be fluidic and dynamic, evolving as data is added. Not because it is cleverer, but because the amount of data we need to get through is growing near exponentially per hour. It is there that we see that Google has a very good reason to be involved, not because of the song ‘Here come the drones‘, but because this level of data evolution is pushed upon nearly all, and being in the thick of things is how one remains the top dog. Google is very much about being top dog in that race, as it is servicing the ‘needs’ of billions and as such its own data centres will require loads of evolution; the old ways are getting closer and closer to becoming obsolete and Google needs to be ahead before that happens, and of course when that happens IBM will give a clear memo that they have been on top of it for years whilst trying to figure out how best to present the delays they are currently facing.
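That ‘50% in every iteration’ remark is easiest to see in a plain binary search over ordered keys, which is the same halving idea a B-tree exploits:

import bisect

def lookups_needed(n):
    # Halving the remaining data each step means roughly log2(n) comparisons.
    steps, remaining = 0, n
    while remaining > 1:
        remaining //= 2
        steps += 1
    return steps

print(lookups_needed(1_000_000))   # about 20 steps for a million ordered records

records = list(range(0, 1_000_000, 7))    # an ordered (B-tree-like) key set
print(bisect.bisect_left(records, 49))    # found without touching most of the data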
 

Leave a comment

Filed under IT, Media, Military, Science