There was an interesting article on the BBC (at http://www.bbc.com/news/business-43656378) a few days ago. I missed it initially as I tend not to dig too deep into the BBC past the breaking news points at times. Yet there it was, staring at me, and I thought it was rather funny. You see ‘Google should not be in business of war, say employees‘, which is fair enough. Apart from the issue of them not being too great at waging war and roughing it out, it makes perfect sense to stay away from war. Yet is that possible? You see, the quote is funny when you see ‘No military projects‘, whilst we are all aware that the internet itself is an invention of DARPA, who came up with it as a solution that addressed “A network of such [computers], connected to one another by wide-band communication lines [which provided] the functions of present-day libraries together with anticipated advances in information storage and retrieval and [other] symbiotic functions“, which led to ARPANET and became the Internet. So now that the cat is out of the bag, we can continue. The objection they give is fair enough. When you are an engineer who is destined to create a world where everyone communicates with one another, the last thing you want to see is “Project Maven involves using artificial intelligence to improve the precision of military drone strikes“. I am not sure if Google could achieve it, but the goal is clear and so is the objection. 
The BBC article shows merely one side; when we go to the source itself (at https://www.defense.gov/News/Article/Article/1254719/project-maven-to-deploy-computer-algorithms-to-war-zone-by-years-end/), in this I saw the words from Marine Corps Colonel Drew Cukor: “Cukor described an algorithm as about 75 lines of Python code “placed inside a larger software-hardware container.” He said the immediate focus is 38 classes of objects that represent the kinds of things the department needs to detect, especially in the fight against the Islamic State of Iraq and Syria“. You see, I think he has been talking to the wrong people. Perhaps you remember the SETI screensaver project. “In May 1999 the University of California launched SETI@Home. SETI stands for the ‘Search for Extraterrestrial Intelligence’. Originally thought that it could at best recruit only a thousand or so participants, more than a million people actually signed up on the day and in the process overwhelmed the meager desktop PC that was set aside for this project“, I remember it because I was one of them. It is in that trend that “SETI@Home was built around the idea that people with personal computers who often leave them to do something else and then just let the screensaver run are actually wasting good computing resources. This was a good thing, as these ‘idle’ moments can actually be used to process the large amount of data that SETI collects from the galaxy” (source: Manilla Times), they were right. The design was brilliant and simple and it worked better than even the SETI people thought it would, but here we now see the application, where any Android (OK, iOS too) device created after 2016 is pretty much a supercomputer at rest. You see, Drew Cukor is trying to look where he needs to look; it is a ‘flaw’ he shares with the bulk of the military. He is looking for a target that is 1 in 10,000, so he needs to hit the 0.01% mark. 
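The SETI@Home idea above can be put in a few lines. This is a hypothetical illustration, not SETI's actual protocol: a coordinator slices a large dataset into small, independent work units, and each idle device chews on one unit and returns only what looks interesting.

```python
# Hypothetical sketch of the SETI@Home idea: slice a big dataset into
# small work units so idle machines can each process one independently.

def make_work_units(samples, unit_size):
    """Split raw samples into independent work units."""
    return [samples[i:i + unit_size] for i in range(0, len(samples), unit_size)]

def process_unit(unit, threshold=0.9):
    """What one idle client does: flag the samples that look interesting."""
    return [s for s in unit if s > threshold]

# The 'server' side: hand out units, collect the flagged results.
samples = [0.1, 0.95, 0.3, 0.99, 0.2, 0.5, 0.97, 0.4]
units = make_work_units(samples, unit_size=4)
flagged = [hit for unit in units for hit in process_unit(unit)]
print(flagged)  # only the handful of candidates worth a closer look
```

The point of the design is that the units are independent, so it does not matter which device processes which unit, or when.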
This is his choice and that is what he needs to do; I am merely stating that by figuring out where NOT to look, I am upping his chances. If I can set the premise of eliminating 7,500 false potentials in a few seconds, his job went from a 0.01% chance to 0.04%, making his work four times easier and optionally faster. Perhaps the change could eliminate 8,500 or even 9,000 flags. Now we are talking the chances and the time frame we need. You see, it is the memo of Bob Work that does remain an issue. I disagree with “As numerous studies have made clear, the department of defense must integrate artificial intelligence and machine learning more effectively across operations to maintain advantages over increasingly capable adversaries and competitors,“. The clear distinction is that those people tend to not rely on a smartphone; they rely on a simple Nokia 2100 burner phone, and as such there will be a complete absence of data, or will there be? As I see it, to tackle that you need to be able to engage in what might be regarded as a ‘Snippet War‘, a war based on (a lot of) ‘small pieces of data or brief extracts‘. It is in one part cell tower connection patterns, in one part tracking IMEI (International Mobile Equipment Identity) codes, and in part SIM switching. It is a jumble of patterns and normally getting anything done would be insane. Now what happens when we connect 100 supercomputers to one cell tower and mine all available tags? What happens when we can disseminate these packages and let all those supercomputers do the job? Merely 100 smartphones, or even 1,000 smartphones, per cell tower. At that point the war changes, because now we have an optional setting where on-the-spot data is offered in real time. Some might call it ‘the wet dream’ of Marine Corps Col. Drew Cukor, and he was not ever aware that he was allowed to adult dream to that degree on the job, was he?
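The arithmetic behind that claim is simple enough to check: with one real target among 10,000 flags, every known false positive you rule out raises the hit rate of whatever remains.

```python
# Worked version of the numbers in the text: one real target among
# 10,000 flags, and what ruling out false positives does to the odds.

def hit_rate(targets, flags):
    """Chance that a randomly picked remaining flag is the target."""
    return targets / flags

base = hit_rate(1, 10_000)             # 0.0001, the 0.01% mark
after = hit_rate(1, 10_000 - 7_500)    # 2,500 flags left to check
print(f"{base:.4%} -> {after:.4%}, {after / base:.0f}x easier")

# eliminating 9,000 flags improves the odds further still
deeper = hit_rate(1, 10_000 - 9_000)
print(f"{deeper:.2%}")
```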
Even as these people are throwing AI around like it is Steven Spielberg’s chance to make a Kubrick movie, in the end it is a new scale and new level of machine learning, a combination of clustered flags and decentralised processing on a level that is not linked to any synchronicity. Part of this solution is not in the future, it was in the past. For that we need to read the original papers by Paul Baran in the early 60’s. I think we pushed forward too fast (a likely involuntary reaction). His concept of packet switching was not taken far enough, because the issues of then are nowhere near the issues of now. Consider raw data as a package, where the transmission itself sets the foundation of the data path that is to be created. So basically the package becomes the data entry point of raw data and the mobile phone processes this data on the fly, resetting the data parameters on the fly, giving instant rise to what is unlikely to be a threat (and optionally to what is), a setting where 90% could be parsed by the time it gets to the mining point. The interesting side is that the container for processing this could be set in the memory of most mobile phones without installing stuff, as it is merely processing parsed data. Not a nice solution, but essentially an optional one to get a few hundred thousand mobiles to do in mere minutes what takes a day by most data centres, which merely receive the first-level processed data. Now it is a lot more interesting: as thousands are near a cell tower, that data keeps on being processed on the fly by supercomputers at rest all over the place.
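A minimal sketch of that idea, assuming nothing about Baran's actual packet format (the fields below are invented for illustration): each handset parses a raw packet, discards what is clearly not a threat, and forwards only the residue, so most of the volume never reaches the mining point at all.

```python
# Hedged sketch of on-device, on-the-fly filtering: phones near a cell
# tower parse raw packets and forward only the fraction worth mining.
# The (imei, tower, score) packet layout is hypothetical.

def parse_packet(packet):
    """First-level processing a handset could do on a raw packet."""
    imei, tower, score = packet
    return {"imei": imei, "tower": tower, "score": score}

def filter_on_device(packets, cutoff=0.8):
    """Drop everything that is unlikely to be a threat before forwarding."""
    return [parse_packet(p) for p in packets if p[2] >= cutoff]

raw = [("imei-1", "tower-A", 0.10), ("imei-2", "tower-A", 0.95),
       ("imei-3", "tower-B", 0.30), ("imei-4", "tower-B", 0.85)]
forwarded = filter_on_device(raw)
print(len(forwarded), "of", len(raw), "packets reach the mining point")
```

With a 90% discard rate, as suggested above, the mining point only ever sees the tenth of the data that survives first-level processing.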
So, we are not as Drew states ‘in an AI arms race‘; we are merely in a race to be clever in how we process data, and we need to be clever in how to get these things done a lot faster. The fact that the foundation of that solution is 50 years old and still counts as an optional way of getting things done merely shows the brilliance of those who came before us. You see, that is where the military forgot the lessons of limitations. As we shun the old games like the CBM 64 and applaud the now of Ubisoft, we forget that Ubisoft shows itself to be graphically brilliant, having the resources of 4K cameras, whilst those on the CBM 64 (like Sid Meier) were actually brilliant for getting a workable interface that looked decent with mere resources that were 0.000076293% of the resources that Ubisoft gets to work with now. I am not here to attack Ubisoft, they are working with the resources available; I am addressing the utter brilliance of people like Sid Meier, David Braben, Richard Garriott, Peter Molyneux and a few others for being able to do what they did with the little they had. It is that simplicity and the added SETI@Home approach where we see the solutions that separate the children from the clever machine learning programmers. It is not about “an algorithm of about 75 lines of Python code “placed inside a larger software-hardware container.”“, it is about where to set the slicer and how to do it whilst no one is able to say it is happening, whilst remaining reliable in what it reports. It is not about a room or a shopping mall with 150 servers; it is about the desktop no one notices that is able to keep tabs on those servers merely to keep the shops safe, and that is the part that matters. The need for brilliance is shown again in limitations when we realise why SETI@Home was designed. 
It opposes in directness the quote “The colonel described the technology available commercially, the state-of-the-art in computer vision, as “frankly … stunning,” thanks to work in the area by researchers and engineers at Stanford University, the University of California-Berkeley, Carnegie Mellon University and Massachusetts Institute of Technology, and a $36 billion investment last year across commercial industry“. The people at SETI had to get clever fast because they did not get access to $36 billion. How many of these players would have remained around if it was 0.36 billion, or even 0.036 billion? Not too many, I reckon; the entire ‘technology available commercially‘ would instantly fall away the moment the optional payoff remains null, void and unavailable. A $36 billion investment implies that those ‘philanthropists’ are expecting a $360 billion payout at some point. Call me a sceptic, but that is how I expect those people to roll.
The final ‘mistake’ that Marine Corps Col. Drew Cukor makes is one that he cannot be blamed for. He forgot that computers should again be taught to rough it out, just like the old computers did. The mistake I am referring to is not an actual mistake; it is more accurately the view, the missed perception he unintentionally has. The quote I am referring to is “Before deploying algorithms to combat zones, Cukor said, “you’ve got to have your data ready and you’ve got to prepare and you need the computational infrastructure for training.”“. He is not stating anything incorrect or illogical, he is merely wrong. You see, we need to recall the old days, the days of the mainframe. I got treated in the early 80’s to an ‘event’. You see, a ‘box’ was delivered. It was the size of an A3 flatbed scanner, it had the weight of a small office safe (rather weighty that fucker was) and it looked like a print board on a metal box with a starter engine on top. It was pricey like a middle class car. It was a 100MB Winchester drive. Yes, 100MB, the mere size of 4 iPhone X photographs. In those days data was super expensive, so the users and designers had to be really clever about data. This time is needed again, not because we have no storage; we have loads of it. We have to get clever again because there is too much data and we have to filter through too much of it. We need to get better fast, because 5G is less than 2 years away and we will drown by that time in all that raw untested data. We need to reset our views and comprehend how the old ways of data worked, prevent exabytes of junk per hour slowing us down, and redefine how tags can be used to set different markers, different levels of records. The old way of hierarchical data was too cumbersome, but it was fast. The same is seen with B-tree data (a really antiquated database approach), instantly passing through 50% of the data in every iteration. 
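That ‘passing through 50% of the data in every iteration‘ remark is the classic halving of an ordered index; a plain binary search shows the effect in miniature (B-trees generalise the same idea with wider nodes).

```python
# The halving idea behind B-tree style lookup, shown as plain binary
# search: every comparison discards half of the remaining records.

def lookup(sorted_keys, target):
    """Return (index of target, number of halving steps it took)."""
    lo, hi, steps = 0, len(sorted_keys) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_keys[mid] == target:
            return mid, steps
        if sorted_keys[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

keys = list(range(0, 1_000_000, 2))   # half a million sorted records
idx, steps = lookup(keys, 123_456)
print(idx, steps)  # found in at most ~19 comparisons, not 500,000
```

Half a million records, at most about nineteen comparisons: that is the speed the old, ‘antiquated’ structures bought with cleverness rather than hardware.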
In this, machine learning could be the key, and the next person that comes up with that data solution would surpass the wealth of Mark Zuckerberg pretty much overnight. Data systems need to stop being ‘static’; they need to be fluidic and dynamic systems that evolve as data is added. Not because that is cleverer, but because the amount of data we need to get through is growing near exponentially per hour. It is there that we see that Google has a very good reason to be involved, not because of the song ‘Here come the drones‘, but because this level of data evolution is pushed upon nearly all, and getting in the thick of things is how one remains top dog. Google is very much about being top dog in that race, as it is servicing the ‘needs’ of billions and as such its own data centres will require loads of evolution; the old ways are getting closer and closer to becoming obsolete, and Google needs to be ahead before that happens. And of course, when that happens, IBM will give a clear memo that they have been on top of it for years, whilst trying to figure out how to best present the delays they are currently facing.