That was the first thought I had whilst looking at the Guardian (at https://www.theguardian.com/technology/2018/mar/01/eu-facebook-google-youtube-twitter-extremist-content), where Andrus Ansip was staring back at me. So the EU is giving Facebook and Google three months to tackle extremist content. In what world is that a workable idea? You see, there are dozens of ways to hide and misclassify video and images. To give you an idea of what Mr Ansip is missing, let me give you a few details.
YouTube
300 hours of video are uploaded every minute.
5 billion videos are watched per day.
YouTube gets over 30 million visits a day.
Facebook
500+ terabytes of data are added each day.
300 million photos are uploaded per day.
2.5 billion pieces of content are added each day.
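To put those headline figures in perspective, here is a minimal back-of-the-envelope sketch that converts the daily numbers quoted above into per-second rates; the inputs are the article's own figures, nothing else.

```python
# Convert the daily figures quoted above into per-second rates.
# All inputs are the numbers from the stats list; 86,400 seconds per day.

SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

daily_figures = {
    "YouTube hours of video uploaded": 300 * 60 * 24,  # 300 h/min -> 432,000 h/day
    "YouTube videos watched": 5_000_000_000,
    "Facebook photos uploaded": 300_000_000,
    "Facebook content pieces added": 2_500_000_000,
}

for name, per_day in daily_figures.items():
    per_second = per_day / SECONDS_PER_DAY
    print(f"{name}: {per_second:,.1f} per second")
```

In other words, YouTube alone ingests about five hours of new video every single second; that is the haystack any vetting system has to keep up with.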
This is merely the activity of two companies. We have not even looked at Snapchat, Twitter, Google+, Qzone, Instagram, LinkedIn, Netlog and several others; the ones I mentioned have over 100,000,000 registered users each, and there are plenty more of that size. The largest issue is not mere size; it is that in Common Law the tort of defamation and the defence of innocent dissemination become players in all this. In Australia it is covered by section 32 of the Defamation Act 2005, and the UK, the US and pretty much every Common Law nation has its own version of it. So the EU is merely creating an incentive for the social media hubs to move out of the EU and into the UK, which is good for the UK. The European courts cannot simply blanket-approve this, because at its core it is an attack on freedom of speech and freedom of expression. I agree that this is just insane, but that is how they set it up for their liberal, non-accountable friends, and now that it works against them they want to push the responsibility onto others? Seems a bit weird, does it not? So when we see "Digital commissioner Andrus Ansip said: "While several platforms have been removing more illegal content than ever before … we still need to react faster against terrorist propaganda and other illegal content which is a serious threat to our citizens' security, safety and fundamental rights."", my question becomes whether the man has any clue what he is doing. Whilst the EC hides behind its own propaganda with "European governments have said that extremist content on the web has influenced lone-wolf attackers who have killed people in several European cities after being radicalised", it pretty much ignores the reality of it all.
When we look at new tech (at https://www.theverge.com/2017/4/18/15330042/tumblr-cabana-video-chat-app-announced-launches-ios), a solution like Cabana allows for video and instructions whilst the screen does not show an image of the watchers, but a piece of cardboard with Arabic words such as "مجنون" (madman), "الجن" (the jinn), "عسل" (honey), "نهر" (river), "جمل" (camel) and "تاجر" (merchant). How long until the threshold of 'extreme video' is triggered? How long until the system figures out that the meeting ended 3 weeks ago and that the video was encrypted?
It seems to me that Andrus Ansip is on a fool's errand. An engineering graduate who went into politics, he is now in a place where he is aware, but not clued in to the extent he needs to be (OK, that was a cruel comparison by me). In addition, I seriously doubt that he has much of a clue about the level of data parsing such systems require. It is not merely about parsing the data: systems like that will raise false flags, and even at a 0.01% false-flag rate, against Facebook's 500+ terabytes that means sifting through roughly 50 GB of wrongly flagged data EVERY DAY. And that is not taking into account animated GIFs instead of video, JPGs instead of video, or plain text; languages and the interpretation of text as extreme mean there will be language barriers as well. So in all this, even with AI and machine learning, you would still need to get the links, and it becomes even more complex when Facebook or YouTube start receiving 4chan video URLs. So when I see "and other internet companies three months to show that they are removing extremist content more rapidly", I see the first piece of clear evidence that the European Commission has lost control; they have no way of getting some of this done and no option to proceed, so they have gone into blame mode with the ultimatum 'Do this or else'. They are now going through the issues the UK faced in the 60's with pirate radio. I remember listening to Radio Caroline in the evening, and there were so many more stations. In that regard, The Boat That Rocked is a movie Andrus Ansip should watch. He is the Sir Alistair Dormandy of all this, a strict government minister who endeavours to shut down pirate radio stations; a role nicely played by Kenneth Branagh, I might add. The movie shows just how useless the current exercise is.
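To see why even a tiny false-flag rate is crushing at this scale, here is a minimal sketch. The 0.01% rate and the daily volumes are the figures quoted earlier; the three-minute review time per flagged item is purely my own illustrative assumption, not a known figure.

```python
# How many false flags does a 0.01% error rate produce per day?
# Daily volumes are the figures quoted earlier in the article;
# the 3-minute review time per flagged item is an assumption.

FALSE_FLAG_RATE = 0.0001  # 0.01%
REVIEW_MINUTES_PER_ITEM = 3  # hypothetical, for illustration only

daily_items = {
    "Facebook content pieces": 2_500_000_000,
    "Facebook photos": 300_000_000,
}

for name, volume in daily_items.items():
    flagged = volume * FALSE_FLAG_RATE
    reviewer_hours = flagged * REVIEW_MINUTES_PER_ITEM / 60
    print(f"{name}: {flagged:,.0f} false flags per day, "
          f"~{reviewer_hours:,.0f} reviewer-hours per day")
```

Under those assumptions, the 2.5 billion daily content pieces alone yield 250,000 false flags a day; at three minutes each, that is 12,500 reviewer-hours every single day, for one platform.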
Now, I am all for finding solutions against extremist video, but consider that a small player like Heavy.com had an extreme video online for well over a year (I had the link in a previous article) whilst hosting no more than a few hundred videos a week, and then we see this demand. How ludicrous is the exercise we see now?
The problem is not merely the online extremist material; it is also the setting of when exactly it becomes 'extremist', as well as realising that when it is a link to a 'dedicated' chat group, the lone wolves avoid all scrutiny and nothing is found until it is much too late. Yet the politicians are hiding behind this puppet presentation, because that is what they tend to do.
So when we look at "It also urged the predominantly US-dominated technology sector to adopt a more proactive approach, with automated systems to detect and remove illegal content, something Facebook and Google have been pushing as the most effective way of dealing with the issue. However, the European Digital Rights group described the Commission's approach as putting internet giants in charge of censoring Europe, saying that only legislation would ensure democratic scrutiny and judicial review", we see dangers. That is because 'automated systems aren't', 'censoring can't' and 'democratic scrutiny won't'; three elemental issues we are confronted with for most of our teenage life and after that too. So there are already three foundational issues with a system that has to deal with more stored data than we have seen in twenty years of spam. Here we see the complication that we need to find the needle in a field full of haystacks, and we have no idea which stack to look in, whether the needle is a metal one, or how large it is. Anyone coming to you with 'a simple automated system is the solution' has no idea what a solution is, no idea how to automate it, and has never seen the scope of the data in the matter, so good luck with that approach!
So when we are confronted with "The UK government recently unveiled its own AI-powered system for tackling the spread of extremist propaganda online, which it said would be offered to smaller firms that have seen an increase in terrorist use as they seek to avoid action by the biggest US firms", I see another matter. You see, the issues and options I gave earlier already circumvent, to a large degree, "The technology could stop the majority of Isis videos from reaching the internet by analysing the audio and images of a video file during the uploading process, and rejecting extremist content", as stated (at https://www.theguardian.com/uk-news/2018/feb/13/home-office-unveils-ai-program-to-tackle-isis-online-propaganda), at least until that upload solution is pushed to 100% of all firms, so good luck with that. In equal measure we see "The AI technology has been trained by analysing more than 1,000 Isis videos, automatically detecting 94% of propaganda with a 99.99% success rate", and here I wonder: if ISIS changes its format and the way it presents the information (another reference to the Heavy.com video), will the solution still work, or will the makers need to upgrade their video solution?
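Upload-time matching of known content is typically done with perceptual hashes of frames and audio. The internals of the UK system are not public, so this is only a generic illustration of the idea, using a pure-Python average hash on a tiny synthetic grayscale "frame": re-encoding noise barely moves the hash, but re-editing the content moves many bits at once, which is exactly why altered material can slip past a known-content matcher.

```python
# Minimal average-hash (aHash) sketch: each pixel becomes one bit,
# set when the pixel is brighter than the frame's mean. Close hashes
# mean "probably the same content"; a re-edited frame lands far away.

def average_hash(frame):
    """frame: 2D list of grayscale values (0-255). Returns a bit string."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

known = [[10, 10, 200, 200],
         [10, 10, 200, 200],
         [10, 10, 200, 200],
         [10, 10, 200, 200]]

# Re-encoded copy: slight compression noise, same content.
reencoded = [[12, 9, 198, 203],
             [11, 8, 201, 199],
             [9, 12, 197, 202],
             [10, 11, 204, 196]]

# Edited copy: the content itself has been rearranged.
edited = [[200, 200, 10, 10],
          [200, 200, 10, 10],
          [10, 10, 200, 200],
          [10, 10, 200, 200]]

h = average_hash(known)
print("re-encoded distance:", hamming(h, average_hash(reencoded)))  # stays small
print("edited distance:", hamming(h, average_hash(edited)))         # jumps up
```

The design trade-off is visible even in this toy: make the match threshold loose enough to catch every edit and the false flags explode; keep it tight and a reformatted video sails straight through, which is the exact weakness raised above.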
They are meaningless whilst chasing our tails in this, and even as I agree that a solution is required, the internet is an open system where everyone is watching the front door; when one person enters the building through the window, the solution stops working. So what happens when someone makes a new codec that encodes two movies in one stream? Remember the old 'gimmicky' multi-angle DVDs? Was that option provided for? How about video in video (a picture-in-picture variant)? The problem is that with new programming frameworks it becomes easier to set the stage for multi-tier productions: not merely encoding, but a two-stage decoder where only the receiver can see the message. So the setting of "extremist content on the web has influenced lone-wolf attackers who have killed people in several European cities after being radicalised" is unlikely to be stopped; moreover, there is every chance that they never became a blip on the radar. In that same setting, when we see "If the platform were to process 1m randomly selected videos, only 50 would require additional human review", consider the daily statistics: 300 hours of video are uploaded every minute, which comes to roughly 26 million minutes of video to parse per day. If every movie were 2 minutes, that is some 13 million videos every day, meaning around 650 movies require human vetting every day, from merely one provider. Now that seems like an optional solution, yet what if the signal changes? What if the vetting is a much larger problem? Do not forget that it is not merely extremist videos that get flagged, but copyrighted material too. When we consider that the average video length is 4 minutes and 20 seconds, whilst the range runs from 42 seconds to 9:15, how will the numbers shift? This is a daily issue, the numbers are rising, as are the providers, and let us not forget that this is ONE supplier only.
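That vetting arithmetic can be laid out explicitly. The inputs are the quoted figures (300 hours uploaded per minute, 50 human reviews per million videos), run for two assumed average video lengths: a flat 2 minutes and the 4:20 average.

```python
# Daily human-vetting load for one provider, from the quoted figures:
# 300 hours of video uploaded per minute, 50 per million videos
# flagged for additional human review.

HOURS_PER_MINUTE = 300
MINUTES_PER_DAY = 24 * 60
REVIEW_RATE = 50 / 1_000_000  # 50 per million

daily_minutes_uploaded = HOURS_PER_MINUTE * 60 * MINUTES_PER_DAY  # ~25.9m minutes

for avg_length_min in (2.0, 4 + 20 / 60):  # 2:00 flat, and the 4:20 average
    videos_per_day = daily_minutes_uploaded / avg_length_min
    to_review = videos_per_day * REVIEW_RATE
    print(f"avg {avg_length_min:.2f} min: {videos_per_day:,.0f} videos/day, "
          f"{to_review:,.0f} need human review")
```

Note how sensitive the headline numbers are to the assumed average length: shifting from 2:00 to 4:20 more than halves both the video count and the review load, which is why "only 50 per million" says very little on its own.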
That is the data we are confronted with, and there are a whole lot of issues not covered at all; the two articles read like the political engines are playing possum with reality. And all this is before considering that a hostile player could make internet servers available to extremists, that the dark web is not patrolled at all (read: almost impossible to do so), and that lazy IT people who did not properly configure their servers may let an extremist sympathiser set up a secondary, non-indexed domain to upload files. All of these are routes where the so-called anti-ISIS AI has been circumvented, and that is merely the tip of the iceberg.
So I have an issue with the messaging and the issues presented by those who think they have a solution, and by those who will callously blame the disseminators in all this, whilst the connected players know that this was never a realistic exercise in any part; merely the need and desire to monitor it all. The articles given show that they are clueless (to some extent), which is news we never wanted ISIS to have in the first place. In that regard, consider the year-old news that ISIS uses Twitter to recruit, merely through messaging and monitoring; there we see another part where these systems have failed, because a question like that could be framed in many ways. It is almost the setting where the creative mind can ask more questions than any AI can comprehend, and that first realisation is important to seeing how empty the entire setting of these 'solutions' is. My personal view is that Andrus Ansip has a job that has become nothing more than a temporary castle in the sand before it is washed away by the tide. It is unlikely that this is his choice or desire, but that is how it has become, and there is supporting evidence. Take a look at the Washington Post article (at https://www.washingtonpost.com/news/the-intersect/wp/2014/09/25/absolutely-everything-you-need-to-know-to-understand-4chan-the-internets-own-bogeyman/?utm_term=.35c366cd91eb), where we see "participants can say and do virtually anything they want with only the most remote threat of accountability". More importantly, monitoring that part is not impossible, yet it would require large resources; 4chan is equally a worry to some extent, and what happens when ISIS merely downloads a chat board or 4chan-style skeleton and places it on the dark web?
There are close to no options of ever finding them at that point: two simple acts circumvent the entire circus, a part that Andrus Ansip should have (and might have) informed the EC commissioners about. So we see the waste of large amounts of money, and in the end there will be nothing to show for it. Is that what we want to happen to keep ourselves safe? So when the ISIS person needs nothing but a mobile phone and a TOR browser, how will we find them and stop the content? Well, there is a two-letter word for that: NO! It ain't happening, baby; a mere realisation that can be comprehended by most people in the smallest amount of time.
By the way, when 5G hits us in less than 18 months, with its speeds, bandwidth and upload options, as well as additional new forms of media, which optionally means new automated forms of social media, how much redesign will be required? In my personal book this reads like: "the chance that Europe will be introduced to a huge invoice for the useless application of a non-working solution, twice!" How do you feel about that part?
In my view it is not about stopping the upload; it is about getting clever about how the information reaches those who desire, want and optionally need it. We need to get a grip on that reality and see how we can get there, because the current method is not working. In that regard we can look to history: in the Netherlands in the early 70's, Aage Meinesz used a thermal lance to go through the concrete next to the vault door. So when we see the solutions from earlier, we need to remember that such a solution only works until 10 seconds after someone else realises there is a way to avoid the need for an upload, or realises that the system is assuming certain parts. You only need to look through Fatal Vision alcohol goggles once to realise that they do not merely distort a view; they could potentially be used to counter a distorted view. I wonder how those AI solutions comprehend that, and consider that with every iteration accuracy decreases, human intervention increases and less gets achieved; some older gimmicks in photography relied on such paths to entice the watchers (like the old Bettie Page books with red and green glasses). I could go on for hours, and with every other part more and more flaws are found. In all this it is equally a worry to push this onto the tech companies. It is the old premise of being prepared for what you do not know, what you cannot see and what is not there: the demand of the conundrum, one that Military Intelligence has faced for over 30 years, and here the solution needs to be presented in three months.
The request has to be adhered to in three months, which is ludicrous and unrealistic, whilst in addition the demand shows a level of discrimination, as a massive set of social media enablers is not involved at all; there are creators of technology that are not accountable at any level. For example Apple, Samsung, Microsoft and IBM (as they are not internet companies), yet some of them proclaim their Deep Blue, Azure and whatever other massive data-mining-solution-in-a-box for 'everyone', so where are they in all this? When we consider those parts, how empty is the "face legislation forcing them to do so" threat?
It becomes even more hilarious when you consider the setting in full. Andrus Ansip, the current European Commissioner for the Digital Single Market, is giving us this whilst we see (at https://ec.europa.eu/commission/priorities/digital-single-market_en) that his commission has on its page the priority 'Bringing down barriers to unlock online opportunities', which they now use to create barriers, preferably flexible ones; in the end it is the creation of opportunities for a very small group of designers, whilst 'protect children and tackle hate speech' is the smallest part of one element in a setting with 7 additional settings on a much larger scale. It seems to me that in this case Andrus Ansip is trying to extend his reach by the size of a continent. It does not add up on several sides, especially when you consider that the documents section of that commission has nothing past September 2017, which makes the entire setting of pushing the social media tech groups wishful thinking, and one that was never realistic to begin with; it is like he is merely tilting at windmills, just like Don Quixote.