Tag Archives: Defamation Act 2005

A windmill concussion

That was the first thought I had whilst looking at the Guardian (at https://www.theguardian.com/technology/2018/mar/01/eu-facebook-google-youtube-twitter-extremist-content), where Andrus Ansip was staring back at me. So the EU is giving Facebook and Google three months to tackle extremist content. In what world is that a workable idea? You see, there are dozens of ways to hide and misclassify video and images. To give you an idea of what Mr Ansip is missing, let me give you a few details.

YouTube
300 hours of video is uploaded every minute.
5 billion videos watched per day.
YouTube gets over 30 million visits a day.

Facebook
500+ terabytes of data added each day.
300 million photos added per day.
2.5 billion pieces of content added each day.

These are merely the numbers of two companies. We have not even looked at Snapchat, Twitter, Google+, Qzone, Instagram, LinkedIn, Netlog and several others. The ones I mentioned have over 100,000,000 registered users each, and there are plenty more of that size. The largest issue is not mere size; it is that under Common Law, defamation and the defence of innocent dissemination become players in all this. In Australia it is covered by section 32 of the Defamation Act 2005; the UK, the US and pretty much every Common Law nation have their own version of it. So the EU is merely setting a trend for all the social media hubs to move out of the EU and into the UK, which is good for the UK. The European courts cannot just blanket-approve this, because it is at its core an attack on freedom of speech and freedom of expression. I agree that this is just insane, but that is how they had set it up for their liberal non-accountable friends, and now that it works against them, they want to push the responsibility onto others? Seems a bit weird, does it not? So when we see “Digital commissioner Andrus Ansip said: ‘While several platforms have been removing more illegal content than ever before … we still need to react faster against terrorist propaganda and other illegal content which is a serious threat to our citizens’ security, safety and fundamental rights’”, my question becomes whether the man has any clue what he is doing. Whilst the EC is hiding behind its own propaganda with “European governments have said that extremist content on the web has influenced lone-wolf attackers who have killed people in several European cities after being radicalised”, it pretty much ignores the reality of it all. Look at the new tech (at https://www.theverge.com/2017/4/18/15330042/tumblr-cabana-video-chat-app-announced-launches-ios): a solution like Cabana allows for video and instructions whilst the screen does not show an image of the watchers, but a piece of cardboard with texts like “مجنون” (crazy), “الجن” (the jinn), “عسل” (honey), “نهر” (river), “جمل” (camel) and “تاجر” (merchant). How long until the threshold of ‘extreme video’ is triggered? How long until the system figures out that the meeting ended three weeks ago and that the video was encrypted?

It seems to me that Andrus Ansip is on a fool’s errand. An engineering graduate who went into politics, he is now in a place where he is aware, but not clued in to the extent he needs to be (OK, that was a cruel observation on my part). In addition, I seriously doubt that he has the faintest clue about the level of data parsing that such systems require. It is not merely about parsing the data; systems like that will raise false flags, and even at a 0.01% false-flag rate against Facebook’s 500 terabytes a day, that means sifting through 50GB of wrongly flagged data EVERY DAY. And that is not taking into account framed GIFs instead of video or JPG, or text, languages and the interpretation of text as extreme, so there will be language barriers as well. So in all this, even with AI and machine learning, you would need to get the links. It becomes even more complex when Facebook or YouTube start receiving 4chan video URLs. So when I see “and other internet companies three months to show that they are removing extremist content more rapidly”, I see the first piece of clear evidence that the European Commission has lost control; they have no way of getting some of this done and they have no option to proceed. They have gone into blame mode with the ultimatum: ‘do this or else’. They are now going through the issues that the UK faced in the 60s with pirate radio. I remember listening to Radio Caroline in the evening, and there were so many more stations. In that regard, the movie The Boat That Rocked is one that Andrus Ansip should watch. He is Sir Alistair Dormandy, the strict government minister who endeavours to shut down pirate radio stations, a role nicely played by Kenneth Branagh I might add. The movie shows just how useless the current exercise is. Now, I am all for finding solutions against extremist video, but when you consider that a small player like Heavy.com had an extreme video online for well over a year (I had the link in a previous article), whilst hosting no more than a few hundred videos a week, and we see this demand, how ludicrous is the exercise we see now?
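
To make that false-flag number concrete, here is a minimal back-of-the-envelope sketch in Python. The 500 terabytes and the 0.01% rate are the figures discussed above; everything else is illustrative arithmetic, not a description of any real moderation pipeline.

# Back-of-the-envelope: what a 0.01% false-flag rate means at the quoted
# Facebook scale of 500+ terabytes of new data per day. Illustrative
# arithmetic only, not a model of any real moderation system.

DAILY_UPLOAD_TB = 500        # terabytes of new content per day (quoted above)
FALSE_FLAG_RATE = 0.0001     # 0.01% of content wrongly flagged (assumed rate)

flagged_gb = DAILY_UPLOAD_TB * FALSE_FLAG_RATE * 1024  # TB -> GB

print(f"Wrongly flagged data per day: {flagged_gb:.0f} GB")
# -> roughly 50 GB of falsely flagged material to sift through, every day,
#    and that is one provider at one assumed error rate.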

The problem is not merely the online extremist materials, it is also the setting of when exactly it becomes ‘extremist‘, as well as realising that when it is a link that goes to a ‘dedicated’ chat group the lone wolves avoid all scrutiny and nothing is found until it is much too late, yet the politicians are hiding behind this puppet presentation, because that is what they tend to do.

So when we look at “It also urged the predominantly US-dominated technology sector to adopt a more proactive approach, with automated systems to detect and remove illegal content, something Facebook and Google have been pushing as the most effective way of dealing with the issue. However, the European Digital Rights group described the Commission’s approach as putting internet giants in charge of censoring Europe, saying that only legislation would ensure democratic scrutiny and judicial review”, we see dangers. That is because ‘automated systems aren’t’, ‘censoring can’t’ and ‘democratic scrutiny won’t’; three basic elemental issues we are confronted with for most of our teenage life and after that too. So there are already three foundational issues with a system that has to deal with more stored data than we have seen in a history spanning 20 years of spam. Here we face the complication of finding the needle in a field full of haystacks, with no idea which stack to look in, whether the needle is a metal one, or how large it is. Anyone coming to you with ‘a simple automated system is the solution’ has no idea what a solution is, no idea how to automate it, and has never seen the scope of the data in this matter, so good luck with that approach!

So when we are confronted with “The UK government recently unveiled its own AI-powered system for tackling the spread of extremist propaganda online, which it said would be offered to smaller firms that have seen an increase in terrorist use as they seek to avoid action by the biggest US firms”, I see another matter. You see, the issues and options I gave earlier already circumvent, to a large degree, “The technology could stop the majority of Isis videos from reaching the internet by analysing the audio and images of a video file during the uploading process, and rejecting extremist content”, which is what is stated (at https://www.theguardian.com/uk-news/2018/feb/13/home-office-unveils-ai-program-to-tackle-isis-online-propaganda), at least until that upload solution is pushed to 100% of all firms, so good luck with that. In equal measure we see “The AI technology has been trained by analysing more than 1,000 Isis videos, automatically detecting 94% of propaganda with a 99.99% success rate”, and here I wonder: if ISIS changes its format and the way it delivers the information (another reference to the Heavy.com video), will the solution still work, or will the makers need to upgrade their detection solution?
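
Those quoted figures deserve a closer look. If we read ‘94% of propaganda’ as sensitivity and the ‘99.99% success rate’ as specificity, then the usefulness of the system hinges entirely on how rare real propaganda is among uploads. The prevalence below (1 propaganda video per 100,000 uploads) is purely my illustrative assumption; the point is the base-rate effect, not the exact numbers.

# Base-rate sketch for the quoted Home Office figures, read as
# sensitivity = 94% and specificity = 99.99%. The prevalence is an
# illustrative assumption, not a real statistic.

SENSITIVITY = 0.94            # share of real propaganda videos flagged
SPECIFICITY = 0.9999          # share of normal videos correctly passed
PREVALENCE = 1 / 100_000      # assumed: 1 propaganda video per 100k uploads

uploads = 1_000_000
true_pos = uploads * PREVALENCE * SENSITIVITY               # ~9 real hits
false_pos = uploads * (1 - PREVALENCE) * (1 - SPECIFICITY)  # ~100 false alarms

precision = true_pos / (true_pos + false_pos)
print(f"Flags per 1m uploads: {true_pos + false_pos:.0f}")
print(f"Share of flags that are real propaganda: {precision:.1%}")
# -> roughly 9 real hits drowned in ~100 false alarms: fewer than 1 in 10
#    flags is genuine, and a format change by ISIS erodes the 94% further.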

These solutions are meaningless whilst we chase our tails in this, and even as I agree that a solution is required, we see the internet as an open system where everyone is watching the front door; when one person enters the building through the window, the solution stops working. So what happens when someone starts making a new codec encoder that holds two movies? Remember the old ‘gimmicky’ multi-angle DVDs? Was that option provided for? How about video in video (a picture-in-picture variant)? The problem there is that with new programming frameworks it becomes easier to set the stage for multi-tier productions: not merely encoding, but a two-stage decoder where only the receiver can see the message. So the setting of “extremist content on the web has influenced lone-wolf attackers who have killed people in several European cities after being radicalised” is unlikely to be stopped; moreover, there is every chance that they never became a blip on the radar. In that same setting, when we see “If the platform were to process 1m randomly selected videos, only 50 would require additional human review”, the daily statistics tell us that 300 hours of video are uploaded every minute, which comes to 432,000 hours, roughly 26 million minutes, of video to parse every day. If every movie were 2 minutes, that is roughly 13 million videos to parse every day, which means over 600 movies require human vetting every day, from merely one provider. Now that seems like an optional solution, yet what if the signal changes? What if the vetting is a much larger problem? Do not forget that it is not merely extremist videos that get flagged, but copyrighted materials too. When we consider that the average video length is 4 minutes and 20 seconds, whilst the range runs from 42 seconds to 9:15, how will the numbers shift? This is a daily issue, the numbers are rising, as are the providers, and let’s not forget that this is ONE supplier only. That is the data we are confronted with, so there are a whole lot of issues that are not covered at all. The two articles read like the political engines are playing possum with reality. And all this is even before the consideration that a hostile player could make internet servers available to extremists, that the dark web is not patrolled at all (read: it is almost impossible to do so), or that a lazy IT person who did not properly configure their servers lets an extremist sympathiser set up a secondary non-indexed domain to upload files. All are scenarios where the so-called anti-ISIS AI has been circumvented, and that is merely the tip of the iceberg.
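
For completeness, here is the recomputation from the paragraph above as a short sketch, using only the quoted figures (300 hours uploaded per minute, 50 flagged per million videos); the average video lengths are the assumptions discussed in the text.

# Daily review load for one provider, from the quoted figures: 300 hours
# uploaded per minute and 50 videos flagged per million processed. The
# average video lengths are the assumptions discussed in the text.

HOURS_PER_MINUTE = 300
FLAGGED_PER_MILLION = 50

daily_minutes = HOURS_PER_MINUTE * 60 * 24 * 60   # ~25.9 million minutes/day

for avg_len_min in (2.0, 4 + 20 / 60):            # 2:00 flat, and the 4:20 average
    videos = daily_minutes / avg_len_min
    reviews = videos * FLAGGED_PER_MILLION / 1_000_000
    print(f"avg {avg_len_min:.2f} min -> {videos / 1e6:.1f}m videos/day, "
          f"{reviews:.0f} human reviews/day")
# -> ~13.0m videos and ~650 reviews/day at 2 minutes; ~6.0m videos and
#    ~300 reviews/day at the 4:20 average. That is ONE supplier only.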

So I have an issue with the messaging and the issues presented by those who think they have a solution, and by those who will callously blame the disseminators in all this, whilst the connected players know that this was never a realistic exercise in any part; merely the need and the desire to monitor it all. The articles given show that they are clueless (to some extent), which is news we never wanted ISIS to have in the first place. In that regard, when we see news that is a year old, where it was mentioned that ISIS uses Twitter to recruit, merely through messaging and monitoring, we see another part where these systems have failed, because a question like that could be framed in many ways. It is almost the setting where the creative mind can ask more questions than any AI can comprehend; that first realisation is important in order to see how empty the entire setting of these ‘solutions’ is. My personal view is that Andrus Ansip has a job that has become nothing more than a temporary castle in the sand before it is washed away by the tide. It is unlikely that this is his choice or desire, but that is how it has become, and there is supporting evidence. Take a look at the Washington Post article (at https://www.washingtonpost.com/news/the-intersect/wp/2014/09/25/absolutely-everything-you-need-to-know-to-understand-4chan-the-internets-own-bogeyman/?utm_term=.35c366cd91eb), where we see “participants can say and do virtually anything they want with only the most remote threat of accountability”. More importantly, monitoring that part is not impossible, yet it would require large resources; 4chan is equally a worry to some extent. And what happens when ISIS merely downloads a chat or 4chan skeleton and places it on the dark web? There are close to no options to ever find them at that point: two simple acts to circumvent the entire circus, a part that Andrus Ansip should have (and might have) informed the EC commissioners on. So we see the waste of large amounts of money, and in the end there will be nothing to show for it. Is that what we want to happen to keep ourselves safe? So when the ISIS person needs nothing but a mobile phone and a TOR browser, how will we find them and stop the content? Well, there is a two-letter word for that: NO! It ain’t happening, baby; a mere realisation that can be comprehended by most people in the smallest amount of time.

By the way, when 5G hits us in less than 18 months, with the speeds, the bandwidth and the upload options, as well as additional new forms of media, which optionally means new automated forms of social media, how much redesign will be required? In my personal book this reads like: “the chance that Europe will be introduced to a huge invoice for the useless application of a non-working solution, twice!” How do you feel about that part?

In my view it is not about stopping the upload; it is about getting clever on how the information reaches those who desire, want and optionally need the information. We need to get a grip on that reality and see how we can get there, because the current method is not working. In that regard we can look to history: in the Netherlands, Aage Meinesz used a thermal lance to go through the concrete next to the vault door, and he did that in the early 70s. So when we see the solutions we saw earlier, we need to remember that such a solution only works until 10 seconds after someone else realises that there was a way to avoid the need for an upload, or realises that the system is assuming certain parts. You only need to look through Fatal Vision alcohol goggles once to realise that they do not merely distort view; they could potentially be used to counter a distorted view. I wonder how those AI solutions comprehend that, and consider that with every iteration accuracy decreases, human intervention increases and less gets achieved; some older gimmicks in photography relied on such paths to entice the watchers (like the old Bettie Page books with red and green glasses). I could go on for hours, and with every other part more and more flaws are found. In all this it is equally a worry to push this onto the tech companies. It is the old premise of being prepared for what you do not know, what you cannot see and what is not there: the demand of a conundrum that Military Intelligence has faced for over 30 years, and the solution needs to be presented in three months.

The request has to be adhered to within three months, which is ludicrous and unrealistic, whilst in addition the demand shows a level of discrimination, as a massive part of the social media enablers is not involved; there are creators of technology that are not accountable at any level. For example Apple, Samsung, Microsoft and IBM (as they are not internet companies), yet some of them proclaim their Deep Blue, Azure and whatever other massive data-mining solution provider in a box for ‘everyone’, so where are they in all this? When we consider those parts, how empty is the “face legislation forcing them to do so” threat?

It becomes even more hilarious when you consider the setting in full. Andrus Ansip, the current European Commissioner for the Digital Single Market, is giving us this whilst we see (at https://ec.europa.eu/commission/priorities/digital-single-market_en) that the European Commission for the Digital Single Market has on its page the priority of ‘Bringing down barriers to unlock online opportunities’, which it now uses to create barriers, preferably flexible barriers, and in the end it is the creation of opportunities for a very small group of designers. Meanwhile, ‘protect children and tackle hate speech’ is the smallest part of one element in a setting with 7 additional settings on a much larger scale. It seems to me that in this case Andrus Ansip is trying to extend his reach by the size of a continent. It does not add up on several sides, especially when you consider that the document setting of that commission has nothing past September 2017, which makes the entire setting of pushing the social media tech groups a wishful-thinking one, and one that was never realistic to begin with. He is merely chasing windmills, just like Don Quixote.



Filed under IT, Media, Politics, Science

The Zuckergate Censorberg Act

Yesterday an interesting issue made the front page of the Norwegian Aftenposten (at http://www.aftenposten.no/kultur/Aftenposten-redaktor-om-snuoperasjonen–En-fornuftig-avgjorelse-av-Facebook-604237b.html), and for those who are linguistically challenged in Norwegian, there is an English version at https://www.theguardian.com/technology/2016/sep/08/facebook-mark-zuckerberg-napalm-girl-photo-vietnam-war.

It is something we have seen before. Although from a technical point of view, the editing (read: initial flag) is likely to have been done electronically, the added blame we see when we get to the quote “Egeland was subsequently suspended from Facebook. When Aftenposten reported on the suspension – using the same photograph in its article, which was then shared on the publication’s Facebook page – the newspaper received a message from Facebook asking it to “either remove or pixelize” the photograph” shows that this is an entirely different matter. This is now a censoring engine that is out of control. The specification ‘either remove or pixelize’ does not cut it, especially when it concerns a historical photo that was given a Pulitzer.

I am actually considering that there is more in play. You see, the Atlantic (at http://www.theatlantic.com/technology/archive/2016/05/facebook-isnt-fair/482610/) said it in May when it published “Facebook Doesn’t Have to Be Fair. The company has no legal obligation to be balanced—and lawmakers know it”, which is the title and subtitle, and as such the story is told; politicians like John Thune experienced how a social network can drown out whatever it wants (within reason). So when you see something trending on Facebook, you must comprehend that it is not merely an algorithm; contracted people guide its creation and, as quoted in the Atlantic, “routinely suppressed conservative news”. Yet this goes further than just censorship and news. As the editor of Aftenposten raises (and others with him), Mark Zuckerberg has now become the most powerful editor in the world. He now has nothing less than a sworn duty to uphold the freedom of speech to a certain degree, especially when relying on algorithms that are unlikely to cut the mustard on their current track. It also opposes the part the Atlantic gave us with the subtitle “The company has no legal obligation to be balanced—and lawmakers know it”, showing Sheryl Sandberg in a ‘who gives a fuck’ pose. You see, at present Facebook has over 1.7 billion active users, and what is interesting is that the acts he has been found guilty of negatively impact well over 50% of that active user base. Norway might be small, but he is learning that it packs a punch, and when we add India to the mix, the percentage of people alienated by the censoring acts of Facebook goes up by a lot. So even as there is a use for blanket rules, their application is proving more and more offensive to too many users, and as such this level of censorship could hurt the bottom dollar of every social media site, which is the number of users. So as Mark Zuckerberg is trying to gain appeal in Asia, he needs to realise that catering to one more nation could have drastic consequences for those he thinks he has. Now, we understand that there needs to be some level of censorship, yet the correct application of it seems to go the wrong way. Of course this could still all go south, and we would have to get used to logging in to 顔のブック or 脸书 (a ‘book of faces’ in Japanese and Chinese); even चेहरे की किताब (the Hindi equivalent) is not out of the question. So is that what Zuckerberg needs? I know the US is scared shitless in many ways of that happening, so perhaps overseeing a massive change in the world of censoring is now an important issue. Espen Egil Hansen said nearly all of it when he stated that “a troubling inability to ‘distinguish between child pornography and famous war photographs’, as well as an unwillingness to ‘allow space for good judgement’” is at the heart of the matter. In that regard, the issue of “routinely suppressing conservative news” remains. When you censor 50% of your second largest user base, it is no longer just a case of free speech or freedom of expression; it becomes an optional case of discrimination, which could have even further-extending consequences. Even as we sit now, there are lawsuits in play; the one from Pamela Geller, a person who only seems to be taken seriously by Breitbart News, is perhaps the most striking of all. Her quote (at http://www.breitbart.com/tech/2016/07/13/pamela-geller-suing-facebook/) “My page ‘Islamic Jew-Hatred: It’s In the Quran’ was taken down from Facebook because it was ‘hate speech.’ Hate speech? Really? The page ran the actual Quranic texts and teachings that called for hatred and incitement of violence against the Jews.” is a dangerous one. It is dangerous because it is in the same place as the Vietnam photo. The fact that this is a published religious book makes it important, and the fact that the book is quoted makes it accurate. The Blaze (at http://www.theblaze.com/stories/2016/01/05/an-israeli-group-created-fake-anti-israel-and-anti-palestinian-facebook-pages-guess-which-one-got-taken-down/) goes one step further and conducted an experiment. The resulting quote is “The day the complaint was filed, the page inciting against Arabs was shut down. The group received a Hebrew language message from Facebook that read, according to a translation via Shurat HaDin, ‘We reviewed the page you reported for containing credible threat of violence and found it violates our community standards’”; the page inciting against Jews was left active. This indicates that Facebook has a series of issues. One cannot help but wonder whether this is merely bias, or the economic footprint the Muslim world has when measured against a group of 8 million Israelis, or perhaps just the population of 16 million Jews globally. With the Aftenposten event, Facebook seems to have painted itself into a corner, and if that holds, several lawsuits could soon force Facebook into a rigorous evaluation and reorganisation of several of its internal and external departments.

Because if content is the cornerstone of social media, the need to keep a clear view of freedom of expression and freedom of speech becomes even more important. In a product that needs growth, that should have been obvious from the start.

There is however a side that is not addressed by anyone. You might get the idea when you see the Guardian quote “News organizations are uncomfortably reliant on Facebook to reach an online audience. According to a 2016 study by Pew Research Center, 44% of US adults get their news on Facebook. Facebook’s popularity means that its algorithms can exert enormous power over public opinion”; the fact is that Facebook might soon be hiding behind the ‘algorithms’ as we see it go forward on a defence relying on its local version of a Defamation Act. In this example I will use the Defamation Act 2005 (Australian law), where we see in section 32:

32 Defence of innocent dissemination
(1) It is a defence to the publication of defamatory matter if the defendant proves that:
(a) the defendant published the matter merely in the capacity, or as an employee or agent, of a subordinate distributor, and
(b) the defendant neither knew, nor ought reasonably to have known, that the matter was defamatory, and
(c) the defendant’s lack of knowledge was not due to any negligence on the part of the defendant.

(2) For the purposes of subsection (1), a person is a “subordinate distributor” of defamatory matter if the person:

(a) was not the first or primary distributor of the matter, and
(b) was not the author or originator of the matter, and
(c) did not have any capacity to exercise editorial control over the content of the matter (or over the publication of the matter) before it was first published.

By relying on algorithms, Facebook could now possibly skate the issue, yet this can only happen if certain elements fall away; in addition, the algorithm will now become part of the case and debate, muddying the waters further still.
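
To show how fragile that defence is, here is a minimal sketch of the conjunctive test in section 32, expressed as code. The field names are my own illustrative labels, not legal terms of art, and this is a reading aid rather than legal advice: the point is that every limb is joined by ‘and’, so a single act of editorial control collapses the whole defence.

# A reading aid for section 32 of the Defamation Act 2005 (NSW): every
# condition is conjunctive ("and"), so failing any single limb loses the
# defence. Field names are illustrative labels, not legal terms of art.
from dataclasses import dataclass

@dataclass
class Publisher:
    first_or_primary_distributor: bool   # s.32(2)(a)
    author_or_originator: bool           # s.32(2)(b)
    editorial_control: bool              # s.32(2)(c)
    knew_matter_defamatory: bool         # s.32(1)(b)
    lack_of_knowledge_negligent: bool    # s.32(1)(c)

def subordinate_distributor(p: Publisher) -> bool:
    """s.32(2): all three limbs must hold."""
    return (not p.first_or_primary_distributor
            and not p.author_or_originator
            and not p.editorial_control)

def innocent_dissemination_defence(p: Publisher) -> bool:
    """s.32(1): all three limbs must hold."""
    return (subordinate_distributor(p)              # s.32(1)(a)
            and not p.knew_matter_defamatory        # s.32(1)(b)
            and not p.lack_of_knowledge_negligent)  # s.32(1)(c)

# A platform that has never exercised editorial control keeps the defence...
platform = Publisher(False, False, False, False, False)
print(innocent_dissemination_defence(platform))  # True

# ...but the moment it curates or censors content, s.32(2)(c) fails and the
# whole defence falls away, which is exactly the risk discussed above.
platform.editorial_control = True
print(innocent_dissemination_defence(platform))  # False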

Hansen does hit the nail on the head when it comes to the issues he raises, like “geographically differentiated guidelines and rules for publication”, “distinguish[ing] between editors and other Facebook users” and a “comprehensive review of the way you operate”. He is not wrong, yet I have to raise the following.

In the first, when you decide to rely on “geographically differentiated guidelines and rules for publication”, you also include rules on who you publish to. This is the first danger for Facebook: its granularity could fall away to some extent, and Facebook advertising is all about global granularity. It is a path Zuckerberg would be very unwilling to skate; open and global are his ticket to some of the largest companies. When this comes into play, ‘smaller’ players like Coca-Cola and Mars could soon find the beauty of moving some of their advertisement funds away from Facebook and towards Google AdWords. I am decently certain that Google will not be opposing that view any day soon.

In the second, “distinguish[ing] between editors and other Facebook users” is only part of the path. You see, when we start classifying the user, Facebook could end up having to classify a little too much, making any such distinction an additional worry in regards to discrimination. Twitter faced that mess recently when a certain picture from one newspaper was allowed and another one was not. That, and the fact that a woman named Molly Wood (her actual name) was not allowed to use her name as her Facebook name, is a matter for another day.

In the third, the “comprehensive review of the way you operate” is very much in play. The cases that Facebook has faced regarding content and privacy are merely the tip of the iceberg. We can all agree that when it is about sex crimes, people tend to notice; I am speculating that this is mostly because of the word ‘sex’. So when I saw a June reference (at http://www.mrctv.org/blog/facebook-censuring-international-stories-about-rapes-muslim-refugees) to Facebook removing a video from Ingrid Carlqvist of the Gatestone Institute, in which she reports a 1,500% increase in rapes in Sweden, I wondered why this had not found the front page of EVERY newspaper in every nation where there is free speech. The Gatestone Institute is a not-for-profit international policy think tank run by former UN Ambassador John Bolton, so not some kind of radicalised front.

In that regard is any kind of censoring even acceptable?

This case is more apt than you might think when you consider the quote we see, even as I cannot give weight to the publishing site. We see “Facebook may have been incited to censor this story by a new European Union push in cooperation with Facebook, Twitter, and Google to report incidents of racism or xenophobia to the authorities for criminal prosecution”, with the by-line “In order to prevent the spread of illegal hate speech, it is essential to ensure that relevant national laws transposing the Council Framework Decision on combating racism and xenophobia are fully enforced by Member States in the online as well as in the offline environment. While the effective application of provisions criminalising hate speech is dependent on a robust system of enforcement of criminal law sanctions against the individual perpetrators of hate speech, this work must be complemented with actions geared at ensuring that illegal hate speech online is expeditiously reviewed by online intermediaries and social media platforms, upon receipt of a valid notification, in an appropriate time-frame. To be considered valid in this respect, a notification should not be insufficiently precise or inadequately substantiated”, which was followed by “No matter why Facebook decided to remove Ingrid Carlqvist’s personal page, it doesn’t lessen the fact that this is another example of their political censorship, and their desire to place political correctness over freedom of the press and freedom of expression”.

Now, this part has value and weight for the following reason: when we consider the earlier move by Facebook to rely on algorithms, the European Commission (at http://europa.eu/rapid/press-release_IP-16-1937_en.htm) gives us ‘is expeditiously reviewed by online intermediaries and social media platforms, upon receipt of a valid notification, in an appropriate time-frame’, which could imply that an algorithm will not be regarded as one of the online intermediaries. That means the human element remains and that Facebook cannot rely on the innocent dissemination part of the Defamation Act, meaning that it could end up in hot water in several countries soon enough.

As parting words, let Facebook take heed of the words of Steven Spielberg: “There is a fine line between censorship and good taste and moral responsibility“.


Filed under IT, Law, Media, Politics, Religion

Censoring – Censor out?

It is 18:11 and my assignments are done. I get one day of rest until the next batch of assignments starts to twitch at the corner of my desk. No rest for the weary, so off to the Guardian I went a moment ago, only to see an interesting article by James Ball. It is about Twitter. The headline ‘Twitter: from free speech champion to selective censor?’ pretty much states it all (at http://www.theguardian.com/technology/2014/aug/21/twitter-free-speech-champion-selective-censor).

It starts with a quote that sounds good, but is actually a statement of quicksand: “The social network’s decision to remove all links to the horrific footage showing the apparent beheading of the photojournalist James Foley is one that most of its users, reasonably, support”.
I actually do not support it, but I understand the action. Why not?

Well, this is all about emotion, which is fair enough, but Twitter has given itself a precedent of censoring. Now, let us be honest, I have nothing against the censoring itself, but Twitter has created a position for itself that will drain resources in many ways.

Why? What about the beheading or execution that comes next? Or other video smut we can all do without? Where will it stop, and how can it be managed?

James Ball actually voices an interesting view I had not considered when he states “the New York Post and New York Daily News’ decision to use graphic stills from the footage as their front-page splashes. Here begin the problems for Twitter: the network decided not to ban or suspend either outlet for sharing the images – despite banning other users for doing the same”, which constitutes discrimination. So, as I stated, Twitter has entered a pool of quicksand, and it will get them deeper into trouble sooner than they realise. That is shown with the quote “Twitter is absolved of legal responsibility for most of the content of tweets. But by making what is in essence an editorial decision not to host a certain type of content, Twitter is rapidly blurring that line”.

So under Common Law, Twitter got themselves in quicksand and hot water all at the same time (aren’t they the efficient Eager Beavers?).

If I go by the NSW Defamation Act 2005, we see a nice escalation in section 32, where it states:

Section 32   Defence of innocent dissemination

(1)  It is a defence to the publication of defamatory matter if the defendant proves that:
(a)  the defendant published the matter merely in the capacity, or as an employee or agent, of a subordinate distributor, and
(b)  the defendant neither knew, nor ought reasonably to have known, that the matter was defamatory, and
(c)  the defendant’s lack of knowledge was not due to any negligence on the part of the defendant.

(2)  For the purposes of subsection (1), a person is a subordinate distributor of defamatory matter if the person:
(a)  was not the first or primary distributor of the matter, and
(b)  was not the author or originator of the matter, and
(c)  did not have any capacity to exercise editorial control over the content of the matter (or over the publication of the matter) before it was first published.

Until now, they had gotten a clean pass and would have kept one had they not made the change they did. Whoever starts a defamation case will now have cause to point to the censoring of the James Foley beheading, and by acting, Twitter gave away the defence of ‘did not have any capacity to exercise editorial control over the content of the matter’, because they did that exact thing. That now gives cause to see the defence of innocent dissemination melt away like snow in the sunshine.

As James Ball points out, the issue that I took offense to last year was the threats against Caroline Criado-Perez, who thought it would be a great idea if Jane Austen became the new face of the 10 pound note. I personally thought it was a brilliant idea. Some small-minded people did not, and as such she got a dose of abuse and threats that was completely beyond belief. It is only one of many cases of bullying, trolling and harassment via Twitter. The quote we see in the Guardian is “Twitter’s strongest, perhaps only, justification for its sluggish and minimal response was that it could only act through its harassment channels, and could not become a curator or editor of content on its site”, which in itself is perfectly acceptable, yet now they have given that option away by acting. Soon, Twitter might be confronted with other abuse and threat victims, and as such their goose gets to be decently cooked (and broiled).

So, either Twitter takes a step back, which would be fair enough, or it becomes a policing entity, which might not be the worst, yet the issues flowing from this choice will haunt them for a long time to come. That in itself seems unfair, but just stepping up to the plate (not arguing how justified it is) will leave them with bruises and scars. I get that it is a consequence of choice, which I do not attack, but how consistently can they actually do this, and more importantly, what issues will they open when they censor something that was lost in translation? How will they fix those mistakes at that point?

I think that they should state that the beheading intervention was a one-off and not interfere again. Not because I want it, but because Twitter seems safer remaining on the side of innocent dissemination, a side that (juridically speaking) it might never be regarded as being on again, simply because the action has already taken place.

So is the censor in for censoring?

That is a question that only Twitter can answer. The emotional decision to intervene in this case was morally right, emotionally correct and decently good, yet this jurisprudential mouse will end up having a slightly too long tail. I wonder whether Twitter considered that option, especially in regards to victims like Caroline Criado-Perez, who did not get the intervening attention they rightfully deserved.



Filed under IT, Law, Media