Tag Archives: Molly Russell

News, fake news, or else?

Yup, that is the statement I am going for today. You see, at times we cannot tell one from the other, and the news is making it happen. OK, that seems rough, but it is not, and in this particular case it is not an attack on the news or the media; as I see it, they are suckered into a false sense of security, mainly because the tech hype creators are part of the problem. As I personally see it, this came to light when I saw the BBC article ‘Facebook’s Instagram ‘failed self-harm responsibilities’’. The article (at https://www.bbc.com/news/technology-55004693) was released 9 hours ago and my blinkers went red when I noticed “This warning preceded distressing images that Facebook’s AI tools did not catch”. You see, there is no AI; it is a hype, a ruse, a figment of greedy industrialists, and to give you more than merely my point of view, let me introduce you to ‘AI Doesn’t Actually Exist Yet’ (at https://blogs.scientificamerican.com/observations/ai-doesnt-actually-exist-yet/), written in part by Max Simkoff and Andy Mahdavi. Here we see “They highlight a problem facing any discussion about AI: Few people agree on what it is. Working in this space, we believe all such discussions are premature. In fact, artificial intelligence for business doesn’t really exist yet”. They also go with a paraphrased version of Mark Twain: “reports of AI’s birth have been greatly exaggerated”. I gave my version in a few blogs before: the need for shallow circuits, the need for a powerful quantum computer. IBM has a few in development and they have come far, but they are not there yet, and that is merely the top of the cream, the icing on the cake. Yet these two give the goods in a more eloquent way than I ever did: “Organisations are using processes that have existed for decades but have been carried out by people in longhand (such as entering information into books) or in spreadsheets. Now these same processes are being translated into code for machines to do. 
The machines are like player pianos, mindlessly executing actions they don’t understand”, and that is the crux: understanding and comprehension are required in an AI, and that level of computing does not exist now, nor will it for at least a decade. Then they give us “Some businesses today are using machine learning, though just a few. It involves a set of computational techniques that have come of age since the 2000s. With these tools, machines figure out how to improve their own results over time”. It is part of an AI, but merely a part, and it seems that the wielders of the AI term are unwilling to learn, possibly because they can charge more, a setting we have never seen before, right? And after that we get “AI determines an optimal solution to a problem by using intelligence similar to that of a human being. In addition to looking for trends in data, it also takes in and combines information from other sources to come up with a logical answer”, which as I see it is not wrong, but not entirely correct either (from my personal point of view). I see it as “an AI has the ability to correctly analyse, combine and weigh information, coming up with a logical or pragmatic solution to the question asked”. This is important: the question asked is the larger problem. The human mind has this auto-assumption mode; a computer does not. There is the old joke that an AI cannot weigh data as he does not own a scale. You think it is funny, and it is, but it is the foundation of the issue. The fun part is that we saw this applied by Stanley Kubrick in his version of Arthur C Clarke’s 2001: A Space Odyssey. 
It is the conflicting orders that HAL-9000 had received; the crew was unaware of a larger stage of the process, and when HAL had to “resolve a conflict between his general mission to relay information accurately and orders specific to the mission requiring that he withhold from Bowman and Poole the true purpose of the mission”, it had the unfortunate consequence that astronaut Poole goes the way of the Dodo. It matters because there are levels of data that we have yet to categorise, and in this the AI becomes as useful as a shovel at sea. This coincides with my hero the Cheshire Cat: ‘When is a billy club like a mallet?’ The AI cannot fathom it because he does not know the Cheshire Cat, or the thoughts of Lewis Carroll, and the less said to the AI about Alice Kingsleigh the better. Yet that also gives us the part we need to see: dimensionality, weighing data from different sources and knowing the multiple uses of a specific tool.
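To make the distinction between blind automation and self-improving machine learning concrete, here is a minimal toy sketch of my own (the flagging rule, the learner and all numbers are hypothetical, not anyone’s product): the “player piano” executes a fixed rule forever, whereas even the simplest learner adjusts its own parameter from feedback.

```python
# Toy contrast: fixed automation vs a learner that improves over time.

# The "player piano": a hard-coded rule that never changes.
def automated_flag(text: str) -> bool:
    return "harm" in text.lower()   # executes blindly, no comprehension

# A minimal learner: a single threshold nudged by feedback.
class ThresholdLearner:
    def __init__(self) -> None:
        self.threshold = 0.5

    def predict(self, score: float) -> bool:
        return score > self.threshold

    def update(self, score: float, label: bool) -> None:
        # Move the threshold a little whenever a prediction was wrong.
        if self.predict(score) != label:
            self.threshold += 0.05 if not label else -0.05

learner = ThresholdLearner()
# Feedback says: scores above roughly 0.3 should have been flagged.
for score, label in [(0.4, True), (0.35, True), (0.2, False)] * 10:
    learner.update(score, label)

print(round(learner.threshold, 2))  # the threshold drifts down toward 0.3
```

Neither of these comprehends anything, which is exactly the point: the learner merely tunes one number against feedback, and neither can tell you why a monkey wrench might double as a hammer.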

You see, a tradie knows that a monkey wrench is optionally also useful as a hammer; an AI will not comprehend this, because the data is unlikely to be there, the AI programmer is lacking knowledge and skills, and the optional metrics and size of the monkey wrench are missing. All elements that a true AI can adapt to: it can weigh data, it can surmise additional data, and it can aggregate and dimensionalise data; automation cannot. And when you see this little side quest you start to consider “I don’t think the social media companies set up their platforms to be purveyors of dangerous, harmful content but we know that they are and so there’s a responsibility at that level for the tech companies to do what they can to make sure their platforms are as safe as is possible”. As I see it, this is only part of the problem; the larger issue is that there are no actions against the poster of the materials, and that is where politics falls short. This is not about freedom of speech and freedom of expression. This is a stage where (optionally with intent) people are placed in danger and the law is falling short (and has been falling short for well over a decade); until that is resolved, people like Molly Russell will just have to die. If that offends you? Good! Perhaps that makes you ready to start holding the right transgressors to account. Places like Facebook might not be innocent, yet they are not the real guilty parties here, are they? Tech companies can do only so much, and that failing has been seen by plenty for a long time, so why is Molly Russell dead? Finding the posters of this material and making sure that they are publicly put to shame is the larger need; their mommy and daddy can cry ‘foul play’ all they like, but Molly’s parents are still left with the grief of losing her. I think it is time we do something concrete about it and stop wasting time blaming automation for something it is not. 
It is not an AI. Automation is a useful tool, no one denies this, but it is not some life-altering reality, it really is not.


Filed under IT, Law, Media, Politics, Science

The A-social network

That is a stage, it is a big stage, and it does not care whether you live or whether you die. So let’s take this to a new level and start with a question: ‘When did you last cause the death of a person?’ I do not care whether it is your mum, your dad, your partner, or your child. When did you cause their death? Too direct? Too bad!

You see, we think that we are innocent; some are programming risk into debt insolvency programs, yet there it is not about the people, it is about the business that needs maximisation. We pride ourselves on compartmentalisation, yet in the end the programmer is just as efficient a murderer as the sniper is. When I look through the sight of a .308 rifle, the sight allows me to go for a target 450 metres away, an optimum distance; the silencer will make it silent enough that anyone more than 4 metres away will not hear a thing, and 450 metres away a person falls to their knees. The chest wound is damaging enough to ensure that the target will be dead on arrival, even if it happens at the entrance of a hospital; for the target it is over. You think this is bad? 

The programmer writes the formula that sets a different strain of insolvency. It is a form of credit risk; as such we get “In the first resort, the risk is that of the lender and includes lost principal and interest, disruption to cash flows, and increased collection costs”. As such the credit firms hire programmers who can stretch the case to lower the risk to the lender, setting the stage where there is an increased option to pay back at much higher cost. In that same way we see programs and risk assessments being created where the facilitators are not at risk, they are not to blame, and they are not to be held accountable. 
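To make that concrete, here is a minimal sketch of the kind of risk arithmetic involved. The expected-loss formula is the textbook credit-risk building block (probability of default × loss given default × exposure); the premium rule and every number are hypothetical, not taken from any real credit firm, but they show how a formula tuned for the lender shifts the cost onto the borrower.

```python
# Hypothetical illustration: a lender-side risk formula tuned so the
# lender's expected loss is recouped through a higher borrower rate.

def expected_loss(pd: float, lgd: float, ead: float) -> float:
    """Textbook building block: probability of default (pd) times
    loss given default (lgd) times exposure at default (ead)."""
    return pd * lgd * ead

def risk_adjusted_rate(base_rate: float, pd: float, lgd: float) -> float:
    """Add a premium covering the expected loss per unit lent.
    The riskier the borrower looks, the more they pay back."""
    return base_rate + pd * lgd

loan = 10_000.0           # exposure at default (whole loan, simplified)
pd, lgd = 0.08, 0.60      # 8% chance of default, 60% of the loan lost if so

el = expected_loss(pd, lgd, loan)         # about 480: lender's expected loss
rate = risk_adjusted_rate(0.05, pd, lgd)  # 5% base plus a 4.8% risk premium

print(f"expected loss: {el:.2f}")
print(f"borrower rate: {rate:.3f}")
print(f"year-one cost: {loan * rate:.2f}")  # versus 500.00 at the base rate
```

The point of the sketch is where the risk lands: the lender’s expected loss is priced in up front, so whatever happens, the facilitator is covered and the borrower carries the weight.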

So here comes Molly Russell, and the BBC gives us ‘Molly Russell social media material ‘too difficult to look at’’. It starts with “The 14-year-old killed herself in 2017 after viewing graphic images of self harm and suicide on the platform”, so what ‘platform’ was that? How much was viewed and what time frame was in play? These are the first questions that arise right off the bat. It is followed by “A pre-inquest hearing on Friday was told not all the material had been studied yet as it was too difficult for lawyers and police to look at for long”; basically, at least two years later lawyers and police are unable to view what a 14-year-old did, and this does not give us the hard questions? So whilst the article (optionally unintentionally) hides behind “The inquest will look at how algorithms used by social media giants to keep users on the platform may have contributed to her death”, the basic flaw is at the very basic level. How did this stuff get uploaded, why was it not flagged, how many viewed it, and, not to mention the small setting, who was the uploading party? Someone gave a 14-year-old the settings and the access to materials that most adults find unwatchable, and I think there are bigger questions in play. 
It is the line “He added certain parts of the material had been redacted and lawyers and police were trying to find out why”; as I personally see it, redaction happens when you need to hide issues, and this becomes an increased issue with “the investigation was seeking the cooperation of Snapchat, WhatsApp, Pinterest, Facebook and Twitter, although until recently only Pinterest had co-operated fully”, as well as “Snapchat could not disclose data without an order from a US court, WhatsApp had deleted Molly’s account and Twitter was reluctant to handover material due to European data protection laws, the hearing was told”. On a personal footnote, Twitter has been on a slippery slope for some time, and the deletion by WhatsApp is one that is cause for additional questions. As I see it, these tech giants will work together to maximise profit, but in this, is the death of a person the danger that they cannot face, or will not face, in light of the business setting of profit? Even as I am willing to accept the view of “Coroner Andrew Walker said “some or all” of those social media companies could be named as interested parties in the inquest as they would be “best placed” to give technical information for the case”, are they best placed, or are we seeing with this case the setting where social media is now the clear and present danger to the people, for the sake of extending profits to the largest margin available?

That is a direction you did not see coming, is it?

We have never seen social media as a clear and present danger, but in the case of Molly Russell that might be exactly what we face. There is every indication that she is not the only case, and it is possible that the redactions would optionally show that.

Yet in all this, the origin of the materials and how they were passed through social media remains a much larger issue. I wonder how much the inquest will consider that part. You see, for me, I do not care. I am sorry, the picture of the girl in the BBC article is lovely, she is pretty, but I do not care. It is cold, yet that is what it is. In Yemen well over 100,000 are dead and the world does not seem to care; as such, I need not care about one girl. But the setting, the setting I do care about. It is not about the one case: under 5G, when the bulk of the people will get drowned in information and all kinds of movies, one girl will end up being between 8 and 20 people. The setting is larger, 5G will make it so, and if you doubt that, feel free to wait and watch the corpses go by.

Suddenly sniping seems such a humanitarian way to pass the time, does it not? 

We need to consider that one process influences another; as such the process is important. Just like the processes risk assessors write to lower risk, what goes one way also has the ability to go the other way. This translates into ‘What would keep Molly Russell with us?’ now implying a very different thing; it sets the stage of a lot more. It is not merely who messaged Molly Russell, it becomes what else was sent to Molly Russell on WhatsApp, so suddenly the deletion of her account does not seem that innocent, does it? It goes from bad to worse when you consider how social media links, and how links and usage are transferred. Like footprints the links go from one to the other, and no one has a clue? It is in my personal view more likely that they all have a clue and for the most part it is extremely profitable; Molly Russell is merely a casualty of circumstance. So under 5G, when it is not 1 but up to 20 times the victims, what will happen then?

I will let you consider that small fact, the setting where your children become the casualty of margins of profit, until death deletes the account. Have a great day!



Filed under IT, Law, Media, Politics, Science

The way of cowards

This is not the first message we see in the news and it will not be the last. We see the everlasting rumble of facilitation and the need to sweep the actions of others under the carpet, never holding them to account. Last week many in the UK were given ‘Instagram bans ‘graphic’ self-harm images after Molly Russell’s death‘; the article (at https://www.theguardian.com/technology/2019/feb/07/instagram-bans-graphic-self-harm-images-after-molly-russells-death) gives us a scenario that should kick us all into action, yet not in the way that some believe is the right one.

Even as we saw: “After days of growing pressure on Instagram culminated in a meeting with health secretary Matt Hancock, the social network’s head Adam Mosseri admitted that the company had not done enough and said that explicit imagery of self-harm would no longer be allowed on the site“, we should be angered by the words of Adam Mosseri, yet we are not. The image in this is not as simple as it is given, but it should be. Two days ago we saw ‘Instagram urged to crack down on eating disorder images‘ (at https://www.theguardian.com/technology/2019/feb/08/instagram-urged-to-crack-down-on-eating-disorder-images), where we get the quote: “The Guardian has discovered thousands of hashtags and accounts promoting anorexia, including diaries of weight loss, alarming pictures and comments on goal weights“ as well as the advice “Please don’t report, just block,” and that is also the first path where the solution is found. It should instantly apply to Instagram, Facebook, Twitter and all other forms of social media.

The simple solution

You as the poster are responsible for the content you post, you can be prosecuted and sued if need be, if a case goes to court all data and information of the account, as well as its posting history will be made available to the prosecuting parties. You are responsible for the created account and the content posted through it.

It is this simple: those who are on that path of chaos and anarchy must bear the responsibility for the impact. No matter your age, ‘I did not know’ is not a valid defence in court. Your life is over, no tertiary education (the fast food industry always needs fresh blood).

It is time that we stop facilitating social media growing their numbers any way they can. Even as the death of Molly Russell is out now, we need to realise that the matter is worse than: “But critics said the changes should have already been made and remained skeptical they would be enough to tackle a problem that some said has grown unchecked for 10 years“. Political inaction and facilitation are a direct cause here and it is time to stop fretting and apply every brake we can. Beyond the measure ‘including the removal of non-graphic images of self-harm‘, the poster needs to be dealt with. In cases of self-harm it might have meant that the proper people talked to Molly Russell immediately, which now implies that Molly Russell could have been alive today if action had been taken earlier. Those who posted fake alerts might find themselves prosecuted, their equipment seized, and they can revert to spending hours reading, their library card carrying a clear “no internet access” stamp. There needs to be a price for the damage inflicted. The response ‘I thought it was fun!‘ will not hold water; we have given enough leeway for the longest of times and we need to realise that the parents are often not blameless either.

Dangerous message!

So as we are given: “young people also faced being confronted with pro-anorexia images”, we need to be extra alarmed. So when we are confronted with that slogan, how can this be seen as “an ascetic Journey“? If we look at ascetic we see “characterized by severe self-discipline and abstention from all forms of indulgence, typically for religious reasons“, yet most of the younger people will have assumed that they meant aesthetic, which means “concerned with beauty or the appreciation of beauty”, what I would call miscommunication through words that sound alike. You see, “abstention from all forms of indulgence“ does not include refusing to eat what your body requires to stay healthy, because the message bringer was pretty clear about remaining in the dark as to what constitutes indulgence, and whilst the element of “more than is good for you” is ignored, we see the sliding scale of danger to that person’s health. So even if we agree with “There is a social obligation and whether there is also an industry obligation is an important point that is coming out at the moment as well”, we see that in the end the poster is not held to account, and whilst we look at the statement on images, it is clear that there is every chance that the slogan is kept online, which is more dangerous, as slogans can become memes in the mind of the troubled person, hammering second after second until they grab hold in daily life. The damage is done!

When we set into law the prosecution of the poster, we also take a first step towards resolving the state of cyber-bullying. These cowards are hiding in the shadows, feeling that they are having fun, yet when the data becomes available for prosecution because they can no longer delete their activities, we see the impact of their fear reversed: we enable the bullied to go after the bullies. These people will now step into the spotlight, and they tend not to like that at all.

All elements solved by properly holding the poster to account, and that is what most social media fear, because when accountability comes into play, posts decline by well over 30%, and that is the fear of social media. To be made responsible is also to be made less flammable, and social media grows with every online flame; it is a consequence of participation, and when there is an emotional flame everyone wants to participate and have their say in it all.

It is Jade (19) who gives us more in the Guardian, who from age 11 knew the experience of: “When my eating disorder and depression were at their worst, I scoured apps like Instagram to find these images which only worsened my self-image. At this time the posts were few and far between. Clearly the amount of images is now vast across almost all social media platforms,” Now we can understand that it is not the fault of social media that people ignore age requirements, yet this is the common issue that has been around for too long. So when we see “It isn’t only Instagram that is riddled with these potentially distressing images, sites or apps like Tumblr, Pinterest and Weheartit are also full of these posts.”, we see the stage where the poster needs to be held to account, the stage that has been avoided for a decade, and all the players know that they have been avoiding it. Now there is a new trend, the image of cutting; even as some sources frame it as being about the dream, about “Cutting oneself indicates family problems“, it is now linked in several ways to self-harm, and as such the picture becomes less and less transparent to resolve. Yet the first option, holding the poster to account, is still there, and this path has been avoided for close to a decade; the question becomes: why?

Age is no longer a valid point; the transgressors had no issues lying about their age, as such they need to directly feel the impact as they throw away their lives. It puts them and their parents in the picture, and it needs to become about this, as overworked parents rely on giving their child a tablet or mobile as a toy so that the child stays quiet, all a replacement for the failure of raising a child (in some cases). In other cases it is the lack of discipline and peer pressure; it has to stop, and holding the poster to account has become an essential first step. There is a secondary need to do this: we see in some parts of the world how social media is used to spread extremism (Indonesia); how long until they start looking for tools to do their work for them? How long until we start seeing the impact of “extremist network Jamaah Ansharut Daulah (JAD), which has pledged allegiance to Islamic State (IS)“, via a fictive 17-year-old boy named Kevin living in Springfield (IL) or Richmond (VA)? He’ll tell you that they gave him a cool video game for promoting and retweeting something he could not read, and his classmates all did the same because Kevin got a really cool video game; that was money in the bank. For the JAD in the end it would have been money in the bank: all that visibility for $59 (plus shipping); Google Ads could not have given them a better deal ever. The federal investigation teams will be unable to untangle that mess for months; the perpetrators will have moved on weeks before.

That is how I see it!

We need to change gears on all social media fronts, and holding the poster to account is a first step. Removing dangers from people like Molly Russell is a first, but it goes beyond that. Even when we see the sceptical foundation of: “Speaking on BBC Radio 4’s PM programme, the digital minister, Margot James, said the government would “have to keep the situation very closely under review to make sure that these commitments are made real – and as swiftly as possible””, people like Margot James and her various international counterparts need to realise that it is way too late for ‘keep the situation very closely under review‘; it is over half a decade too late already. We need to change gears and make a first step towards holding posters accountable for what they post; when it results in fatalities, freedom of expression will not hold water, and even if the court decides to do just that, the people have a right to know who that poster was. It gets even worse when we consider the factor that Apple played in all this. Their part is less easy to see, because privacy is set and at times privacy is just that, nobody’s business, yet when it results in the death of a 14-year-old and a cyberbully was behind it all? Should Apple be allowed to protect the identity of the murderer? It is not an easy matter and some drawers should justifiably be kept closed, yet the image still remains, and that too is a moment where the poster could have been held accountable, and holding them to account might have stopped a worse matter earlier on. It was not to be the case.

I believe that dozens of lives could have been saved if political players had acted a lot earlier and a lot more decisively.



Filed under Media, Politics