Tag Archives: IBM

Media rigging

We have had issues, massive issues, for the longest of times. Now we can focus on the blatant transgressors, or we can focus on the exceptions, the examples of good journalism like the Guardian, the Independent, the NY Times, the Washington Post, the Times and the Financial Times (the Australian and non-Australian editions), yet the founding flaw is actually larger.

You see, journalism has become an issue in itself. Whatever people and participants thought it was in the 70’s is no longer the case. Perhaps it never was. In my view, journalism is no longer merely about ‘exposing’; it is about partially revealing whilst mediating the needs of the shareholders, the stakeholders and the advertisers, which makes it a very different issue. It is there that I did not just have my issue with Microsoft; in that same setting the hands of Sony are equally tainted. They are the two visible ones, but that list is distinguished and very long. So as we see overcompensation, we see it on both sides of the equation, not giving it a level of equilibrium, but an exaggerated level that is grossly unsettling.

In this we have two articles. The first is directly linked to what I have been writing about, so let’s start with that. The Washington Post (at https://www.washingtonpost.com/news/the-switch/wp/2018/04/16/thousands-of-android-apps-may-be-illegally-tracking-children-study-finds) gives us ‘Thousands of Android apps may be illegally tracking children, study finds’. Now, I am not convinced that this is all limited to Android, but that is a personal feeling that has not been met with in-depth investigation, so I could most certainly be wrong on that count. The issue is seen with “Seven researchers analyzed nearly 6,000 apps for children and found that the majority of them may be in violation of the Children’s Online Privacy Protection Act (COPPA). Thousands of the tested apps collected the personal data of children under age 13 without a parent’s permission, the study found“. As this had been going on for years and I reported on it years ago, I am not at all surprised, yet the way that this now reaches the limelight is an issue to some degree. I am unaware what Serge Egelman has been doing with his life, but “The rampant potential violations that we have uncovered points out basic enforcement work that needs to be done“ was not a consideration in 2010, or 2009, so why is it an issue now? Is it because Osama Bin Laden is dead now (intentionally utterly unrelated)? There has been a freedom of action, a blatant setting of non-investigation, for close to a decade, and it is now more and more clear that the issue was never ‘not there’. In February 2016 we saw (unfortunately through the Telegraph) “The security flaw in Fisher-Price’s Smart Toy Bear meant access to a child’s name, date of birth and gender could have been easily accessed. The researchers at Rapid7, a Boston-based security company that spotted the defect, said the toy could also be hijacked to give a malicious actor control over account data and in-built functions“, so this is not new. The fact that it was the Telegraph who brought it does not make it false. And yes, I did bite my tongue to prevent the addition of ‘in this case‘ to the previous line. In addition we see (at http://www.dickinson-wright.com/news-alerts/legal-and-privacy-issues-with-connected-toys) that law firm Dickinson Wright has been on the ball since 2015, so how come the media is lagging to such an extent? Like me, they saw the rain come, and in their case it is profitable to be aware of the issues. So with “Since 2015 the technology and legal implications regarding these types of toys has only grown as the market now includes smart toys, such as Talk-to-Me Mikey, SmartToy Monkey, and Kidizoon Smartwatch DX; connected toys, such as SelfieMic and Grush; and other connected smart toys such as Cognitoys’ DINO, and My Friend Cayla“, they again show themselves to be ahead of the curve, and most of the media lagging to a much larger degree. Did you think that this was going to go away by keeping quiet? I think that the answer is clearly shown in the Post article. The most powerful statement is seen with “The researchers note that Google has worked to enforce COPPA by requiring child app developers to certify that they comply with the law. “However, as our results show, there appears to not be any (or only limited) enforcement,” the researchers said.
They added that it would not be difficult for Google to augment their research to detect the apps and the developers that may be violating child privacy laws“. In this we see two parts, and the first is that the call of data value tends to nullify ethics to a much larger degree. The second is that I do not disagree with ‘it would not be difficult for Google to augment their research‘; I merely think that the people have not given Google the right to police systems. Can we hold Microsoft responsible for every NBA game that collects the abilities of users on that game? Should Microsoft police Electronic Arts, or 2K for that matter? The ability does not imply ‘to have the right’. Although it is a hard stance to take, we cannot proceed from the premise that all software developers are guilty by default; it is counterproductive. Yet in that same light, those transgressors should face multi-million dollar fines to say the least.
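As a purely illustrative aside on that ‘augment their research’ remark: a first pass does not need to be exotic. The sketch below is my own invention, not Google’s enforcement tooling and not the researchers’ actual pipeline; every SDK and app name in it is made up. It merely shows how child-directed apps that bundle known tracking SDKs could be flagged for human review.

```python
from typing import Dict, Iterable, List, Set, Tuple

# Hypothetical list of tracking SDK package names; invented for illustration only.
KNOWN_TRACKER_SDKS: Set[str] = {"com.example.adsdk", "com.example.analytics"}

def flag_possible_coppa_issues(apps: Iterable[Dict]) -> List[Tuple[str, List[str]]]:
    """Return (app name, trackers found) for child-directed apps bundling known trackers."""
    flagged = []
    for app in apps:
        trackers = KNOWN_TRACKER_SDKS.intersection(app["sdks"])
        if app["child_directed"] and trackers:
            flagged.append((app["name"], sorted(trackers)))
    return flagged

catalogue = [
    {"name": "Toddler Puzzles", "child_directed": True, "sdks": {"com.example.adsdk"}},
    {"name": "Tax Helper", "child_directed": False, "sdks": {"com.example.analytics"}},
]
print(flag_possible_coppa_issues(catalogue))  # -> [('Toddler Puzzles', ['com.example.adsdk'])]
```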

The final quote is a good one, but also a loaded one. With “Critics of Google’s app platform say the company and other players in the digital-advertising business, such as Facebook, have profited greatly from advances in data-tracking technology, even as regulators have failed to keep up with the resulting privacy intrusions” there is a hidden truth that also applies to Facebook. You see, they merely facilitate giving the advertiser the best value for their advertisement (like AdWords), yet the agency or advertiser only benefits from using the system. Their ad does get exposed to the best possible audience, yet the results they get back in AdWords are totally devoid of any personal data. So the advertiser sees gender, age group, location and other data, but nothing that personally identifies a person. In addition, if the ad is shown to an anonymous browser, there will be no data at all for that case.
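To make that aggregation point tangible, here is a minimal sketch of the idea; it is my own illustration of grouped reporting with invented field names, not the actual AdWords reporting pipeline.

```python
from collections import Counter
from typing import Dict, Iterable, Tuple

def build_report(events: Iterable[Dict]) -> Dict[Tuple[str, str, str], int]:
    """Aggregate ad impressions into grouped counts; no row ever identifies a person."""
    report: Counter = Counter()
    for event in events:
        if not event:                      # an anonymous browser contributes nothing
            continue
        key = (event.get("gender", "unknown"),
               event.get("age_group", "unknown"),
               event.get("location", "unknown"))
        report[key] += 1                   # only the grouped count survives
    return dict(report)

events = [
    {"gender": "f", "age_group": "25-34", "location": "Sydney"},
    {"gender": "f", "age_group": "25-34", "location": "Sydney"},
    {},                                    # anonymous session: dropped entirely
]
print(build_report(events))                # {('f', '25-34', 'Sydney'): 2}
```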

So yes, data-tracking gives the advantage, but the privacy intrusions were not instigated by either Google or Facebook and as far as I know AdWords does not allow for such intrusions; should I be wrong then I will correct this at the earliest opportunity. Yet in all this, whilst everyone is having a go at Facebook, the media is very much avoiding Cambridge Analytica (minus one whistle-blower), other than to include them in speculations like ‘Cambridge Analytica appears to have an open contract‘, ‘Was it Cambridge Analytica that carried the day for Kenyatta‘ and ‘could have been shared with Cambridge Analytica‘. It almost reads like ‘Daily Mail reporter Sarah Vine might possibly have a vagina‘, which brings us to the second part in all this.

Invisibly linked

For the first time (I think ever) did I feel for a reporter! It was not what she said or how she said it, it was ‘Daily Mail fires reporter who inadvertently published obscenity‘ (at https://www.theguardian.com/media/2018/apr/16/daily-mail-removes-obscene-language-attack-on-reality-tv-stars). Now it is important that we consider two parts. The first is the blatant abuse of ‘political correctness‘, which has been putting the people at large on their rear hooves for way too long, which might also be the reason why comedians like Jimmy Carr are rising in popularity in a way we have not seen since Aristophanes wrote The Frogs in 405BC. My issue starts with “Daily Mail Australia has fired a reporter who accidentally uploaded her own “musings” about reality television contestants being “vapid cunts” on to the news website on Sunday“, so the Daily Mail does not have a draft setting that needs to be approved by the editor; no, it gets uploaded directly, commendable as that might be. We also see “Sources at the Daily Mail earlier said the young reporter was “mortified” by the mistake“, whilst the lovers of the TV series Newsroom saw a similar event happen in 2014; the fact that reality catches up with comedy and TV series is not merely fun, the fact that this happened in the heralded ‘Newsroom‘ should be seen as a signal. As we see “The Daily Mail reporter was writing in a Google document because of problems with the content management system and she inadvertently cut and pasted a paragraph about Bachelor in Paradise contestant Florence Alexandra which she says was written for her own eyes only, Guardian Australia understands”, it is not merely about who wrote it; the content management system itself was flawed. We also see “The reporter had filed no fewer than five stories on Sunday and four on Monday, which is a normal workload for a Daily Mail journalist. It is customary for Mail reporters to upload their own copy into the system unless the story is legally contentious“. So even as we accept that the pressure was on, the system was flawed and there was a lot of truth in her writing, and all this about a Dutch model whose fame seems to be limited to being ‘not ugly‘. So as the Daily Mail was happy to get her bum-shot and label it ‘wardrobe malfunction’ (9th September 2017), whilst in addition there have been no other transgressions, she was quite literally thrown to the wolves and out of a job. So when we do see the term ‘vapid cunts‘ (with the clever application of ‘vapid’), did the editorial consider that the term might have meant ‘a bland covering of the green envious setting of finding love and overcoming rejection‘, which we get from ‘vapid = bland‘ and ‘vagina = a sheath formed round a stem by the base of a leaf‘?

You see, in the end, this is a paper covering a reality show, a fake event created to entice an audience away from living a life, wasting an hour on seeing something fake whilst they could have sought it out for real. In all this the overworked journalist gets the axe. So even if I feel a little for the journalist in this case, and whilst we see that the audience replied with ‘Refreshing honesty from the Daily Mail this morning‘, which should be a real signal for the editor in charge, no, he threw it all out to hopefully avoid whatever would come next.

You see, even if it is not now, there are enough issues around, which means that Leveson 2 might be delayed, but will still most likely happen. So even as the Telegraph is already on the ‘would be a threat to a free press‘ track, whilst trying to drown the reader with ‘The first Leveson inquiry cost taxpayers £5.4 million, yet the legal bill for the newspaper industry to comply with the process was far more than that‘, some journalists were up to their old tricks even before the Leveson ink dried. So in this, the moment that Leveson 2 does happen, their clean desks will not be because some journalists tried to keep them clean, it will be because they were told to leave. Those who see Leveson 2 in relation to ‘undermining high quality journalism‘ seem to forget that high quality journalism is a thing of the past. It perhaps ended long before John Simm decided to portray a journalist in the excellent ‘State of Play‘. In all this there will be a massive blowback for the media at large; the moment it does happen, I have every intention of getting part of it set as an investigation of news that would have been considered ‘mishandled’. There is at large enough evidence that the Sony event of 2012, the Microsoft events of 2012, 2013, 2014 and 2017, as well as IBM in 2015 and 2017, were handled in exactly that way. There have been too many events that were somehow ‘filtered’. In addition to that there are not merely the data breaches; there are strong indications that the media at times merely reported through the act of copy and paste, whilst not looking deeper into the matter. Tesco, the North Korean Sony ‘Hack’ and a few other matters should be dug into, as there are enough indications that events had faltered, and faltered might be seen as the most positive way to define an event that should be seen as utterly negative.

In my view, as some editors and shareholders will try to navigate around the term journalist, I would be on the horse of removing that word altogether and having those papers be subject to the full 20% VAT. I wonder how they will suddenly offer to (again) monitor themselves. Like that was a raging success the first time around. It is, as I see it, the price of not being held to any standards, apart from the reaction to two unintended words, which is in my view a massive overreaction on several levels. I wonder why that was and who made the call to the editor on that, because I don’t think it was merely an overreacting Dutch model. In that I am decently convinced that she has been called a hell of a lot worse, the side effect of trying to be a ‘social media selfie darling’. Yet that is merely my point of view and I have not always been correct.

 

Leave a comment

Filed under Finance, Law, Media, Politics

Direction X

It is the Columbian (at http://www.columbian.com/news/2018/apr/15/harrop-facebook-wont-alter-its-lucrative-practices-without-regulations/) that gives us a light to work with today. A light that some US congressmen and senators have been pushing for, so it is fun to have a go at that point of view. Now, do not mistake my opposition to it as a way to invalidate the view. I do not agree with the point of view, but many have it. So I see it as a way to inform the readers on the things that they need to know. Froma Harrop starts with three events. We see:

  • Mark Zuckerberg in 2006: “We really messed this one up. …We did a bad job of explaining what the new features were and an even worse job of giving you control of them.”
  • Zuckerberg in 2010: “Sometimes we move too fast. … We will add privacy controls that are much simpler to use.”
  • Zuckerberg early this year: “It was my mistake, and I’m sorry. … There’s more we can do here to limit the information developers can access and put more safeguards in place to prevent abuse.”

Now, they are valid events, but the dimensionality is missing. With the exception of certain Google products, Facebook has been the biggest evolving platform on a near daily basis, the integration with mobile apps, mobile reporting, stories, clips, annotated pictures, travelling, and so much more. Over a period of 10 years Facebook went from a dynamic page (for each user or group) to a collected omnibus of information available to all their friends. That is a level of growth that even Microsoft has not been able to compete with and in all this, there will always be mistakes. Some small and trivial and some will be bang up monsters of flaws. Compare this to Microsoft who did not push forward with its Xbox360, no it offered for sale a more powerful machine whilst trimming the functionality down by close to 20% (personal projected loss) with the shift from Xbox360 to Xbox One and Xbox One to Xbox One X. A data collecting machine of greed (whilst everyone is ignoring the data that Microsoft is uploading), pushing users like a bully, to do what they wanted the user to do or be left out. So when exactly did Facebook do that to that degree? Sony with its PlayStation at least pushed forward to some degree.

Froma makes a nice case with: “The law will require them to obtain consent for use of personal information in simple language. (Users shouldn’t have to take a night course to understand privacy and security settings.)“. This is nice in contrast to some consoles (like the Sony consoles) whose terms of service suddenly made it illegal to use second hand games on their consoles; they quietly backed away when the same idea blew up in the face of Microsoft. In all this, with my sense of humour and realising where this article was, it was not without a giggle that I took a look at the Columbia Journal of European Law (at http://cjel.law.columbia.edu/preliminary-reference/2017/mind-the-gap-loopholes-in-the-eu-data-privacy-regime/) where we see “any set of information relating to individuals to the extent that, although the information is not processed by means of equipment operating automatically in response to instructions given for that purpose, the set is structured, either by reference to individuals or by reference to criteria relating to individuals, in such a way that specific information relating to a particular individual is readily accessible“, which now leads to “This language of “specific information [that] is readily accessible” indeed was interpreted by the English courts in a manner conflicting with the Directive. In Durant v. Financial Services Authority, the English and Wales Court of Appeal formulated a two part test to evaluate whether a filing system is caught by the Directive:” and that now leaves us with “(i) [T]he files forming part of [the filing system] are structured or referenced in such a way as clearly to indicate at the outset of the search whether specific information capable of amounting to personal data [] is held within the system and, if so, in which file or files it is held; (ii) [The filing system] has, as part of its own structure or referencing mechanism, a sufficiently sophisticated and detailed means of readily indicating whether and where in an individual file or files specific criteria or information about the applicant can be readily located.“

So in that case Froma is left with a piece of paper to be stationed where the sun does not shine, and it merely took the case Durant v. Financial Services Authority to show its ‘lack‘ of complexity, or did it? She is right that ‘Users shouldn’t have to take a night course to understand privacy and security settings‘; it merely took law lord Sir Robin Ernest Auld (a former Lord Justice of Appeal in the Court of Appeal of England and Wales) a hell of a lot more than a night course, more like 25 years as a lawyer and on the bench as a judge, and his ascension to lord justice of the appellate court, to get it all figured out.

So as we get that out of the way we also need to look at “The companies will have to notify users of a data break-in within 72 hours of its discovery. They’ll have to give up monopoly control of the personal information; people will have the right to obtain a copy of their data and share it with others“; it took Sony a hell of a lot longer to figure out that they were breached and notify people. So now consider the breaches of Equifax (143 million), eBay (145 million), Yahoo (3 billion) and Target stores (110 million). The implication of alerting that many people is not just weird, it is actually dangerous, as people tend to overreact, do something stupid and lock their accounts; these 4 events alone could set the stage for some 3.4 billion locked accounts. The entire 72 hours, whilst the discovery does not guarantee that the intrusion has been stopped, opens the entire system up for all kinds of hackers to have a go at that victim and truly make a much bigger mess of it all. Now, the people should be informed, but the entire 72 hours was (as I personally see it) pulled out of a hat. In all this the latest Facebook issue was not done by hackers, it was done by corporations who intentionally abused the system; they knowingly set their profit at the expense of the users of that system, and exactly who at Cambridge Analytica is currently under arrest and in prison? It seems to me that Facebook, clearly a victim here, has made mistakes, yet the transgressors are not held to vigorous account, whilst the maker of the system is. Now, let’s be clear, Mark has clearly some explaining to do. Yet, when we see “Facebook failed in an attempt to get a handle on the Cambridge Analytica scandal Monday, after British authorities ordered its auditors to vacate the political consultancy’s offices” (source: Fortune), all this whilst the offices of Cambridge Analytica ended up being raided 5 days later, I have never seen authorities giving bank robbers that level of leeway, so why was this level of freedom given to Cambridge Analytica? When we consider that this data could be transplanted to writable media (Blu-ray) in mere hours, it seems to me that giving them 5 days to wipe the evidence is a lot more questionable than merely thumping Facebook for the flaws.
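For what it is worth, the quick sum below (my own arithmetic, using the breach figures exactly as quoted above) shows where that locked-account estimate lands.

```python
# Figures as quoted in this piece; the sum is the point, not the precision.
breaches = {"Equifax": 143_000_000, "eBay": 145_000_000,
            "Yahoo": 3_000_000_000, "Target": 110_000_000}
total = sum(breaches.values())
print(f"{total / 1_000_000_000:.2f} billion accounts across these four breaches")
# -> 3.40 billion accounts
```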

The one part I truly disagree with is “Many of us have a need to connect and share. But expecting much privacy in a business model that relies on selling your information is highly unrealistic“. You see, here we see two levels of privacy: that which the person shares of their own free will, and that which is accessed. In one part the privacy from the outside is partially an easy thing, because Google with AdWords has shown that to be a clear option; their advertisers can create and address a population to the granularity available, yet the results of this marketing come back at a level of aggregation, and individual records per person are not available. The fact that apps could capture it was a given, but the fact that all unique identifiers were optionally accessible was kept in the shadows, and that is where Cambridge Analytica worked. Now, this is a generalisation, but it fits the overall issues. Facebook could have done better, yet it was massively naive when it thought that the paying corporations would not try to get their fingers on EVERY part they could. In that I wonder what data the insurance companies in the end got a hold of.

So when I see “Tech investor Jason Calacanis has set up a contest — the Open Book Challenge — to create a Facebook replacement. Finalists will be given $100,000 and residence in a 12-week incubator“, and we see it in the light of “Facebook has delivered Zuckerberg a net worth of over $60 billion”, this must be the easiest pickings for Jason Calacanis that any entrepreneur has ever been a part of. It is like the pyramid games after 15 rounds, whilst the top person stayed on top never having to pay more than 0.0001% of the total earnings; not even Las Vegas in its wildest times offered such odds.

So I am very much against regulations; they are merely a way for governments to get a hold of that data. Now, I am not against that if it truly serves national security, but the fact that actual criminals and terrorists use such systems to elude identification and strike from a distance merely makes it a waste of time, and most analysts know this. Now, we also know that when we know where exactly to look, Facebook could reveal stuff, but to trawl those billions of accounts to optionally find merely one person is an extremely bad application of time management.

In the end, the one additional part I liked was Zuckerberg stating “It was my mistake, and I’m sorry. I started Facebook. I run it. And I’m responsible for what happens here”. I like it because of the realisation that in all the bungles of IBM in the last 30 years, especially the PS/2 range, at what point did any of them stand up and tell their consumers that they screwed up? Especially in light of the setting that the average Model 80 (80386) computer was 400% more expensive at merely 28% of the power of a Taiwan clone; in addition, the onboard clock battery gave the user more headaches than a hammer, and the graphical underperformance offered should be forgotten at the drop of any hat.

So in this Zuckerberg kept his head high, and in all this the entire setting of data abuse is still not addressed by either the US or UK government. In all this there is absolutely no indication that the abusers will be facing punishment or prison, so in all this the law failed the people a lot more than Facebook ever did, especially in the light of the fact that issues like this have been going on for years, but we do not get to read that part, do we?

 

Leave a comment

Filed under IT, Law, Media, Politics, Science

Identity denied

There are moments when we resort to other ways of expressing ourselves; it is in our nature to find alternatives to the story, so that we can tell the story. Nearly every person does it. Sometimes we ask ‘would you take that extra pastry?‘ instead of telling someone that we really feel like having another pastry. So when it comes to social media, we see not ourselves, but the person we want to be. We want to own the Hall of Faces (Game of Thrones) where we can mask ourselves with the identity of a dead person, or, like Ethan Hunt in Mission: Impossible, walk in sounding like the person we are not, because we do not like ourselves in that particular moment. So when we look at Facebook, are we thinking of the Hall of Faces? In light of all that was revealed, are we in a stage where we prefer to be someone else?

You see, the shit is on the walls, as some would say. Mark the Zuckyman did the right thing; he stood up (after a few days of silence) and held himself responsible, and now we are all over him as if he is the culprit, but is he truly guilty? We see all kinds of articles on Facebook, like ‘You’ve decided to delete Facebook but what will you replace it with?‘ (at https://www.theguardian.com/technology/2018/mar/31/youve-decided-to-delete-facebook-but-what-will-you-replace-it-with); even after a week this is still highly valid, because for millions of Facebook’s billions of users it has yet to sink in. Go to WhatsApp? Instagram? Both are owned by Facebook, so where does that leave you? So when we try to trivialise it with #DeleteFacebook, we need to realise that this is new territory. We now talk about the Social Media Landscape and it is not small. It is huge and, most importantly, this is the first true social media generation. We were not ready, and I have been trying to explain that to people for nearly 3 years. Now we see overreactions, whilst sitting down and contemplating it all was never an option. The law was missing it, as it is more interested in facilitating commerce, exploitation and profit (Sony and Microsoft are nice examples there). Human rights are failing, because the issue of digital rights is only seen in relation to commerce, not in relation to privacy; in this the entire Google and the people’s right to be forgotten is merely a reason to giggle, a Google giggle if you prefer.

The article has one funny part. With “For those determined to exit the Facebook ecosystem, the best approach is more likely to be a patchwork of sites and apps that mirror individual features. Messaging is the easiest: apps such as Telegram and Signal offer messaging and group chats, as well as voice calls, with encryption to keep your communications private. Telegram even has a thriving collection of chatbots, similar to Facebook Messenger“, you see, it is done on a smartphone (mostly), so you could consider dialling a person and having a conversation; your mum, if she is still alive, is not the worst choice. You see, the plain point is where you end up. So when we see “Part of Vero’s appeal to Facebook deleters is its determination to be ad-free. It is planning instead to start charging a small annual subscription at some point“, you see these people designed it for wealth (as one would), so where are they getting the money? The small annual subscription does make sense, but in light of that you had better remember where all your data is, and even as we see an ‘emphasis on privacy‘ we need to realise that there are clear situations where the word privacy is open to suggestion. What people forget is that ‘The boundaries and content of what is considered private differ among cultures and individuals, but share common themes‘, so are their settings of what is private the same as yours? Also, when they sell their company for a mere 2 billion, make no mistake, the word privacy is not open for debate; it will be whatever the new owner decides it to be. This is merely one side of data, as data is currency. That is what I have been trying to explain to nearly everyone (for 5 years now) and they all shrugged and stated, ‘it’ll be right‘, so is it right? Is it all right now? If you are considering becoming a member of the growing party of #DeleteFacebook, it clearly was not.

So when we are treated to ‘News of Facebook’s secret tool to delete executive messages caps days of chaos‘ (at https://www.theguardian.com/technology/2018/apr/06/facebook-using-secret-tool-to-delete-messages-from-executives) we see another part of Facebook, we see new uproar. The question is whether this is justified. You see, when we see “the company has a two-tiered privacy standard (one for executives, one for everyone else) and over its use of facial recognition software“, in most cases this makes perfect sense. Corporate executives tend to be under scrutiny a lot, and as that sometimes is valid, they still have a job to be done. I was amazed at how many people Mark Zuckerberg was connected to in the beginning of Facebook. It was awesome and cool, but I reckoned that it was not always constructive to productivity. I have been in places where the executives had their own server for a number of reasons, mostly HR reasons, and whether it is valid or not, it is a corporate decision; in that light I am not amazed. Only when I was doing work for Google was I on a system where I could see everything and everyone, including what I thought was the board of directors. Here is where it gets interesting, because Google has what we refer to as a true open system for all who work there. It is invigorating to get access to so much information, and my first night was me dreaming of combining things: what if we did ….. and ….. would we then be able to …..? It was exhilarating to feel that rush of creativity, in areas where I had no skill levels to boot. With a ‘closed’ system like Facebook, we need to consider that setting the state to ‘all is open’ becomes a legal trap when you give billions of people access to systems and situations. The mere legal differences between the UK, the US and Australia, all common law nations, would be the legal nightmare of decades. Shielding the executives from that is a first priority, because without them at the wheel it all falls to chaos.

That reality is seen with “Facebook says the change was made following the 2014 Sony Pictures hack, when a mass data breach at the movie studio resulted in embarrassing email histories being leaked for a number of executives, ultimately costing co-chair Amy Pascal her job“; some might remember the mail that George Clooney sent in regard to The Monuments Men, it made pretty much all the papers. I love his work, I enjoy the artistic values he has, shares and embodies, but without certain levels of privacy and shielding his artistic side might take a large dump towards uncertainty, not a side I am hoping for, because even as he is merely 360 days older than me, he should be able to create another 30 years of movie excellence and I would like to see those movies, especially as we see that he is doing to Matt Damon in Suburbicon what the Coen brothers were doing to him in Burn After Reading and Hail, Caesar!, so plenty of fun times ahead for all us movie fans.

Even as we are all looking where we want to go next, the foundation of issues remains. There is an utter lack of social media legislation; there is a mess of issues on where privacy is and what is to be regarded as privacy. The users gave it all away when they signed up for options, apps and ‘solutions’ again and again. Until that is settled, any move we make merely moves the issue and moves the problems; it will not solve anything, no matter what some of the app developers decide to state. In the third part, “‘The third era of Zuck’: how the CEO went from hero to humiliation” (at https://www.theguardian.com/technology/2018/apr/06/mark-zuckerberg-public-image-cambridge-analytica-facebook), I think he got kicked in the head real hard, but not humiliated, although he might think he was. So as we recall Dean Martin with Ain’t That a Kick in the Head? we need to realise that this is what happens. That is what happens when social media becomes a multi-billion user behemoth. Mark Zuckerberg made mistakes, plain and simple. What do you do? You get up from the floor, fix it and restore the need for growth. And now still we see that mistakes are made. This is seen with “On Friday morning, the company apologized and pledged to stop deleting executives’ messages until they could make the same functionality available to everyone“, the largest mistake of all, as it opens social media to all kinds of organised crime. Merely send the threat, tell the people to do …. or else, and after an hour, after it is seen to have been read, the message is deleted; it becomes a miscommunication and no prosecution is possible.

That is the biggest mistake of all, to set a multi-billion user group open to the needs of organised crime even further than it likely is. How stupid is that? You see, as I interpret this, both Sheryl Sandberg and Mark Zuckerberg are in the musical chairs setting, trying to do things on the fly, and that will hurt them a lot more than anything else. We get it that mistakes were made, so fix them, but not on the fly and not just with quick jumps overnight. Someone has pushed them into defensive play and they actually suck at that. It is time for them to put their foot down and go into offensive and attack mode (pun intended). When we consider what came before, we get it that Zuckerberg made mistakes and he will make more. We merely need to look at Microsoft and their actions over the last 3 decades to see that they screwed the pooch even more royally than Zuckerberg will be able to do, but the media is silent there as it relies on Microsoft advertiser funds. IBM and Apple have made their blunders in the past as well, yet they all had one large advantage: the impact was never towards billions of users. It potentially could have hit them all, but it mostly hit a much smaller group of people; that was their small blessing. Apple directly hurt me, and when I lost out on $5500, I merely got a ‘C’est la vie‘ from their technical centre, so screw that part!

There will be a large change sooner rather than later; the issue with Cambridge Analytica was too large to not make that happen. I merely hope that Zuckerberg has his ducks in a row when he makes the jump. In addition to that, was Steve Bannon arrested? Especially when we consider Article 178, violating the free decision of voters. You see, it is not that simple: social media has never been used in that way, to such an extent; the law is unclear, and proving that what Cambridge Analytica did constitutes a clear violation of the free decision of voters is what makes this a mess. Legislation on a global scale has failed when it came to privacy and options regarding the people in social media. Steve Bannon can keep on smiling because of all the visibility he will get for years to come, and after years, when no conviction comes, he can get on the ‘I told you so!‘ horse and ride off wealthy into the sunset. That situation needs to be rectified and it needs to go way beyond Facebook; the law itself has faltered to a much larger degree.

The fact that politicians are all about terror cells and spilling inflammatory messages whilst having no resolution on any of this is merely showing what a bunch of apes they have proven themselves to be. So when we saw in January ‘Facebook, Google tell Congress they’re fighting extremist content‘, where were these congressmen? Where the fuck was Clint Watts, the Robert A. Fox Fellow at the Foreign Policy Research Institute, and National Security analyst as CNN now reports that optionally 78 million records have been pushed onto the Russian servers? (at https://edition.cnn.com/2018/04/08/politics/cambridge-analytica-data-millions/index.html), now implying that Cambridge Analytica has undermined US safety and security in one operation to a much larger extent than any terrorist has been able to achieve since September 13th 2001. That is 17 years of figments, against one political setting on the freedom to choose. I wonder how Clint Watts can even validate his reasoning to attend the US Congress at all. And this goes way beyond the US; in this the European Commission could be regarded as an even larger failure in all this. But it is unlikely we ever get treated to that side of the entire show.

The media needs both players a lot more and bashing Facebook makes for good entertainment they reckon. Time will tell whether they were right, or that the people at large just never cared, we merely end up having no social media identity, it will have been denied for reasons that were never real in the first place.

 

Leave a comment

Filed under Uncategorized

The sting of history

There was an interesting article on the BBC (at http://www.bbc.com/news/business-43656378) a few days ago. I missed it initially as I tend to not dig too deep into the BBC past the breaking news points at times. Yet there it was, staring at me, and I thought it was rather funny. You see, ‘Google should not be in business of war, say employees‘, which is fair enough. Apart from the issue of them not being too great at waging war and roughing it out, it makes perfect sense to stay away from war. Yet is that possible? You see, the quote is funny when you see ‘No military projects‘, whilst we are all aware that the internet itself is an invention of DARPA, who came up with it as a solution that addressed “A network of such [computers], connected to one another by wide-band communication lines [which provided] the functions of present-day libraries together with anticipated advances in information storage and retrieval and [other] symbiotic functions“, which led to ARPANET and became the Internet. So now that the cat is out of the bag, we can continue. The objection they give is fair enough. When you are an engineer who is destined to create a world where everyone communicates with one another, the last thing you want to see is “Project Maven involves using artificial intelligence to improve the precision of military drone strikes“. I am not sure if Google could achieve it, but the goal is clear and so is the objection. The BBC article shows merely one side; when we go to the source itself (at https://www.defense.gov/News/Article/Article/1254719/project-maven-to-deploy-computer-algorithms-to-war-zone-by-years-end/), I saw the words from Marine Corps Colonel Drew Cukor: “Cukor described an algorithm as about 75 lines of Python code “placed inside a larger software-hardware container.” He said the immediate focus is 38 classes of objects that represent the kinds of things the department needs to detect, especially in the fight against the Islamic State of Iraq and Syria“. You see, I think he has been talking to the wrong people. Perhaps you remember the SETI screensaver project. “In May 1999 the University of California launched SETI@Home. SETI stands for the ‘Search for Extraterrestrial Intelligence’. Originally thought that it could at best recruit only a thousand or so participants, more than a million people actually signed up on the day and in the process overwhelmed the meager desktop PC that was set aside for this project“. I remember it because I was one of them. It is in that trend that “SETI@Home was built around the idea that people with personal computers who often leave them to do something else and then just let the screensaver run are actually wasting good computing resources. This was a good thing, as these ‘idle’ moments can actually be used to process the large amount of data that SETI collects from the galaxy” (source: Manila Times); they were right. The design was brilliant and simple and it worked better than even the SETI people thought it would, but here we now see the application, where any Android (OK, iOS too) device created after 2016 is pretty much a supercomputer at rest. You see, Drew Cukor is trying to look where he needs to look; it is a ‘flaw’ he has, as well as the bulk of all the military. You see, when you look for a target that is 1 in 10,000, he needs to hit the 0.01% mark. This is his choice and that is what he needs to do; I am merely stating that by figuring out where NOT to look, I am upping his chances.
If I can set the premise of eliminating 7,500 false potentials in a few seconds, his job goes from a 0.01% chance to 0.04%, making his work four times easier and optionally faster. Perhaps the change could eliminate 8,500 or even 9,000 flags. Now we are talking the chances and the time frame we need. You see, it is the memo of Bob Work that does remain an issue. I disagree with “As numerous studies have made clear, the department of defense must integrate artificial intelligence and machine learning more effectively across operations to maintain advantages over increasingly capable adversaries and competitors,“. The clear distinction is that those people tend to not rely on a smartphone; they rely on a simple Nokia 2100 burner phone, and as such, there will be a complete absence of data, or will there be? As I see it, to tackle that, you need to be able to engage in what might be regarded as a ‘Snippet War‘, a war based on (a lot of) ‘small pieces of data or brief extracts‘. It is in one part cell tower connection patterns, in one part tracking IMEI (International Mobile Equipment Identity) codes, and in part SIM switching. It is a jumble of patterns and normally getting anything done would be insane. Now what happens when we connect 100 supercomputers to one cell tower and mine all available tags? What happens when we can disseminate these packages and let all those supercomputers do the job? Merely 100 smartphones, or even 1,000 smartphones, per cell tower. At that point the war changes, because now we have an optional setting where on-the-spot data is offered in real time. Some might call it ‘the wet dream’ of Marine Corps Col. Drew Cukor, and he was not ever aware that he was allowed to adult dream to that degree on the job, was he?
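To make the ‘supercomputers at rest’ idea a little more concrete, here is a rough sketch of the SETI@Home-style division of labour I have in mind. It is entirely my own illustration, not Project Maven and not any existing system; every field name and threshold in it is invented.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Snippet:
    imei: str          # device identifier seen at the tower
    tower_id: str      # which cell tower logged the connection
    sim_swaps: int     # SIM changes observed for this IMEI in the window
    connections: int   # connection events in the window

def make_work_units(snippets: List[Snippet], unit_size: int = 500) -> List[List[Snippet]]:
    """Slice the raw tower log into packages small enough for a phone at rest."""
    return [snippets[i:i + unit_size] for i in range(0, len(snippets), unit_size)]

def process_on_handset(unit: List[Snippet]) -> List[Snippet]:
    """Runs on the idle device: rule out the obvious non-flags, return only candidates."""
    return [s for s in unit if s.sim_swaps >= 3 or s.connections > 200]

def coordinate(snippets: List[Snippet]) -> List[Snippet]:
    """The mining point only ever sees what the handsets could not rule out."""
    flagged: List[Snippet] = []
    for unit in make_work_units(snippets):
        flagged.extend(process_on_handset(unit))   # in reality dispatched to nearby devices
    return flagged
```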

Even as these people are throwing AI around like it is Steven Spielberg’s chance to make a Kubrick movie, in the end it is a new scale and new level of machine learning, a combination of clustered flags and decentralised processing on a level that is not linked to any synchronicity. Part of this solution is not in the future, it was in the past. For that we need to read the original papers by Paul Baran in the early 60’s. I think we pushed forward too fast (a likely involuntary reaction). His concept of packet switching was not taken far enough, because the issues of then are nowhere near the issues of now. Consider raw data as a package, where the transmission itself sets the foundation of the data path that is to be created. So basically the package becomes the data entry point of raw data and the mobile phone processes this data on the fly, resetting the data parameters on the fly, giving instant rise to what is unlikely to be a threat (and optionally to what is), a setting where 90% could be parsed by the time it gets to the mining point. The interesting side is that the container for processing this could be set in the memory of most mobile phones without installing anything, as it is merely processing parsed data; not a nice solution, but essentially an optional one to get a few hundred thousand mobiles to do in mere minutes what takes a day for most data centres, which merely receive the first-level processed data. Now it gets a lot more interesting: as thousands are near a cell tower, that data keeps on being processed on the fly by supercomputers at rest all over the place.
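As a rough sketch of that ‘self-describing package’ idea, and under the assumption of invented field names and rules rather than Baran’s actual design, the handset could apply the rules the package carries and forward only what survives:

```python
import json

def parse_package(raw: bytes) -> dict:
    """The package is self-describing: a header with parsing rules, then the records."""
    package = json.loads(raw)
    rules = package["rules"]
    survivors = [
        rec for rec in package["records"]
        if rec["connections"] >= rules["min_connections"]
        or rec["tower_id"] in rules["watch_towers"]
    ]
    return {"origin": package["origin"], "survivors": survivors}

# Usage: the mining point only ever receives the survivors, not the raw feed.
raw = json.dumps({
    "origin": "tower-0417",
    "rules": {"min_connections": 50, "watch_towers": ["tower-0099"]},
    "records": [
        {"tower_id": "tower-0417", "connections": 3},
        {"tower_id": "tower-0099", "connections": 12},
        {"tower_id": "tower-0417", "connections": 81},
    ],
}).encode()
print(parse_package(raw))   # two of the three records survive; one is dropped on the device
```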

So, we are not, as Drew states, ‘in an AI arms race‘; we are merely in a race to be clever about how we process data, and we need to be clever about how to get these things done a lot faster. The fact that the foundation of that solution is 50 years old and still counts as an optional way of getting things done merely shows the brilliance of those who came before us. You see, that is where the military forgot the lessons of limitations. We shun the old games like the CBM 64 and applaud the now of Ubisoft. We forget that Ubisoft shows itself to be graphically brilliant, having the resources of 4K cameras, whilst those on the CBM 64 (like Sid Meier) were actually brilliant for getting a workable interface that looked decent, as they had mere resources that were 0.000076293% of the resources that Ubisoft gets to work with now. I am not here to attack Ubisoft, they are working with the resources available; I am addressing the utter brilliance of people like Sid Meier, David Braben, Richard Garriott, Peter Molyneux and a few others for being able to do what they did with the little they had. It is that simplicity, and the added SETI@Home, where we see the solutions that separate the children from the clever machine learning programmers. It is not about “an algorithm of about 75 lines of Python code “placed inside a larger software-hardware container.”“, it is about where to set the slicer and how to do it whilst no one is able to say it is happening, whilst remaining reliable in what it reports. It is not about a room or a shopping mall with 150 servers walking around the place; it is about the desktop no one notices that is able to keep tabs on those servers merely to keep the shops safe. That is the part that matters. The need for brilliance is shown again in limitations when we realise why SETI@Home was designed. It opposes in directness the quote “The colonel described the technology available commercially, the state-of-the-art in computer vision, as “frankly … stunning,” thanks to work in the area by researchers and engineers at Stanford University, the University of California-Berkeley, Carnegie Mellon University and Massachusetts Institute of Technology, and a $36 billion investment last year across commercial industry“; the people at SETI had to get clever fast because they did not get access to $36 billion. How many of these players would have remained around if it was 0.36 billion, or even 0.036 billion? Not too many I reckon; the entire ‘technology available commercially‘ would instantly fall away the moment the optional payoff remains null, void and unavailable. A $36 billion investment implies that those ‘philanthropists’ are expecting a $360 billion payout at some point. Call me a sceptic, but that is how I expect those people to roll.

The final ‘mistake’ that Marine Corps Col. Drew Cukor makes is one that he cannot be blamed for. He forgot that computers should again be taught to rough it out, just like the old computers did. The mistake I am referring to is not an actual mistake; it is more accurately the view, the missed perception, he unintentionally has. The quote I am referring to is “Before deploying algorithms to combat zones, Cukor said, “you’ve got to have your data ready and you’ve got to prepare and you need the computational infrastructure for training.”“. He is not stating anything incorrect or illogical, he is merely wrong. You see, we need to recall the old days, the days of the mainframe. I got treated in the early 80’s to an ‘event’. You see, a ‘box’ was delivered. It was the size of an A3 flatbed scanner, it had the weight of a small office safe (rather weighty that fucker was) and it looked like a print board on a metal box with a starter engine on top. It was pricey, like a middle class car. It was a 100Mb Winchester Drive. Yes, 100Mb, the mere size of 4 iPhone X photographs. In those days data was super expensive, so the users and designers had to be really clever about data. This time is needed again, not because we have no storage, we have loads of it. We have to get clever again because there is too much data and we have to filter through too much of it. We need to get better fast because 5G is less than 2 years away and we will drown by that time in all that raw untested data; we need to reset our views and comprehend how the old ways of data worked and prevent exabytes of junk per hour slowing us down; we need to redefine how tags can be used to set different markers, different levels of records. The old way of hierarchical data was too cumbersome, but it was fast. The same is seen with B-tree data (a really antiquated database approach), instantly passing through 50% of the data in every iteration. In this, machine learning could be the key, and the next person that comes up with that data solution would surpass the wealth of Mark Zuckerberg pretty much overnight. Data systems need to stop being ‘static’; they need to be fluid and dynamic systems that evolve as data is added. Not because that is cleverer, but because the amount of data we need to get through is growing near exponentially per hour. It is there that we see that Google has a very good reason to be involved, not because of the song ‘Here come the drones‘, but because this level of data evolution is pushed upon nearly all, and getting in the thick of things is how one remains the top dog, and Google is very much about being top dog in that race. As it is servicing the ‘needs’ of billions, its own data centres will require loads of evolution; the old ways are getting closer and closer to becoming obsolete, and Google needs to be ahead before that happens. And of course, when that happens, IBM will give a clear memo that they have been on top of it for years whilst trying to figure out how to best present the delays they are currently facing.
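For readers who never met that world: the ‘pass through 50% of the data in every iteration’ remark is essentially binary search over a sorted index, which is why the old approach was fast. A minimal sketch, entirely my own and not tied to any particular database engine:

```python
from typing import List, Optional

def find_record(sorted_keys: List[int], wanted: int) -> Optional[int]:
    """Return the position of `wanted`, halving the remaining search space each pass."""
    lo, hi = 0, len(sorted_keys) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_keys[mid] == wanted:
            return mid
        if sorted_keys[mid] < wanted:
            lo = mid + 1        # the whole lower half is ruled out in one step
        else:
            hi = mid - 1        # the whole upper half is ruled out in one step
    return None

# A billion sorted keys need at most ~30 passes; a linear scan needs up to a billion.
print(find_record(list(range(0, 1000, 7)), 693))   # -> 99
```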
 

Leave a comment

Filed under IT, Media, Military, Science

A windmill concussion

That was the first thought I had whilst looking at the Guardian (at https://www.theguardian.com/technology/2018/mar/01/eu-facebook-google-youtube-twitter-extremist-content) where Andrus Ansip was staring back at me. So the EU is giving Facebook and Google three months to tackle extremist content. In what relation is that going to be a workable idea? You see, there are dozens of ways to hide and wrongfully classify video and images. To give you an idea of what Mr Ansip is missing, let me give you a few details.

YouTube
300 hours of video is uploaded every minute.
5 billion videos watched per day.
YouTube gets over 30 million visits a day.

Facebook
500+ terabytes of data added each day.
300 million photos per day
2.5 billion pieces of content added each day

This is merely the action of 2 companies. We have not even looked at Snapchat, Twitter, Google+, Qzone, Instagram, LinkedIn, Netlog and several others. The ones I mentioned have over 100,000,000 registered users and there are plenty more of that size. The largest issue is not the mere size; it is that in Common Law any part of defamation and the defence of dissemination becomes a player in all this. In Australia it is covered in section 32 of the Defamation Act 2005; the UK, the US and pretty much every Common Law nation has its own version of it, so the EU is merely setting the trend for all the social media hubs to move out of the EU and into the UK, which is good for the UK. The European courts cannot just blanket approve this, because it is in its core an attack on Freedom of Speech and Freedom of Expression. I agree that this is just insane, but that is how they had set it up for their liberal non-accountable friends, and now that it works against them, they want to push the responsibility onto others? Seems a bit weird, does it not? So when we see “Digital commissioner Andrus Ansip said: “While several platforms have been removing more illegal content than ever before … we still need to react faster against terrorist propaganda and other illegal content which is a serious threat to our citizens’ security, safety and fundamental rights.”“, my question becomes whether the man has any clue what he is doing. Whilst the EC is hiding behind their own propaganda with “European governments have said that extremist content on the web has influenced lone-wolf attackers who have killed people in several European cities after being radicalised“, it pretty much ignores the reality of it all. When we look at the new tech (at https://www.theverge.com/2017/4/18/15330042/tumblr-cabana-video-chat-app-announced-launches-ios), where a solution like Cabana allows for video and instructions whilst the screen does not show an image of the watchers, but a piece of cardboard with texts like “مجنون”, “الجن”, “عسل”, “نهر”, “جمل” and “تاجر”, how long until the threshold of ‘extreme video‘ is triggered? How long until the system figures out that the meeting ended 3 weeks ago and that the video had encryption?

It seems to me that Andrus Ansip is on a fool’s errand. An engineering graduate who went into politics, he is now in a place where he is aware, but not clued in to the extent he needs to be (OK, that was a cruel assessment by me). In addition, I seriously doubt that he has the largest clue about the level of data parsing that such systems require; not merely to parse the data, but systems like that will raise false flags, and even at a 0.01% false flag rate that means sifting through roughly 50GB of flagged data EVERY DAY from Facebook alone. And that is not taking into account framed GIFs instead of video or JPG, or text, languages and interpreting text as extreme, so there will be language barriers as well. So in all this, even with AI and machine learning, you would need to get the links. It becomes even more complex when Facebook or YouTube start receiving 4chan video URLs. So when I see “and other internet companies three months to show that they are removing extremist content more rapidly“, I see the first piece of clear evidence that the European Commission has lost control; they have no way of getting some of this done and they have no option to proceed. They have gone into blame mode with the ultimatum: ‘Do this or else‘. They are now going through the issues that the UK faced in the 60’s with pirate radio. I remember listening to Radio Caroline in the evening, and there were so many more stations. In that regard, the movie The Boat That Rocked is one that Andrus Ansip should watch. He is the Sir Alistair Dormandy of all this, a strict government minister who endeavours to shut down pirate radio stations; a role nicely played by Kenneth Branagh, I might add. The movie shows just how useless the current exercise is. Now, I am all for finding solutions against extremist video, but consider that a small player like Heavy.com had an extreme video online for well over a year (I had the link in a previous article), whilst having no more than a few hundred videos a week, and then we see this demand. How ludicrous is the exercise we see now?
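A quick back-of-the-envelope check of that figure, using the daily Facebook intake quoted above and an assumed 0.01% false-positive rate (both numbers are the ones in this article, nothing more authoritative):

```python
DAILY_INTAKE_TB = 500        # "500+ terabytes of data added each day" (Facebook figure above)
FALSE_FLAG_RATE = 0.0001     # an assumed 0.01% false-positive rate

flagged_gb = DAILY_INTAKE_TB * FALSE_FLAG_RATE * 1024
print(f"{flagged_gb:.0f} GB of wrongly flagged material to re-check, every day")
# -> roughly 51 GB per day, from one platform, before any correctly flagged content
```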

The problem is not merely the online extremist materials, it is also the setting of when exactly it becomes ‘extremist‘, as well as realising that when it is a link that goes to a ‘dedicated’ chat group the lone wolves avoid all scrutiny and nothing is found until it is much too late, yet the politicians are hiding behind this puppet presentation, because that is what they tend to do.

So when we look at “It also urged the predominantly US-dominated technology sector to adopt a more proactive approach, with automated systems to detect and remove illegal content, something Facebook and Google have been pushing as the most effective way of dealing with the issue. However, the European Digital Rights group described the Commission’s approach as putting internet giants in charge of censoring Europe, saying that only legislation would ensure democratic scrutiny and judicial review“, we see dangers. That is because, ‘automated systems aren’t‘, ‘censoring can’t‘ and ‘democratic scrutiny won’t‘; three basic elemental issues we are confronted with for most of our teenage life and after that too. So there are already three foundational issues with a system that has to deal with more stored data than we have seen in a history spanning 20 years of spam, yet here we see the complication that we need to find the needle in a field full of haystacks and we have no idea which stack to look in, whether the needle is a metal one and how large it is. Anyone coming to you with: ‘a simple automated system is the solution’ has no idea on what a solution is, has no idea how to automate it and has never seen the scope of data in the matter, so good luck with that approach!

So when we are confronted with “The UK government recently unveiled its own AI-powered system for tackling the spread of extremist propaganda online, which it said would be offered to smaller firms that have seen an increase in terrorist use as they seek to avoid action by the biggest US firms“, I see another matter. You see, the issues and options I gave earlier already circumvent to a large degree what is stated (at https://www.theguardian.com/uk-news/2018/feb/13/home-office-unveils-ai-program-to-tackle-isis-online-propaganda): “The technology could stop the majority of Isis videos from reaching the internet by analysing the audio and images of a video file during the uploading process, and rejecting extremist content“; and until that upload solution is pushed to 100% of all firms, good luck with that. In equal measure we see “The AI technology has been trained by analysing more than 1,000 Isis videos, automatically detecting 94% of propaganda with a 99.99% success rate“, and here I wonder: if ISIS changes its format and the way it gives the information (another reference to the Heavy.com video), will the solution still work, or will the makers need to upgrade their video solution?

These measures are meaningless whilst we are chasing our tails in this, and even as I agree that a solution is required, we see the internet as an open system where everyone is watching the front door, but when one person enters the building through the window, the solution stops working. So what happens when someone starts making a new codec encoder that holds two movies? Remember the old ‘gimmicky‘ multi-angle DVDs? Was that option provided for? How about video in video (a picture-in-picture variant)? The problem there is that with new programming frameworks it becomes easier to set the stage into multi-tier productions, not merely encoding, but a two-stage decoder where only the receiver can see the message. So the setting of “extremist content on the web has influenced lone-wolf attackers who have killed people in several European cities after being radicalised” is unlikely to be stopped; moreover, there is every chance that they never became a blip on the radar. In that same setting, when we see “If the platform were to process 1m randomly selected videos, only 50 would require additional human review“, from the daily statistics we get that 300 hours of video are uploaded every minute, so in that regard we get roughly 432,000 hours (some 26 million minutes) of video to parse every day; if every video were 2 minutes, we get to parse around 13 million videos every day, and at 50 per million that means over 600 videos require vetting every day, from merely one provider. Now that seems like a workable solution, yet what if the signal changes? What if the vetting is a much larger problem? Don’t forget it is not merely extremist videos that get flagged, but copyrighted materials too. When we see that the average video length is 4 minutes and 20 seconds, whilst the range is between 42 seconds and 9:15, how will the numbers shift? This is a daily issue and the numbers are rising, as well as the providers, and let’s not forget that this is ONE supplier only. That is the data we are confronted with, so there are a whole lot of issues that are not covered at all. So the two articles read like the political engines are playing possum with reality. And all this is even before the consideration that a hostile player could make internet servers available for extremists, the dark web that is not patrolled at all (read: almost impossible to do so), as well as lazy IT people who did not properly configure their servers so that an extremist sympathiser has set up a secondary non-indexed domain to upload files. All solutions where the so-called anti-ISIS AI has been circumvented, and that is merely the tip of the iceberg.
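
To show where those daily numbers come from, here is the same arithmetic written out; the only inputs are the figures quoted above (300 hours uploaded per minute, 50 flags per million videos) plus the average video lengths mentioned:

```python
# Daily review load for a single platform, from the figures quoted above.
# The two average video lengths are the ones mentioned in the article.

hours_uploaded_per_minute = 300
hours_per_day = hours_uploaded_per_minute * 60 * 24   # 432,000 hours/day
minutes_per_day = hours_per_day * 60                  # ~25.9 million minutes/day

flags_per_million = 50

for avg_length_min in (2.0, 4 + 20 / 60):             # 2:00 and 4:20 averages
    videos_per_day = minutes_per_day / avg_length_min
    reviews_per_day = videos_per_day * flags_per_million / 1_000_000
    print(f"avg {avg_length_min:.2f} min -> {videos_per_day:,.0f} videos/day, "
          f"{reviews_per_day:,.0f} human reviews/day")

# Roughly 13 million videos and ~650 reviews a day at a 2-minute average,
# or ~6 million videos and ~300 reviews a day at 4:20 - every day, for one
# platform, before a single copyright flag is counted.
```

The review queue scales linearly with uploads, which is exactly why a three-month ultimatum reads more like theatre than engineering.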

So I have an issue with the messaging and with the issues presented by those who think they have a solution, and by those who will callously blame the disseminators in all this, whilst the connected players know that this was never a realistic exercise in any part of it, merely the need and the desire to monitor it all; the articles given show that they are clueless (to some extent), which is news we never wanted ISIS to know in the first place. In that regard, when we see news that is a year old, where it was mentioned that ISIS uses Twitter to recruit, merely through messaging and monitoring, we see another part where these systems have failed, because a question like that could be framed in many ways. It is almost the setting where the creative mind can ask more questions than any AI can comprehend; that first realisation is important to realise how empty the entire setting of these ‘solutions’ is. In my personal view, Andrus Ansip has a job that has become nothing more than a temporary castle in the sand before it is washed away by the tide. It is unlikely that this is his choice or desire, but that is how it has become, and there is supporting evidence. Take a look at the Washington Post article (at https://www.washingtonpost.com/news/the-intersect/wp/2014/09/25/absolutely-everything-you-need-to-know-to-understand-4chan-the-internets-own-bogeyman/?utm_term=.35c366cd91eb), where we see “participants can say and do virtually anything they want with only the most remote threat of accountability“; more important, monitoring that part is not impossible yet would require large resources. 4chan is equally a worry to some extent, and what happens when ISIS merely downloads a 4chat or 4chan skeleton and places it on the dark web? There are close to no options to ever find them at that point, two simple acts to circumvent the entire circus, a part that Andrus Ansip should have (and he might have) informed the EC commissioners on, so we see the waste of large amounts of money and in the end there will be nothing to show for it. Is that what we want to happen to keep ourselves safe? So when the ISIS person needs nothing but a mobile phone and a TOR browser, how will we find them and stop the content? Well, there is a two-letter word for that: NO! It ain’t happening baby, a mere realisation that can be comprehended by most people in the smallest amount of time.

By the way, when 5G hits us in less than 18 months, with the speeds, the bandwidth and the upload options, as well as additional new forms of media, which optionally means new automated forms of social media, how much redesign will be required? In my personal book this reads like: “the chance that Europe will be introduced to a huge invoice for the useless application of a non-working solution, twice!” How do you feel about that part?

In my view it is not about stopping the upload; it is about getting clever on how the information reaches those who desire, want and optionally need the information. We need to get a grip on that reality and see how we can get there, because the current method is not working. In that regard we can take a look at history, where in the Netherlands Aage Meinesz used a thermal lance to go through the concrete next to the vault door; he did that in the early 70’s. So when we see the solutions we saw earlier, we need to remember that such a solution only works until 10 seconds after someone else realises that there was a way to ignore the need for an upload, or realises that the system is assuming certain parts. You only need to look through Fatal Vision alcohol goggles once to realise that they do not only distort the view, they could potentially be used to counter a distorted view. I wonder how those AI solutions comprehend that, and consider that with every iteration accuracy decreases, human intervention increases and less gets achieved; some older gimmicks in photography relied on such paths to entice the watchers (like the old Betty Page books with red and green glasses). I could go on for hours, and with every other part more and more flaws are found. In all this it is equally a worry to push this onto those tech companies. It is the old premise of being prepared for what you do not know, what you cannot see and what is not there. It is the conundrum that Military Intelligence was faced with for over 30 years, and now the solution needs to be presented in three months.

The request has to be adhered to in three months, which is ludicrous and unrealistic, whilst in addition the demand shows a level of discrimination, as there is a massive number of social media enablers that are not involved; there are creators of technology that are not accountable at any level. For example Apple, Samsung, Microsoft and IBM (as they are not internet companies), yet some of them proclaim their Deep Blue, Azure and whatever other massive data mining solution in a box for ‘everyone’, so where are they in all this? When we consider those parts, how empty is the “face legislation forcing them to do so” threat?

It becomes even more hilarious when you consider the setting in full: Andrus Ansip, the current European Commissioner for the Digital Single Market, is giving us this, whilst we see (at https://ec.europa.eu/commission/priorities/digital-single-market_en) that the European Commission for the Digital Single Market has on its page the priority ‘Bringing down barriers to unlock online opportunities’, which it now uses to create barriers, preferably flexible barriers, and in the end it is the creation of opportunities for a very small group of designers, whilst ‘protect children and tackle hate speech‘ is the smallest part of one element in a setting with 7 additional settings on a much larger scale. It seems to me that in this case Andrus Ansip is trying to extend his reach by the size of a continent; it does not add up on several sides, especially when you consider that the document setting in that commission has nothing past September 2017, which makes the entire setting of pushing social media tech groups a wishful thinking one, and one that was never realistic to begin with; it is like he is merely chasing windmills, just like Don Quichotte.

 


Filed under IT, Media, Politics, Science

Insights or Assumptions?

Yesterday’s article in the Washington Post (at https://www.washingtonpost.com/news/global-opinions/wp/2018/01/22/the-rise-of-saudi-arabias-crown-prince-reveals-a-harsh-truth) is an interesting one. In this article Professor Bernard Haykel gives a view on the issues we are likely to see in Saudi Arabia. I am not sure I can agree. You see, he might be the professor of the ‘Near Eastern Studies and the director of the Institute for Transregional Study of the Contemporary Middle East, North Africa and Central Asia’ at a prestigious place like Princeton, but my pupils tend to shape like question marks when someone’s title requires 13 words for merely one part. We see in the article “depict him as power-hungry and corrupt, and cite these two impulses for his behavior and policies. When King Salman designated MBS as his heir in June 2017, MBS effectively became the most powerful man in the kingdom. And despite ill-advised purchases (including a yacht and a French chateau, which have cemented the impression of the crown prince’s greed)“, so how does that work? You see, Prince Mohammed bin Salman is wealthy, his family is very wealthy, and as such, is a yacht a splurge? It would depend on the price. Second, there is the mention of a French chateau. Well, I have taken a look and I fell in love with a house in France too, in Cognac (my favourite drink). The house (at http://www.rightmove.co.uk/overseas-property/property-58209296.html) has 7 bedrooms, is amazing in looks and sits in a nice village. The amount comes down to a little over a million dollars (money I obviously do not have), but consider that the same amount will only get you a decent 2 bedroom apartment in the outskirts of Sydney; within some suburbs and in the city those prices go up by 250%-1500% for a measly 2-3 bedroom apartment, depending on how outlandish your view needs to be. So how does that make the Crown Prince greedy? Now, his choice is a chateau 50 times that price, and a family that owns billions can splurge a little. His place is west of Paris. And let’s face it, as some economies are going, having your money in something substantial is not the worst idea. His second splurge, linking him to greed and power hunger, is a yacht. So how does that leap rhyme? I have no idea and I find the professor’s view slightly too speculative. Yet the man is not done. He then gives us: “MBS is trying to deal with a harsh truth about Saudi Arabia: The kingdom is economically and politically unsustainable, and is headed toward a disaster“. There is a truth in that. As Saudi Arabia is dependent on oil, there will be a lull in their lives; the need for oil exists, yet with prices going down there is no real prospect of fixing it, but wait, that is exactly what the crown prince is doing. He is setting forth his 2030 view, a growing move away from oil dependency, which is actually a really good thing to do. It does not make him greedy, merely a visionary who sees that technological evolution is essential to the continuing future of Saudi Arabia. We then get two quotes that matter. The first I already shed light on with “a sclerotic state with limited administrative capacity and an economy that is largely reliant on declining oil revenues“, yet sclerotic? That means “losing the ability to adapt“, whilst adapting is exactly what the crown prince is trying to achieve: moving the nation to other options and new ways. The second is a lot harsher, but requires additional focus.
With: “a venal elite comprised of thousands of royals and hangers-on who operate with impunity and are a huge drain on the economy. It is saddled with a bloated public sector which employs 70 percent of working Saudis, and its military is incapable of defending the homeland despite billions spent on armaments“, we can argue about the wisdom of ‘employs 70 percent of working Saudis‘; I am not stating that it is true, but when we see Walmart in the US, which employs 1% of Americans whilst pumping billions of profit into that one Walton family, we should wonder how wrong the Saudi actions are. So we might not see corporate greed like in the US, but is one method better than the other? I am not sure that this is the case. The other part I need to comment on is: “its military is incapable of defending the homeland“; what evidence is there (it is not in the article at all)? Let’s not forget that Iran has been a warmongering nation for DECADES! How many wars did Saudi Arabia get into? There was the Saudi-Yemeni war of 1934, the Gulf War, where Saudi Arabia was a member of the allied forces, the Saudi intervention in Yemen and the current upcoming conflict with Iran. So, regarding the inability to defend the homeland? Is that perhaps merely a gesture towards the incoming missiles from Yemen? Well, we can bomb the bejezus out of Yemen, but it would imply thousands of civilian casualties, as these people are hiding within the civilian masses. Something they learned from groups like Hamas and Hezbollah I would reckon, but that is merely an assumption from my side. I found the restraint that Saudi Arabia has shown so far quite refreshing.

I am not stating that Saudi Arabia is holier than thou. Like any nation, it makes mistakes; it has views and a set infrastructure. It is moving at a pace that they want, not the pace Wall Street wants, which is equally refreshing.

The article gives us truths, but from a polarised setting as I see it. Yes, there is acknowledgement of the achievements too, in both the directions of the USA and Russia, and we can agree that, just like for 86% of all other nations (including the USA), the economy is a weak point. So how is America dealing with $20 trillion in debt? From my point of view, the USA has not done anything in that direction for over a decade. Instead of lowering the corporate tax to the degree it did, it could have left it 5% higher and let that part be reserved for paying off the debt and interest; oh right, the 5% will not even take care of the interest at present, so as such the USA is in a much worse place at present, which is not what the article is about, but we should take that into consideration. And the end of the article? With “Ultimately, MBS wants to base his family’s legitimacy on the economic transformation of the country and its prosperity. He is not a political liberal. Rather, he is an authoritarian, and one who sees his consolidation of power as a necessary condition for the changes he wants to make in Saudi Arabia“, is that true? The facts are likely true, and when you employ 70% of a nation, economic transformation is the legitimacy of that nation. There is the one side Americans never understood. In the end, Saudi Arabia is a monarchy; their duty is the welfare of that nation. So it does not make him authoritarian (even as he might be seen as such), he is the upcoming new monarch of Saudi Arabia, a simple truth. Within any monarchy there is one voice, the King or Queen of that nation. So it is in theory a consolidation of power; in actuality it is a monarch who wants all voices and looks towards an area of focus. What that is, the future will tell, but in the end, until the Iran-Saudi Arabia issue is solved, there will be plenty of space for chaos.

In this his path is clear, and that is the part the professor did illuminate too. With: “MBS is trying to appeal to young Saudis, who form the majority of the population. His message is one of authoritarian nationalism, mixed with populism that seeks to displace a traditional Islamic hyper-conservatism — which the crown prince believes has choked the country and sapped its people of all dynamism and creativity“, it is his need to create a population that is nationalistic, that sees Saudi Arabia as a place of pride, which is not a bad thing. In a setting where hyper-conservatism can no longer reflect any nation in a global economy, ending it is an essential path. He is merely conservative in not handing out all those large benefits and multi-billion dollar revenues into the hands of opportunists who are eager to take those billions over the border, out of Saudi Arabia, at the drop of a hat, any hat. That would drag down the Arabian economy with absolute certainty. A dynamic and creative nation, especially fuelled by youth and enthusiasm, could spell several wells of innovation and profit that could benefit Saudi Arabia. I think that the path from hyper-conservatism towards where it needs to be in 2023 is so far well played. He is not there yet, but the path is starting and that is in the end a good thing. The only thing that the US needs to fear now is that the creative and innovation path that Saudi Arabia is on could spell long term problems for a nation that has been fixated on an iterative technology path where the US is no longer the front runner; they were surpassed by Asia some time ago, and the US merely has Apple and Google. Oh no, they do not, because those are proclaimed global corporations. So where does that leave the US?

So as we see, Bloomberg (at https://www.bloomberg.com/news/articles/2018-01-22/imf-sees-global-growth-picking-up-as-u-s-tax-cuts-gain-traction) gives us ‘IMF Says Global Growth Picking Up as U.S. Tax Cuts Take Hold‘, which is a number I find overly optimistic. Global growth is set at 3.9%, yet the bad news cycle has not started yet, so I reckon that if the global economy ends at 2.45% it would not be a bad achievement. In that light I find the mention “The IMF also predicted that the tax plan will reduce U.S. growth after 2022, offsetting earlier gains, as some of the individual cuts expire and the U.S. tries to curb its budget deficit“ interesting. I believe that the US economy takes a hard hit no later than 2020, and the idea of ‘curb its budget deficit‘ is equally amusing; they have not been able to do that for 15 years, and as there is at present every chance that President Trump is a one term president only, the Democrats are now likely to win by a large margin and the entire budget curbing would be immediately off the table, because spending is the one thing the Democrats have proven to be utter experts in; they merely leave the invoices for others to deal with, which is equally unhealthy for any economy.

And in that article we see exactly the fears that are mounting towards Saudi Arabia too. With “the IMF flagged protectionism, geopolitical tensions and extreme weather as risks to the global economy” we see a new frontal attack starting on protectionism. Mentions like “A reduction of Germany’s surplus would help reduce global imbalances” and it is not one source; hundreds of articles over the last 16 hours alone, all hammering the protectionism word in a bad light. It is now becoming all about trade protectionism; even under the terms of Brexit we saw how people were stating that it was a disadvantage, the single market falls away and as such the UK cannot benefit. Now that Brexit is still pushing forward, the IMF is changing their tune and it is now about protectionism and trade protectionism. Another way to state that tariffs and import fees are now a problem; it is the final straw in giving large corporations the push for benefit they need, and many are in the States (IBM, Microsoft, 3M and so on), they would benefit, and even as I mention Brexit, it also affects Saudi Arabia. As we saw last July: “Being a WTO member, Saudi Arabia is expected to bind its tariffs on over three-fourths of U.S. exports of industrial goods at an average rate of 3.2 percent, while tariffs on over 90 percent of agricultural products will be set at 15 percent or lower“, so the IMF is not merely voicing the fear of the US, it is equally scared that the stimulus backlash is about to hit the presented global growth; the protectionism and trade protectionism arguments are set to plead for open doors. I wonder if that also means that patent protectionism would have to end. I doubt that, because pharmacy is what keeps the US afloat in more than one way, and is not a subject that is allowed to be tinkered with.

So were these insights or speculations?

I believe both the professor and myself were doing both, I admit to that upfront, whilst the professor set it in a text that is acceptable yet should have been raising a few more questions than the Washington Post is bargaining for. We can argue that this is a good thing, but it is my personal belief that even as it was a good and insightful article, with all the mention of power hungry and corrupt, in the end he showed no real evidence that this was the move of a power hungry person, especially as the person in question (Prince Mohammed bin Salman) is set to be the future king of Saudi Arabia; the crown prince is at the tip of the pyramid, so he need not be power hungry. That can only be shown if he starts expansion wars with his neighbours. In addition no evidence is shown of corruption; I do not state that this is not the case, but if you accuse a person of being corrupt it would be nice to add actual evidence of that, which is merely my point of view.

In the end, through insight and speculation, I hope that you got some insights out of that, and feel free to google ‘IMF protectionism‘ and see how many articles were added in the last week alone. It is clear that Davos is about removing limitations, not actually growing a true economy, which implies from my point of view that Davos is about big business and what it needs, not what the people desperately require. Consider that when you read about the ‘World Economic Forum Annual Meeting’ and when you see who is present. My mind wonders how many informal meetings there will be and how Theresa May is likely to get hammered on Brexit issues as Emmanuel Macron, Jean-Claude Juncker, Angela Merkel and perhaps even Donald Trump unite against Brexit. It is an assumption from my side, but at the end of the week, will I be proven wrong?

 

 


Filed under Finance, Law, Media, Military, Politics, Science

Overpricing or Segregation?

What is enough in a PC? That is the question many have asked in the past. Some state that for gaming you need the maximum hardware possible; for those using a word processor, a spreadsheet, email and a web browser, the minimum often suffices.

I have been in the middle of that equation for a long time; I was for well over a decade in the high end of it, as gaming was my life. Yet the realisation that high end gaming is a game for those with high paying jobs was one we all had to face. Now we see the NVIDIA GeForce GTX Titan Xp 12GB GDDR5X Video Card at $1950, whilst we can do 4K gaming differently: for the price of that one card you get a 4K 65″ TV with either the Xbox X or the PS4 Pro. Now consider that this is merely the graphics card and that the high end PC requires an additional $2K; that is where the PC with 4K gaming requires 4 thousand dollars. It is a little stretch, because you can get there with a little less, but then the lesser option requires the hardware to be replaced quicker. So I moved to console gaming and I never regretted it. We all agree that I have lost out, but I can live with that. I can truly enjoy gaming without the price. So in this situation, can someone explain to me how the new iMac Pro will cost you, in its maximum setting, $20,743? Is there any justification for needing such an overpowered device? I reckon that those into professional video editing might need it, but when we consider those 43 people in Australia (at that high level), who else does it benefit?

In comparison, a maximised Mac Pro costs you $11,617, so it is roughly 44% cheaper. Now the comparison is not entirely fair, because the iMac Pro has an optional 4TB SSD drive, and that is not a cheap item, but the issue is that the overpowering of hardware might seem cool and nice. Let’s be fair: when we compare this through MS Word, we see the issue. The bulk of all people will never use more than 20% of that text editor, which is a reality we face, yet at $200 we do not care; take the price a hundredfold, with $20,000 in the balance, and it adds up. Even as MS Word has one version, the computers do have options, and a lesser option is available. In this, that new iMac Pro is in minimum configuration $7K, and at twice the price of a 4K gaming machine, with no real option for gaming, is that not a system that is over the top?

Now, some might think it is, some will state it is not, and it is really in the eyes of the beholder. Yet in this day and age, when we have been thrust into a stage where mobiles and most computer environments are set to a 2-4 year lifespan at best, how should we see the iMac Pro? In addition, where the base model of the Pro is 100% more expensive than the upgraded iMac 27″, is there a level of disjointed presentation?

Well, some do not think in that way, and they are right to see it as such. One source (ZDNet) gives us: “The iMac Pro is aimed at professionals working with video (a lot of video), those into VR, 3D modeling, simulations, animation, audio engineers and such“, a view I wholeheartedly agree with, yet that view and that image have not been given when we see the marketing, the Apple site and even the Apple stores. Now, first off, the Apple stores have not been misleading; most have kept to some strict version of the ‘party line’ and that is not a wrong stance. Also the view that ZDNet gives us at the end is spot on. With “It’s Mac for the 1 percent of Mac users, not the 99 percent. For the 99 percent, yes, the iMac Pro is overpriced and just throwing away money, but for the 1 percent who need the sort of power that a system like that can generate, it’s very reasonably priced” and that is where we see the issue: Mac is now segregating the markets, trying to get the elite back into the Mac fold. Their timing is impeccable. Microsoft made a mess of things, and with the chaotic view on hardware in the gaming industry, the PC industry has become a mess. It moved towards the gamers, who now represent a $100 billion plus market, and we see that others went down the games route whilst to some extent ignoring the high end graphical industry. It is something that I have heard a few times and, to be honest, I ignored it. I grew up there whilst being completely aware of all the hardware, which was 15-25 years ago. The graphical hardware market grew close to 1000%, so when I needed to dig into PC hardware for another reason, I was amazed just how much there was and how affordable some stuff was, but in the highest gaming tier, the one tier where the gamer and the high end video editing need overlap, we see a lag, because selling to 99 gamers and one video editor means that most will not give a toss about the one video editor. Most will know what they need, but that market is not well managed. Issues like video drivers and Photoshop CC 2017 against Windows 10 are just a few of the dozens upon dozens of issues that seem to plague these users. Important is that this is not just some Adobe issue; it seems that the issues are still in a stage of flux. With “Microsoft warned that the April 2017 security update package has a known issue that could affect users’ computers and which the company is seeking to fix” a few months ago, we are starting to see more and more that Windows forgot that its core was not merely the gamer; it was an elite user group that it had slowly snagged away from Apple, and now Apple is striking back in the best way possible, by giving them that niche again. By pushing these people with money away, Microsoft might soon see that the cutting edge Azure targets for high end graphic applications become a pool of enjoyment for the core Microsoft Office users, a market that they are targeting just as Apple gets its ducks in a row and snatches that population away from them.

That is indeed a clever move, because that was the market that made Apple great in the first place. So as we read how Azure is aiming for the ArcGIS Pro population, we see that Apple has them outgunned and outclassed, and not by a small amount either. Here the iMac Pro could be the difference between real time prototyping and anticipated results awaiting aggregation. That would instantly make the difference between a shoddy $5K-$8K gaming system used for data and the iMac Pro at $20K that can crunch data like a famished piranha; you can watch those results become reality before you finish your first coffee.

In addition, as soon as Apple makes the second step we will see them getting a decent chunk out of the Business Intelligence, forecasting and even the enterprise-sized dashboarding market, because with 18 cores you can do it all at the same time. This is not the first, not the second and not even the third case where Microsoft dropped the ball. They went wide and forgot about the core business needs (or so you would think). Yet the question remains how many can, or are willing to, pay the $20K price, even as we know that there are options in the $8K and $13K setting for that same device, because there is room for change between 8 and 18 cores. It seems that for a lot of people the system is overpriced, we can all agree on that, but for those who are in the segregated markets it is not about a new player; it is more that the Windows driven PC market just lost a massively sized niche. It is the price we pay for catering to the largest denominator, and the question then becomes: ‘Can Microsoft hit back, and will it?’

Time will tell. What is the case is that the waiting is over and 2018 could potentially see a massive shift of high end users towards Apple, a change we have not seen for the longest of times. I wish them well, because in the end many average users will benefit from such a shift as well; in confusion there is profit, and Microsoft is optionally becoming one of the larger confused places in 2018.

So why should I care?

Apple started something that will soon be copied by A-brands like ASUS. It will remain a PC, but they now see that the high end users they do have are users they want to keep. This comes almost exactly 20 years after I learned this lesson the hard way. There was a Dutch sales shop that had a special deal: the Apple Performa, maxed (as far as that was possible) for almost $2750, and I was happy as hell. My Apple (my first 100% owned by my own self) and I had a great time. I never regretted buying it, but there was a catch: 3 months later that same shop had the Power-Mac on special; the difference in power was well over 300% for $1000 more (a lot in those days), and new software would no longer support the Performa and older models, a system outdated before the warranty ran out. We are about to see a similar shift. We know multi-core systems, they have been around for a while, yet the shift is larger, so as we see new technologies and new solutions pushed on us whilst the actual current solutions are still broken to some extent, we will be pushed into a choice: will we follow the core or fall behind? Even as we see the marketing babble now on how it is upper tier, merely for the 1%, and we feel we are in agreement (for now), we see a first wave of segregation. As the followers emphasise the high end computers, we will see a new wave of segregation.

And? So what? I do not want to pay too much!

This is the valid response for many players and many users; they do not have the needs IT people have, many merely see the need they have now, and that is not wrong, not in this life, as the economy is not coming back the way it needs to be. Yet two elements are taking over. The first is Microsoft; we cannot get around them for the most part, and the way e-commerce and corporate industry are moving shows to be both their option and their flaw. As we see more push where 90% of the Fortune 500 is now stated to be on the Microsoft cloud, we see the need for multi-core systems more and more. Even as some might remember the quote from early 2017 “Find out why it’s the most complete #cloud solution“, the rest is only now catching on that the Azure cloud is dangerous in several ways. Chip Childers, the fearless leader of the Cloud Foundry Foundation, gives us “We are shifting to a “cloud-first” world more and more. Even with private data centres, the use of cloud technologies is changing how we think about infrastructure, application platforms and software development“, yet the danger is there too, just not mentioned. This danger is slowly pushed onto us through the change that the US gave yesterday. As Net Neutrality is being abolished, there is a real danger that certain blocks could grow on a global scale. So as we see trillions in market value shift, how long until other players set up barriers, set minimum business needs and cater to them above all others?

Core cloud solutions become a danger, because they force the contemplation that it is no longer about bandwidth and the strength of your internet connection; the high end of business is moving back to the mainframe standards that existed strongly before the 90’s started. It will be about CPU Time Used. So at that point it is not about the amount of data, but the reception of CPU channels, and as such the user with a multi-core system will have a massive advantage, whilst the rest is segregated back towards a second level of decreased options. It does not change consumer use of places like Netflix, but when you require the power of your value to be in Azure, the multi-core systems are the key to enable you and to disable connection huggers and non-revenue connected users: consumers at a price, for limited access.
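
As a rough illustration of that shift, here is a minimal sketch comparing the two billing dials; every rate and workload figure in it is a hypothetical assumption of mine, not any provider’s actual pricing:

```python
# Toy comparison of two billing dials for the same monthly workload:
# priced by data moved (the connection-centric view) versus priced by
# CPU time used (the mainframe-style view described above).
# All numbers are hypothetical assumptions.

data_transferred_gb = 500      # assumed data moved per month
cpu_core_hours = 1_200         # assumed CPU time consumed per month

rate_per_gb = 0.08             # hypothetical $ per GB transferred
rate_per_core_hour = 0.05      # hypothetical $ per core-hour used

bandwidth_bill = data_transferred_gb * rate_per_gb
cpu_time_bill = cpu_core_hours * rate_per_core_hour

print(f"Billed by bandwidth: ${bandwidth_bill:,.2f}")
print(f"Billed by CPU time:  ${cpu_time_bill:,.2f}")
# The figures are invented; the point is which dial drives the invoice.
# Once the bill follows core-hours, the fat pipe stops being the edge
# and the party that can buy and schedule the cores wins.
```

The design choice the text hints at is exactly this: whoever controls the metered resource controls who gets full access and who gets the decreased, second-level option.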

This is the future we push for; it is not created by or instigated by Apple. It merely sees what will be needed in 4 years, when 5G is the foundation of our lives. I saw part of this as I designed part of a solution that will solve the NHS issues in the UK, the Netherlands, Sweden and Germany, but I was slow to see that the lesson I was handed the hard way in 1997 is also around the corner. Netflix and others (Google in part) are regressing towards the mean in some of the services and options that they will offer the global audience at large. The outliers (Google, Amazon, IBM, Microsoft and SAP) will soon be facilitators to the Expression Dataset of the next model of usage that comes. There will be a shift and it will go on until 2022, as 5G will enable players like NTT Data and Tata Communications to get an elevated seat, perhaps even a seat at that very table.

They will decide over the coming years that there is a shift, and as people decide the level of access that they are getting, they will soon learn that they are not merely deciding for themselves, because the earlier their children get full access, the more options they will get beyond their tertiary education. Soon we will learn that access is almost everything, but we will not learn that lesson the way we thought we would. Even I have no idea how this will play out, but such a shift beyond the iterative IT world we see now is exciting beyond belief. I hope I will end up being part of that world; I have been part of the IT/BI industry since 1980 and I am about to see a new universe of skills unfold before my very eyes. I wonder how far I am able to get into that part, because these players will all need facilitation of services and most of them have been commission driven for too long, meaning that they are already falling behind.

What a world we are about to need to live in!

 


Filed under Finance, IT, Media, Politics, Science

The Good, the Bad, and North Korea

This article is late in the making. There is the need to be first, but is that enough? At times it is more important to be well informed. So let’s start with the good. The good is that if there is a nuclear blast, North Korea need not worry. The game maker Bethesda made a management simulator called Fallout Shelter. You can, on your mobile device, manage a fallout shelter, gather the goods of food, energy and water, manage how the people procreate and who gets to procreate, fight off invaders and grow the population to 200 people, so with two of these shelters North Korea has a viable solution to not become extinct. The bad news is that North Korea has almost no smartphones, so there is not a device around to actively grow the surviving community. Yes, this matters, and it is important to you. You see, the Dutch had some kind of a media tour around 2012. There were no cameras allowed, still the images came through, because as the cameras were locked away, the military and the official escorts were seemingly unaware that every journalist had a mobile with the ability to film. The escorting soldier had never seen a smartphone before in his life. So a year later we get the ‘fake’ news in the Dutch newspaper (at https://www.volkskrant.nl/buitenland/noord-korea-beweert-smartphone-te-hebben-ontwikkeld-niemand-gelooft-het~a3493503/) that North Korea finished ‘their’ own local smartphones. This is important as it shows just how backwards North Korea is in certain matters.

The quote “Zuid-Koreaanse computerexperts menen dat hun noorderbuur genoeg van software weet om cyberaanvallen uit te voeren, zoals die op banken en overheidswebsites van eerder dit jaar. Maar de ontwikkeling van hardware staat in Noord-Korea nog in de kinderschoenen“, stating: “South Korean computer experts believe that their northern neighbour knows enough of software to instigate cyber-attacks, such as those on banks and Government websites earlier this year. But the development of hardware in North Korea remains in its infancy“. I believe this to be a half truth. I believe that China facilitates to some degree, but it is keeping its market on a short leash. North Korea remains behind on several fronts and that would show in other fields too.

This is how the two different parts unite. You see, even as America had its hydrogen bomb in 1952, it did not get there in easy steps, and it had a massive level of support on several fronts as well as the brightest minds that this planet had to offer. The same could be said for Russia at the time. The History Channel of all places gives us “Opponents of development of the hydrogen bomb included J. Robert Oppenheimer, one of the fathers of the atomic bomb. He and others argued that little would be accomplished except the speeding up of the arms race, since it was assumed that the Soviets would quickly follow suit. The opponents were correct in their assumptions. The Soviet Union exploded a thermonuclear device the following year and by the late 1970s, seven nations had constructed hydrogen bombs“, so we get two parts here. The evolution was theoretically set at 7-10 years, whilst the actual device would not come until much later, and the other players, who had nowhere near the academic and engineering capacity, would follow close to 18 years later. That is merely an explosion, something North Korea is claiming to consider. With the quote “North Korea’s Foreign Minister has said the country may test a hydrogen bomb in the Pacific“, we need to realise that the operative word is ‘may‘. Even then there will be a large time lapse coming. Now, I am not trying to lull you into sleep. The fact that North Korea is making these steps is alarming on a much larger scale than most realise. Even if it fails, there is a chance that, because of failed safety standards (a setting that is often alien to North Korea), wherever this radiation ends up it can impact the biological environment beyond repair; it is in that frame that Japan is, for now, likely the only one that needs to be truly worried.

All this still links together. You see, the issue is not firing a long range rocket; it is keeping it on track and aiming it precisely. Just like the thousands of Hamas rockets fired on Israel with a misfiring percentage of 99.92% (roughly), North Korea faces that same part in a much larger setting. You see, ABC touched on this in July, but never gave all the goods (at http://www.abc.net.au/news/2017-07-06/north-korea-missile-why-it-is-so-difficult-to-intercept-an-icbm/8684444). Here we see: “The first and most prominent is Terminal High Altitude Area Defence, or THAAD, which the US has deployed in South Korea. THAAD is designed to shoot down ballistic missiles in the terminal phase of flight — that is, as the ballistic missile is re-entering the atmosphere to strike its target. The second relevant system is the Patriot PAC-3, which is designed to provide late terminal phase interception, that is, after the missile has re-entered the atmosphere. It is deployed by US forces operating in the region, as well as Japan.” You see, that is when everything is in a 100% setting, but we forget, North Korea is not there. You see, one of the most basic parts here is shown to undergrads at MIT. Here we see Richard C. Booton Jr. and Simon Ramo, executives at TRW Inc., which would grow and make military boy scouts like Northrop Grumman and the Goodrich Corporation. So these people are in the know and they give us: “Today all major space and military development programs recognize systems engineering to be a principal project task. An example of a recent large space system is the development of the tracking and data relay satellite system (TDRSS) for NASA. The effort (at TRW) involved approximately 250 highly experienced systems engineers. The majority possessed communications systems engineering backgrounds, but the range of expertise included software architecture, mechanical engineering, automatic controls design, and design for such specialized performance characteristics as stated reliability“; that is the name of the game, and North Korea lacks the skill, the numbers and the evolved need for shielded electronic guidance. In the oldest days it would have been done with 10 engineers, but the systems have become more complex and their essential need for accuracy required evolution, all items lacking in North Korea. By the way, I will add the paper at the end, so you can read for yourself what other component(s) North Korea is currently missing out on. All this is still an issue, because even as we see that there is potentially no danger to the USA and Australia, that safety cannot be given to China and Japan. Even if Japan is hit straight on, it will affect and optionally collapse part of the Chinese economy, because when the Sea of Japan or the Yellow Sea becomes the ‘Glowing Sea’, you better believe that the price of food will go up by 1000% and clean water will be the reason to go to war over. North Korea, no matter how stupid they are, is a threat. When we realise just how many issues North Korea faces, we see that all the testosterone imagery from North Korea is basically sabre rattling, and because they have no sabres, they will try to mimic it with can openers. The realisation of all this is hitting you now, and as you realise that America is the only player that is an actual threat to it, we need to see the danger for what it is: a David and Goliath game where the US is the big guy and North Korea forgot their sling, so it becomes a one sided upcoming slaughter.
It is, as I see it, diplomacy in its most dangerously failed stage. North Korea rants on and on and at some point the US will have no option left but to strike back. So in all this, let’s take one more look, so that you get the idea even better.

I got this photo from a CNN source, so the actual age was unknown, yet look at the background, the sheer antiquity that this desktop system represents. In a place where the President of North Korea should be surrounded by high end technology, we see a system that seems to look like an antiquated Lenovo system, unable to properly play games from the previous gaming generation, and that is their high technology?

So here we see the elements come together. Whether or not you see Kim Jong-un as a threat, he could be an actual threat to South Korea, Japan, China and Russia. You see, even if everything goes right, there is a larger chance that the missile gets a technology issue and prematurely crashes; I see that chance at 90%, so even if it were fired at the US, the only ones in true peril are Japan, South Korea, Russia and lastly China, which only gets the brunt if the trajectory changes by a lot. After which the missile could accidentally go off. That is how I see it: whatever hydrogen bomb element they think they have, it requires a lot of luck for North Korea to set it off, because they lack the engineering capacity, the skills and the knowhow, and that is perhaps even more scary than anything else, because it would change marine biology in the aftermath as it all wastes into the Pacific Ocean for decades to come. So when you consider the impact on sea life because of Hiroshima and Nagasaki for the longest time, now consider the aftermath of a bomb hundreds of times more powerful, set off by a megalomaniac who has no regard for safety procedures. Those are the actual dangers we face, and the only issue is that acting up against him might actually be more dangerous; we are all caught between the bomb and an irradiated place. Not a good time to be living the dream, because it might just turn into a nightmare.

Here is the paper I mentioned earlier: booten-ramo


Filed under IT, Military, Politics, Science

A legislative system shock

Today the Guardian brings us the news regarding the new legislation on personal data. The interesting part starts with the image of Google and not Microsoft, which is a first item in all this. I will get back to this. The info we get with ‘New legislation will give people right to force online traders and social media to delete personal data and will comply with EU data protection‘ is actually something of a joke, but I will get back to that too. You see, it is the caption with the image that should have been at the top of all this. With “New legislation will be even tougher than the ‘right to be forgotten’ allowing people to ask search engines to take down links to news items about their lives“, we get to ask the question: who is the protection actually for?

The newspaper gives us this: “However, the measures appear to have been toughened since then, as the legislation will give people the right to have all their personal data deleted by companies, not just social media content relating to the time before they turned 18“, yet the reality is that this merely enables new facilitation for data providers to keep a backup of the data in a third-party sense. As I personally see it, the people in all this will merely be chasing a phantom wave.

We see the self-assured Matt Hancock standing there in the image, and in all this I see no reason to claim that these laws will be the most robust set of data laws at all. They might be more pronounced, yet in all this I question how facilitation is dealt with. With “Elizabeth Denham, the information commissioner, said data handlers would be made more accountable for the data “with the priority on personal privacy rights” under the new laws“, you see the viewer will always respond in the aftermath, meaning that the data has already been created.

We can laugh at the statement “The definition of “personal data” will also be expanded to include IP addresses, internet cookies and DNA, while there will also be new criminal offences to stop companies intentionally or recklessly allowing people to be identified from anonymous personal data“; it is laughable because it merely opens up avenues for data farms in the US and Asia, whilst diminishing the value of UK and European data farms. The mention of ‘include IP addresses‘ is funny, as the bulk of the people on the internet are on dynamic IP addresses; it is a protection for large corporations that are on static addresses. The mention of ‘stop companies intentionally or recklessly allowing people to be identified from anonymous personal data‘ is an issue, as intent must be shown and proven; recklessness is something that needs to be proven as well, and not on the balance of probabilities, but beyond all reasonable doubt, so good luck with that idea!

As I read “The main aim of the legislation will be to ensure that data can continue to flow freely between the UK and EU countries after Brexit, when Britain will be classed as a third-party country. Under the EU’s data protection framework, personal data can only be transferred to a third country where an adequate level of protection is guaranteed“, is this another twist in anti-Brexit? You see, none of this shows a clear ‘adequate level of protection‘, which tends to stem from technology, not from legislation; the fact that all this legislation is about ‘after the event‘ gives rise to all this. So as I see it, the gem is at the end, when we see “the EU committee of the House of Lords has warned that there will need to be transitional arrangements covering personal information to secure uninterrupted flows of data“; it makes me wonder what those ‘actual transitional arrangements‘ are and how it is that the new legislation is covering policy on this.

You see, to dig a little deeper we need to look at Nielsen. There was an article last year (at http://www.nielsen.com/au/en/insights/news/2016/uncommon-sense-the-big-data-warehouse.html), here we see: “just as it reached maturity, the enterprise data warehouse died, laid low by a combination of big data and the cloud“, you might not realise this, but it is actually a little more important than most realise. It is partially seen in the statement “Enterprise decision-making is increasingly reliant on data from outside the enterprise: both from traditional partners and “born in the cloud” companies, such as Twitter and Facebook, as well as brokers of cloud-hosted utility datasets, such as weather and econometrics. Meanwhile, businesses are migrating their own internal systems and data to cloud services“.

You see, the actual danger in all that personal data is not the ‘privacy’ part; it is the utilities in our daily lives that are under attack. Insurances, health protection, they are all set to premiums and econometrics. These data farms are all about finding the right margins, and the more they know, the less you get to work with, and they (read: their data) will happily move to wherever the cloud takes them. In all this, the strong legislation merely transports data. You see, the cloud has transformed data in one other way, the part Cisco could not cover. The cloud has the ability to move and work with ‘data in motion’, a concept that legislation has no way of coping with. The power (read: the 8 figure value of a data utility) is about being able to do that, and the parties needing that data, personalised, are willing to pay through the nose for it; it is the holy grail of any secure cloud environment. I was actually relieved that it was not merely me looking at that part; another blog (at https://digitalguardian.com/blog/data-protection-data-in-transit-vs-data-at-rest) gives us the story from Nate Lord. He gives us a few definitions that are really nice to read, yet the part that he did not touch on to the degree I hoped for is that the new grail, the analysis of data in transit (read: in motion), is a cutting edge application; it is what the Pentagon wants, it is what the industry wants and it is what the facilitators want. It is a different approach to real time analysis, and with analysis in transit those people get an edge, an edge we all want.

Let’s give you another clear example that shows the value (and the futility of legislation). Traders get profit by being the first, which is the start of real wealth. So whoever has the fastest connection is the one getting the cream of the trade, which is why trade houses pay millions upon millions to get the best of the best. The difference between 5ms and 3ms results in billions of profit. Everyone in that industry knows that. So every firm has a Bloomberg terminal (at $27,000 per terminal); now consider the option that they could get you that data a millisecond faster and the automated scripts could therefore beat the wave of sales, giving them a much better price; how much are they willing to pay suddenly? This is a different level of arms race, it is weaponised data. The issue is not merely the speed; it is the cutting edge of being able to do it at all.
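
To give that edge a rough shape, here is a toy calculation; the order counts, order sizes and the cent of price improvement are all illustrative assumptions of mine, not figures from Bloomberg or any trading house:

```python
# Toy model of a latency edge: the faster firm reaches the quote first
# and captures the price improvement on contested orders. Every number
# below is an illustrative assumption.

contested_orders_per_day = 50_000   # assumed orders where being first matters
edge_per_share = 0.01               # assumed price improvement, $ per share
shares_per_order = 500              # assumed average order size
trading_days = 252

daily_gain = contested_orders_per_day * edge_per_share * shares_per_order
annual_gain = daily_gain * trading_days

print(f"Daily value of being first:  ${daily_gain:,.0f}")
print(f"Annual value of being first: ${annual_gain:,.0f}")
# ~$250,000 a day, ~$63m a year under these toy assumptions; scale the
# order flow up to a large house and a faster feed, or the $27,000
# terminal, pays for itself almost immediately.
```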

So how does this relate?

I am taking you back to the quote “it would amount to a “right to be forgotten” by companies, which will no longer be able to get limitless use of people’s data simply through default “tick boxes” online” as well as “the legislation will give people the right to have all their personal data deleted by companies“. The issue here is not being forgotten, or being deleted; it is about the data not being stored, and data in motion is not stored, which now shows the futility of the legislation to some extent. You might think that this is BS, so consider the quote by IBM (at https://www.ibm.com/developerworks/community/blogs/5things/entry/5_things_to_know_about_big_data_in_motion?lang=en); it comes from 2013, so IBM was already looking at matters in different areas close to 5 years ago, as were all the large players like Google and Microsoft. With: “data in motion is the process of analysing data on the fly without storing it. Some big data sources feed data unceasingly in real time. Systems to analyse this data include IBM Streams “, here we get part of it. Now consider: “IBM Streams is installed on nearly every continent in the world. Here are just a few of the locations of IBM Streams, and more are being added each year“. In 2010 there were 90 streams on 6 continents, and IBM Streams is not the only solution. As you read that IBM article, you also read that Real-time Analytic Processing (RTAP) is a real thing; it already was then, and the legislation that we now read about does not take care of this form of data processing. What the legislation does, in my view, is not give you any protection; it merely limits the players in the field. It only lets the really big boys play with your details. So when you see the reference to the Bloomberg terminal, do you actually think that you are not part of the data, or ever forgotten? EVERY large newspaper and news outlet would be willing to pay well over $127,000 a year to get that data on their monitors. Let’s call them Reuter Analytic Systems (read: my speculated name for it), which gets them a true representation of all personalised analytical and reportable data in motion. So when they type the name they need, they will get every detail. In this, the events that were given 3 weeks ago from the ITPRO side (at http://www.itpro.co.uk/strategy/29082/ecj-may-extend-right-to-be-forgotten-ruling-outside-the-eu) sound nice, yet the quote “Now, as reported by the Guardian, the ECJ will be asked to be more specific with its initial ruling and state whether sites have to delete links only in the country that requests it, or whether it’s in the EU or globally” sounds like it is the real deal, yet this is about data at rest; the links are all at rest, so the data itself will remain, and as soon as HTML6 comes we might see the beginning of the change. There have been requests on that with “This is the single-page app web design pattern. Everyone’s into it because the responsiveness is so much better than loading a full page – 10-50ms with a clean API load vs. 300-1500ms for a full HTML page load. My goal would be a high-speed responsive web experience without having to load JavaScript“, as well as “having the browser internally load the data into a new data structure, and the browser then replaces DOM elements with whatever data that was loaded as needed“; it is not mere speed, it would allow for dynamic data (data in motion) to be shown. So when I read ‘UK citizens to get more rights over personal data under new laws‘, I just laughed.
The article is 15 hours old and I instantly considered the issues I have shown you today. I will have to wait until the legislation is released, yet I am willing to bet a quality bottle of XO Cognac that data in motion is not part of this; better stated, it will be about stored data. All this whilst the new data norm is still shifting, and with 5G mobile technologies stored data might actually phase out to be a much smaller dimension of data. The larger players knew this and have been preparing for this for several years now. This is also an initial new need for the AI that Google wants desperately, because such a system could ascertain and give weight to all data in motion, something IBM is currently not able to do to the extent they need to.
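
To make ‘analysing data on the fly without storing it’ concrete, here is a minimal sketch of the idea in plain Python (it is not IBM Streams or any vendor’s API, and the feed values are invented): records flow past once, a running summary is updated, and no raw record is ever kept at rest for a deletion request to act on.

```python
# Minimal sketch of data-in-motion analysis: each record is consumed
# once, a running summary is updated, and the record itself is never
# stored. A plain Python generator stands in for a real streaming engine.

def event_stream():
    """Stand-in for an unceasing real-time feed (trades, clicks, sensor reads)."""
    feed = [12.1, 11.8, 12.4, 13.0, 12.7, 12.9, 13.3]  # invented sample values
    for value in feed:
        yield value

count = 0
running_mean = 0.0

for value in event_stream():
    # Update the aggregate in place; the raw value is discarded afterwards.
    count += 1
    running_mean += (value - running_mean) / count

print(f"Events seen: {count}, running mean: {running_mean:.2f}")
# There is no table of raw events left to delete: only the derived summary
# exists, which is exactly the gap stored-data legislation leaves open.
```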

The system is about to get shocked into a largely new format; that has always been the case with evolution. It is just that actual data evolution is a rare thing. It merely shows me how far legislation is behind on all this; perhaps I will be proven wrong after the summer recess. It would be a really interesting surprise if that were the case, but I doubt that will happen. You can see (read about) that for yourself after the recess.

I will follow up on this, whether I was right or wrong!

I’ll let you speculate which of the two I am, as history has proven me right on technology matters every single time (a small final statement to boost my own ego).

 


Filed under Finance, IT, Law, Media, Politics, Science

When the trust is gone

In an age where we see an abundance of political issues and an ever-growing need to sort things out, the news that was given visibility by the Guardian is the one that scared and scarred me the most. With 'Lack of trust in health department could derail blood contamination inquiry' (at https://www.theguardian.com/society/2017/jul/19/lack-of-trust-in-health-department-could-derail-blood-contamination-inquiry), we need, as a first stage, a very different sitting in the House of Lords. You see, the issues (as I am about to explain them) did not start overnight. In this I am implying that a sitting with Jeremy Hunt, Andrew Lansley, Andy Burnham and Alan Johnson in the dock is required. This is an issue that has grown from both sides of the aisle, and as such there needs to be a grilling where certain people are likely to get burned for sure. How bad? That needs to be ascertained and it needs to be done immediately. When you see "The contamination took place in the 1970s and 80s, and the government started paying those affected more than 25 years ago", the UK is about to get a fallout of a very different nature. We agree that this covers the terms of Richard Crossman, Sir Keith Joseph, Barbara Castle, David Ennals, Patrick Jenkin, Norman Fowler, and John Moore. Yet we need to realise that this was in an age that was pre-computers, pre certain data considerations and pre a whole league of other measures that are commonplace today. I remember how I aided departments with an automated document system, relying on 5.25″ floppies, with capabilities less than WordStar or PC-Write ever offered. And none of those systems had any reliable data storage options.

The System/36 was flexible and powerful for its time:

  • It allowed 80 monitors and printers to be connected. All users could access the system’s hard drive or any printer.
  • It provided password security and resource security, allowing control over who was allowed to access any program or file.
  • Devices could be as far as a mile from the system unit.
  • Users could dial into a System/36 from anywhere in the world and get a 9600 baud connection (which was very fast in the 1980s) and very responsive for connections which used only screen text and no graphics.
  • It allowed the creation of databases of very large size. It supported up to about 8 million records, and the largest 5360 with four hard drives in its extended cabinet could hold 1.453 gigabytes.
  • The S/36 was regarded as “bulletproof” for its ability to run many months between reboots (IPLs).

Now, why am I going to this specific system, as the precise issues were not yet known? You see, in those days any serious level of data competency was pretty much limited to IBM; at that time Hewlett Packard was not yet at the level it would reach four years later, and Digital Equipment Corporation (DEC) had revolutionised systems with VAX/VMS, which became the foundation, or better stated, the platform to which true relational database foundations were added through Oracle Rdb (1984), which would actually revolutionise levels of data collection.

Now, we get two separate quotes (not from the article): "Dr Jeremy Bradshaw Smith at Ottery St Mary health centre, which, in 1975, became the first paperless computerised general practice", as well as "It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use"; the second one comes from the Oracle Rdb SQL Reference manual. The second part seems a bit of a stretch; consider the original setting of this. When we look at Oracle’s approach to data integrity, consider the elements given (over time) that are now commonplace.

  • System and object privileges control access to application tables and system commands, so that only authorized users can change data.
  • Referential integrity is the ability to maintain valid relationships between values in the database, according to rules that have been defined.
  • A database must be protected against viruses designed to corrupt the data.

I left one element out for purely logical reasons.
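To show what the referential integrity element above actually buys you, here is a minimal sketch of my own (using Python's built-in sqlite3 module, purely as an illustration, not any NHS or Oracle system): the database itself refuses a treatment record that points to a patient who does not exist.

```python
# Minimal sketch of referential integrity: the database rejects child
# records that reference a parent row which does not exist.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite needs this switched on explicitly

conn.execute("CREATE TABLE patient (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("""
    CREATE TABLE treatment (
        id INTEGER PRIMARY KEY,
        patient_id INTEGER NOT NULL REFERENCES patient(id),
        product TEXT NOT NULL
    )
""")

conn.execute("INSERT INTO patient (id, name) VALUES (1, 'John Doe')")
conn.execute("INSERT INTO treatment (patient_id, product) VALUES (1, 'Factor VIII')")  # valid

try:
    # patient 99 does not exist, so this breaks the defined relationship
    conn.execute("INSERT INTO treatment (patient_id, product) VALUES (99, 'Factor VIII')")
except sqlite3.IntegrityError as err:
    print("rejected:", err)   # FOREIGN KEY constraint failed
```

In the period I am describing next, a rule like that simply did not exist in most systems, which is part of why so much could silently go wrong or go missing.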

Now, in those days the hierarchy of supervisors and system owners was nowhere near what it is now (and often nowhere to be seen), referential integrity was a mere concept, and data viruses were mostly academic; that is, until we got a small presentation by Ralf Burger in 1986. It was in the days of the Chaos Computer Club and my trusty CBM-64.

These elements are to show you that data integrity existed for academic purposes, yet the designers, in their data infancy often enough, had no real concept of rollback data events (some would only be designed far later), nor of the application of databases to the extent that was needed. It would not be until 1982 that dBase II came to the PC market from the founding fathers of what would later be known as Ashton-Tate. George Tate and Hal Lashlee would create a wave that gave us dBase III and, with the creation of Clipper by the Nantucket Corporation, a massive rise in database creations as well as a growth of data products that had never been seen before, in the end propelling data quality towards the state it is in nowadays. With the network abilities within this product, nearly any final-year IT person could have a portfolio of clients, all with custom-built, data-based products. Within 2-3 years (which gets us to 1989), a whole league of data quality, data cleaning and data integrity issues would surface in millions of places, all requiring solutions. It is my personal conviction that this was the point where data became adult, where data cleaning, data rollback as well as data integrity checks became actual issues that were seriously dealt with. So, here in 1989 we are finally confronted with the adult data issues that for the longest of times were correctly understood by no more than a few niche people, who were often enough disregarded (I know that for certain because I was one of them).
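For readers who never had to deal with it, a 'rollback data event' is easier to show than to describe. Below is a minimal sketch of my own (again Python's sqlite3, purely illustrative): a multi-step change either goes through completely or is undone completely, so no half-written records are left behind.

```python
# Minimal sketch of a transaction rollback: either every step of the
# change is applied, or none of it is - no half-written data remains.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (batch TEXT PRIMARY KEY, units INTEGER NOT NULL CHECK (units >= 0))")
conn.execute("INSERT INTO stock VALUES ('batch-A', 10), ('batch-B', 0)")
conn.commit()

try:
    with conn:  # opens a transaction; rolls back automatically on error
        conn.execute("UPDATE stock SET units = units - 5 WHERE batch = 'batch-A'")
        conn.execute("UPDATE stock SET units = units - 5 WHERE batch = 'batch-B'")  # would go negative
except sqlite3.IntegrityError as err:
    print("rolled back:", err)

# batch-A is back at 10: the partial change did not survive
print(conn.execute("SELECT batch, units FROM stock ORDER BY batch").fetchall())
```

Designers in the era I am describing often had no such guarantee at all, which is exactly the data infancy I am pointing at.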

So the essential elements that could have prevented, even if only to some degree, the events we see in the Guardian with "survivors initially welcomed the announcement, while expressing frustration that the decades-long wait for answers had been too long. The contamination took place in the 1970s and 80s" would not come into existence until a decade later.

So when we see "Liz Carroll, chief executive of the Haemophilia Society, wrote to May on Wednesday saying the department must not be involved in setting the remit and powers of an inquiry investigating its ministers and officials. She also highlighted the fact that key campaigners and individuals affected by the scandal had not been invited to the meeting", I am not debating or opposing her in what could be a valid approach; I am merely stating that, to comprehend the issues, the House of Lords needs to take the pulse of events, and of the steps taken, from the Ministers who have been involved over the last 10 years.

When we see "We and our members universally reject meeting with the Department of Health as they are an implicated party. We do not believe that the DH should be allowed to direct or have any involvement into an investigation into themselves, other than giving evidence. The handling of this inquiry must be immediately transferred elsewhere", we see a valid argument, yet when we receive testimonies from people like the ministers of those days, how many would be aware of, let alone comprehend, the data issues that were not even decently comprehended at the time? These data issues are clearly part of all of these events, as will become clear towards the end of the article.

Now, be aware, I am not giving some kind of free pass, or suggesting that those who got the bad blood should be trivialised, ignored or even sidetracked; I am merely calling for a good and clear path that allows for complete comprehension and for the subsequent need of actual prevention. You see, what happens today might be better, yet can we prevent this from ever happening again? In this I have to make a side step to a non-journalistic source. We see (at https://www.factor8scandal.uk/about-factor/) "It is often misreported that these treatments were "Blood Transfusions". Not True. Factor was a processed pharmaceutical product (pictured)", so when I see the Guardian making the same bloody mistake, as shown in the article, we should ask certain parties how they could remain in that same stance of utter criminal negligence (as I personally see it), giving rise to intentional misrepresentation. When we see the quote (source: the Express) "Now, in the face of overwhelming evidence presented by Andy Burnham last month, Theresa May has still not ordered an inquiry into the culture, practice and ethics of the Department of Health in dealing with this human tragedy", we have to face that the actual culprit was not merely data, yet the existence of the cause through Factor VIII is not even mentioned. The Guardian steered clear via the quote "A recent parliamentary report found around 7,500 patients were infected by imported blood products from commercial organisations in the US", and in addition there is the quote "The UK Public Health Minister, Caroline Flint, has said: "We are aware that during the 1970s and 80s blood products were sourced from US prisoners" and the UK Haemophilia Society has called for a Public Inquiry. The UK Government maintains that the Government of the day had acted in good faith and without the blood products many patients would have died. In a letter to Lord Jenkin of Roding the Chief Executive of the National Health Service (NHS) informed Lord Jenkin that most files on contaminated NHS blood products which infected people with HIV and hepatitis C had unfortunately been destroyed ‘in error’. Fortunately, copies that were taken by legal entities in the UK at the time of previous litigation may mean the documentation can be retrieved and consequently assessed". From the sources the Express and the New York Times we see, for example, the quote "Cutter Biological, introduced its safer medicine in late February 1984 as evidence mounted that the earlier version was infecting hemophiliacs with H.I.V. Yet for over a year, the company continued to sell the old medicine overseas, prompting a United States regulator to accuse Cutter of breaking its promise to stop selling the product", with the additional "Cutter officials were trying to avoid being stuck with large stores of a product that was proving increasingly unmarketable in the United States and Europe". So how often did we see the mention of ‘Cutter Biological‘ (or Bayer pharmaceuticals for that matter)?

In the entire Arkansas Prison part we see that there are connections to cases of criminal negligence in Canada in 2006 (where the Canadian Red Cross fell on its sword) and Japan in 2007, as well as the visibility of the entire issue at Slamdance in 2005. So as we see the rise of inquiries, how many have truly investigated the links between these people and how the connection to Bayer pharmaceuticals kept them out of harm’s way for the longest of times? How many people at Cutter Biological have not merely been investigated, but also indicted for murder? When we get ‘trying to avoid being stuck with large stores of a non-sellable product‘ we get the proven issue of intent. Because there were no recall and destroy actions, were there?

Even as we see a batch of sources giving us parts of this year, the entire period from 2005 to 2017 shows that the media has given no, or at best dubious, visibility to all this; even yesterday’s article in the Guardian shows the continuation of bad visibility with the blood packs. So when we look (at http://www.kpbs.org/news/2011/aug/04/bad-blood-cautionary-tale/) and see the August 2011 part with "This “miracle” product was considered so beneficial that it was approved by the FDA despite known risks of viral contamination, including the near-certainty of infection with hepatitis", we wonder how the wonder drug got to be, or remained, on the market. Now, there is a fair defence that some issues would be unknown or even untested to some degree, yet ‘the near-certainty of infection with hepatitis‘ should give rise to all kinds of questions, and it is not the first time that the FDA is seen to approve bad medication, which gives rise to the question why they are allowed to be the cartel of approval when big bucks is the gateway through their door. When we consider the additional quote of "By the time the medication was pulled from the market in 1985, 10,000 hemophiliacs had been infected with HIV, and 15,000 with hepatitis C; causing the worst medical disaster in U.S. history", how come it took 6 years for this to get a decent amount of traction within the UK government?

What happened to all that data?

You see, this is not merely about the events. I believe that if any old systems could be retrieved (a very unlikely reality), digital forensics might well find, in the erased (not overwritten) records, matters that could have been identified in those very early records. Especially when we consider the infancy of data integrity and data cleaning, what other evidence could have surfaced? In all this, no matter how we dig in places like the BBC and other places, we see a massive lack of visibility on Bayer Pharmaceuticals. So when we look (at http://pharma.bayer.com/en/innovation-partnering/research-focus/hemophilia/), we might accept that the product has been corrected, yet their own site gives us "the missing clotting factor is replaced by a ‘recombinant factor’, which is manufactured using genetically modified mammalian cells. When administered intravenously, the recombinant factor helps to stop acute bleeding at an early stage or may prevent it altogether by regular prophylaxis. The recombinant factor VIII developed by Bayer for treating hemophilia A was one of the first products of its kind. It was launched in 1993". So was this solution based on the evolution of getting thousands of people killed? The sideline "Since the mid-1970s Bayer has engaged in research in haematology focusing its efforts on developing new treatment options for the therapy of haemophilia A (factor VIII deficiency)" means that, whether valid or not (depending on the link between Bayer Pharmaceuticals UK and Cutter Biological), the mere fact that these two names are missing in all the mentions is a matter for additional questions, especially as Bayer became the owner of it all between 1974 and 1978, which puts them clearly in the required crosshairs of certain activities, like depleting bad medication stockpiles. Again, not too much being shown in the several news articles I was reading. When we see the Independent, we see ‘Health Secretary Jeremy Hunt to meet victims’ families before form of inquiry is decided‘; in this case it seems a little far-fetched that the presentation by Andy Burnham (as given in the Express) would not have been enough to give an immediate green light to all this. Even as the Independent is hiding behind blood bags as well, they do give the caption of Factor VIII with it, yet we see no mention of Bayer or Cutter. There is a mention of ‘prisoners‘ and the fact that their blood was paid for, yet no mention of the events in Canada and Japan, two instances that give rise to an immediate and essential need for an inquiry.

In all this, we need to realise that no matter how deep the inquiry goes, given the amount of evidence that could have been wiped or set asunder from the eyes of the people by the administrative gods of Information Technology as it was between 1975 and 1989, there is a dangerous situation. One that came unwillingly through the evolution of data systems; one that seems to be the intent of the reporting media, as we see the utter absence of Bayer Pharmaceuticals in all of this, whilst a growing pool of evidence through documentaries and other sources loses visibility as the media settles into presentations that merely skate around the subject. Until the inquiry becomes official we see a lot less than the people are entitled to, so is that another instance of the ethical chapters of the Leveson inquiry? And when this inquiry becomes an actuality, what questions will we see absent or sidelined?

All this gets me back to the Guardian article, as we see "The threat to the inquiry comes only a week after May ordered a full investigation into how contaminated blood transfusions infected thousands of people with hepatitis C and HIV", so how about the events from 2005 onwards? Were they mere pharmaceutical chopped liver? In the linked ‘Theresa May orders contaminated blood scandal inquiry‘ article there was no mention of Factor VIII, Bayer (pharmaceuticals) or Cutter (biological). It seems we need to accept that ethical issues have been trampled on, so a mention of "a criminal cover-up on an industrial scale" is not a mere indication; it is an almost given certainty. In all that, as the inquiry gets traction, I wonder how adamant both current and past governments will be to avoid skating into certain realms of the events (like naming the commercial players), and when we realise this, will there be any justice for the victims, especially when the data systems of those days have been out of time for quite a while and the legislation on legacy data is pretty much non-existent? When the end balance is drawn up, with (as I personally see it) a requirement to consider replacing whatever Bayer Pharmaceuticals is supplying to the UK NHS, I wonder who will be required to fall on the virtual sword of non-accountability. The mere reason being that when we see (at http://www.annualreport2016.bayer.com/) that Bayer is approaching a revenue of 47 billion (€ 46,769M) in 2016, should there not be a consequence for the players ‘depleting unsellable stock‘ at the expense of thousands of lives? This is another matter that is interestingly absent from the entire UK press cycle. And this is not me just speculating; the sources show a clear absence whilst the FDA reports show other levels of failing. It seems that some players forget that lots of data is now globally available, which seems to fuel the mention of ‘criminal negligence‘.

So you have a nice day, and when you see the next news cycle on bad blood, showing blood bags and making no mention of Factor VIII or the pharmaceutical players clearly connected to all this, you just wonder who is doing the job for these journalists, because the data, as it needed to be shown, was easily found in the most open of UK and US governmental places.

 


Filed under Finance, IT, Law, Media, Politics, Science