
Ghost in the Deus Ex Machina

James Bridle is treating the readers of the Guardian to a spotlight event. It is a fantastic article that you must read (at https://www.theguardian.com/books/2018/jun/15/rise-of-the-machines-has-technology-evolved-beyond-our-control-?). Even as it starts with “Technology is starting to behave in intelligent and unpredictable ways that even its creators don’t understand. As machines increasingly shape global events, how can we regain control?” I am not certain that it is correct; it is merely a very valid point of view. This setting is being pushed even further by places like Microsoft Azure, Google Cloud and AWS; we are moving into new territories and the experts required have not been schooled yet. It is (as I personally see it) the consequence of next-generation programming on the framework of cloud systems that have thousands of additional unused, or un-monitored, parameters (read: some of them mere properties), and the scope of these systems is growing. Each developer is making their own app-box and they are working together, yet in many cases hundreds of properties are ignored, giving us weird results. There is actually (from the description James Bridle gives) an early 90’s example, which is not the same, but it illustrates the event.

A program had window settings and sometimes there would be a ghost window. There was no explanation, and no one could figure out why it happened, because it did not always happen, yet it could be replicated. In the end, the programmer had been lazy and had created a global variable with a name identical to a visibility property, and due to a glitch that setting got copied. When the system did a reset on the window, all but a few specific properties were reset. You see, the reset expected those elements to be either ‘true’ or ‘false’, yet their initial value was ‘null’ and the reset would not allow for that, so once given a reset they would not return to the ‘null’ setting but retained whatever value they last held. It was fixed at some point, but the logic remains: a value could not return to ‘null’ unless specifically programmed. Over time these systems got more intelligent and that issue did not return; such is the evolution of systems. Now it becomes a larger issue: we have systems that are better, larger and in some cases isolated. Yet, is that always the case? What happens when an error level surpasses two systems? Is that even possible? Now, most people will state that I do not know what I am talking about. Yet they forget that any system is merely as stupid as its maker allows it to be, so in 2010 Sha Li and Xiaoming Li from the Dept. of Electrical and Computer Engineering at the University of Delaware gave us ‘Soft error propagation in floating-point programs‘, which gives us exactly that. You see, the abstract gives us “Recent studies have tried to address soft errors with error detection and correction techniques such as error correcting codes and redundant execution. However, these techniques come at a cost of additional storage or lower performance. In this paper, we present a different approach to address soft errors.
We start from building a quantitative understanding of the error propagation in software and propose a systematic evaluation of the impact of bit flip caused by soft errors on floating-point operations“, we can translate this into ‘an option to deal with shoddy programming‘, which is not entirely wrong, but the essential truth is that hardware makers, OS designers and application makers all have their own error system; each of them has a much larger system than any requires, and some overlap and some do not. The issue is optionally, speculatively, seen in ‘these techniques come at a cost of additional storage or lower performance‘; now consider the greed-driven makers that do not want to sacrifice storage and will not hand over performance, not one way, not the other way, but want a system that tolerates either way. Yet this still has a level one setting (Cisco joke) in which hardware is ruler, so the settings will remain, and it merely takes one third-party developer using some specific uncontrolled error hit, with automated assumption-driven slicing and dicing to avoid storage as well as performance; once given to the hardware, it will not forget, so now we have some speculative ‘ghost in the machine’, a mere collection of error settings and properties waiting to be interacted with. Don’t think that this does not exist; the paper sheds light on this in part with: “some soft errors can be tolerated if the error in results is smaller than the intrinsic inaccuracy of floating-point representations or within a predefined range. We focus on analysing error propagation for floating-point arithmetic operations. Our approach is motivated by interval analysis. We model the rounding effect of floating-point numbers, which enable us to simulate and predict the error propagation for single floating-point arithmetic operations for specific soft errors.
In other words, we model and simulate the relation between the bit flip rate, which is determined by soft errors in hardware, and the error of floating-point arithmetic operations“. That I can illustrate with my earliest errors in programming (decades ago). With Borland C++ I got my first taste of programming and I was in assumption mode making my first calculation, which gave in the end: 8/4=2.0000000000000003; at that point (1991) I had no clue about floating-point issues. I did not realise that this was merely the machine and me not giving it the right setting. So now we have all learned that part, yet we forget that all these new systems have their own quirks and hidden settings that we basically do not comprehend, as the systems are too new. This now all interacts with an article in the Verge from January (at https://www.theverge.com/2018/1/17/16901126/google-cloud-ai-services-automl); the title ‘Google’s new cloud service lets you train your own AI tools, no coding knowledge required‘ is a bit of a giveaway. Even when we see: “Currently, only a handful of businesses in the world have access to the talent and budgets needed to fully appreciate the advancements of ML and AI. There’s a very limited number of people that can create advanced machine learning models”, it is not merely that part; behind it were the makers of the systems and the apps that allow you to interface, and that is where we see the hidden parts that will not be uncovered for perhaps years or decades. That is not a flaw from Google, or an error in their thinking. The mere realisation of ‘a long road ahead if we want to bring AI to everyone‘ stands in light of the better programmers, the clever people and the mere wildcards who turn 180 degrees in a one-way street; they cannot be predicted and there will always be one who does so, because they figured out a shortcut.
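To make the floating-point part tangible, here is a small sketch (in Python, simply because it is easy to run today; the exact 1991 Borland C++ output cannot be reproduced here, and this is my toy illustration of the paper's subject, not the authors' method). Part (a) shows the representation quirk behind my old 8/4 surprise; part (b) simulates a single soft-error bit flip on an IEEE-754 double, showing that the size of the resulting error depends entirely on which bit gets hit:

```python
import struct

# (a) some innocent-looking decimals have no exact binary representation;
# the machine is right, the programmer's expectation is the wrong 'setting'
print(0.1 + 0.2)            # 0.30000000000000004
print(0.1 + 0.2 == 0.3)     # False

# (b) a simulated soft error: flip one bit of the 64-bit IEEE-754 pattern
def flip_bit(x: float, bit: int) -> float:
    """Return x with one bit of its double representation flipped."""
    (pattern,) = struct.unpack("<Q", struct.pack("<d", x))
    return struct.unpack("<d", struct.pack("<Q", pattern ^ (1 << bit)))[0]

for bit in (0, 30, 52, 62):   # two mantissa bits, two exponent bits
    print(f"bit {bit:2d}: 1.0 becomes {flip_bit(1.0, bit)!r}")
# a low mantissa bit barely moves the value; an exponent bit can halve it
# (bit 52) or blow it up to infinity (bit 62)
```

That is the whole point of the paper quoted above: whether a flipped bit is tolerable depends on where it lands relative to the intrinsic inaccuracy of the representation.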

A small sidestep

When we consider risk-based thinking and development, we tend to think in opposition, because it is not the issue of risk, or the given of opportunity. We start with the flaw that we each see differently what constitutes risk. Even as the makers all think the same, the users do not always behave that way. For this I need to go back to the late 80’s, when I discovered that certain books in the Port of Rotterdam were cooked. No one had figured it out, but I recognised one part through my Merchant Naval education. It was the one rule no one looked at in those days; programmers just were not given that element. In a port there is one rule that computers could not comprehend in those days: the concept of ‘Idle Time’ cannot ever be a linear one. Once I saw that, I knew where to look. So when we get back to risk management issues, we see ‘An opportunity is a possible action that can be taken, we need to decide. So this opportunity requires we decide on taking action and that risk is something that actions enable to become an actual event to occur but is ultimately outside of your direct control‘. Now consider that risk changes with the tide at a seaport, but we forget that in opposition to a king tide, there is at times also a neap tide. A ‘supermoon’ is an event that makes the low tide even lower. So now we see the risk of getting beached for up to 6 hours, because the element was forgotten. The fact that it can happen once every 18 months makes the risk low and it does not impact everyone everywhere, but that setting shows that the dangers (read: risks) of events are intensified when a clever person takes a shortcut. So when NASA gives us “The farthest point in this ellipse is called the apogee. Its closest point is the perigee. During every 27-day orbit around Earth, the Moon reaches both its apogee and perigee.
Full moons can occur at any point along the Moon’s elliptical path, but when a full moon occurs at or near the perigee, it looks slightly larger and brighter than a typical full moon. That’s what the term “supermoon” refers to“. So now the programmer needed a space monkey (or tables), and when we consider the shortcut, he merely needed them once every 18 months; in the life cycle of a program that means he merely ran that risk 2-3 times during the lifespan of the application. So tell me, how many programmers would have taken the shortcut? These are the settings we now see in optional machine learning. With that part accepted and the pragmatic ‘Let’s keep it simple for now‘, which we all could have accepted in this, the issue comes when we combine error flags with shortcuts.
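That 2-3 figure is simple arithmetic; as a quick sketch (the four-year application lifespan here is my assumption, purely for illustration):

```python
months_per_event = 18          # the 'supermoon' cycle mentioned above
lifespan_years = 4             # assumed application lifespan (illustrative)

events_in_lifespan = lifespan_years * 12 / months_per_event
print(events_in_lifespan)      # ~2.7, the 2-3 exposures the shortcut accepts
```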

So we get to the Guardian with two parts. The first: “Something deeply weird is occurring within these massively accelerated, opaque markets. On 6 May 2010, the Dow Jones opened lower than the previous day, falling slowly over the next few hours in response to the debt crisis in Greece. But at 2.42pm, the index started to fall rapidly. In less than five minutes, more than 600 points were wiped off the market. At its lowest point, the index was nearly 1,000 points below the previous day’s average“, the second being “In the chaos of those 25 minutes, 2bn shares, worth $56bn, changed hands. Even more worryingly, many orders were executed at what the Securities Exchange Commission called “irrational prices”: as low as a penny, or as high as $100,000. The event became known as the “flash crash”, and it is still being investigated and argued over years later“. In 8 years the algorithms and the systems have advanced and the original settings no longer exist. Yet the entire setting of error flagging and the use of elements and properties is still on the board; even as they evolved and the systems became stronger, new systems interacted with much faster and stronger hardware, changing the calculating events. So when we see “While traders might have played a longer game, the machines, faced with uncertainty, got out as quickly as possible“, these were uncaught elements in a system that was truly clever (read: had more data to work with), and as we are introduced to “Among the various HFT programs, many had hard-coded sell points: prices at which they were programmed to sell their stocks immediately. As prices started to fall, groups of programs were triggered to sell at the same time. As each waypoint was passed, the subsequent price fall triggered another set of algorithms to automatically sell their stocks, producing a feedback effect“, we get the mere realisation that the machine wins every time in a man-versus-machine setting, but only toward the calculations.
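The feedback effect described in that last quote can be sketched in a few lines (a deliberately naive toy, not a market model: I assume a fixed price impact per sale and a handful of hard-coded sell points, purely to show the cascade):

```python
def cascade(price, sell_points, impact_per_sale):
    """Trip hard-coded sell points one after another; return the price path."""
    path = [price]
    pending = sorted(sell_points, reverse=True)   # highest stops trip first
    while pending and price <= pending[0]:
        pending.pop(0)                  # this program dumps its stock...
        price -= impact_per_sale        # ...pushing the price further down,
        path.append(price)              # which can trip the next stop
    return path

# ten programs with stops at 99, 98, ..., 90; the price opens at 98.5
path = cascade(price=98.5, sell_points=range(90, 100), impact_per_sale=1.5)
print(path)    # each stop trips the next, all the way down to 83.5
```

One initial dip trips the first stop, whose sale trips the next, and so on: no single program did anything 'wrong', yet the combined settings produce the fall.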
The initial part I mentioned regarding really low tides was ignored; where the person realises that at some point the tide goes back up, no matter what, the machine never learned that part, because the ‘supermoon cycle’ was avoided due to pragmatism, and we see that in the Guardian article with: ‘Flash crashes are now a recognised feature of augmented markets, but are still poorly understood‘. That reason remains speculative, but what if it is not the software? What if there is merely one set of definitions missing, because the human factor auto-corrects for that through insight and common sense? I can relate to that by setting the ‘insight’ that a supermoon happens perhaps once every 18 months against the common sense that it returns to normal within a day. Now, are we missing out on the opportunity of using a neap tide as an opportunity? It is merely an opportunity if another person fails to act on such a neap tide. Yet in finance it is not merely a neap tide; it is an optional artificial wave that can change the waves when one system triggers another, and in nanoseconds we have no way of predicting it, merely, over time, the option to recognise it at best (speculatively speaking).

We see a variation of this in the Go-game part of the article. When we see “AlphaGo played a move that stunned Sedol, placing one of its stones on the far side of the board. “That’s a very strange move,” said one commentator“, you see it opened us up to something else. So we get “AlphaGo’s engineers developed its software by feeding a neural network millions of moves by expert Go players, and then getting it to play itself millions of times more, developing strategies that outstripped those of human players. But its own representation of those strategies is illegible: we can see the moves it made, but not how it decided to make them“. That is where I personally see the flaw. You see, it did not decide; it merely played every variation possible, the ones a person will never consider. Because it played millions of games, which at 2 games a day represents some 1,370 years of play, the computer ‘learned’ that the human never countered ‘a weird move’ before; some can be corrected for, but that one offers opportunity, whilst at the same time exposing its opponent to additional risks. Now it is merely a simple calculation and the human loses. And as every human player lacks the ability to play for a millennium, the hardware wins, always, after that. The computer never learned desire, or human time constraints; as long as it has energy it never stops.
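That 1,370-year figure is easy to verify (assuming, as the text does, one million self-played games set against a human pace of 2 games a day):

```python
games = 1_000_000        # 'millions of times' from the article, taken as one million
games_per_day = 2        # the human pace assumed above

years = games / (games_per_day * 365)
print(round(years))      # 1370
```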

The article is amazing and showed me a few things I only partially knew, and one I never knew. It is an eye opener in many ways, because we are at the dawn of what is advanced machine learning and as soon as quantum computing is an actual reality we will get systems with the setting that we see in the Upsilon meson (Y). Leon Lederman discovered it in 1977, so now we have a particle that is not merely off or on, it can be: null, off, on or both. An essential setting for something that will be close to true AI, a new way of computers to truly surpass their makers and an optional tool to unlock the universe, or perhaps merely a clever way to integrate hardware and software on the same layer?

What I got from the article is the realisation that the entire IT industry is moving faster and faster and most people have no chance of staying up to date with it. Even publications from 2 years ago have already been surpassed by players like Google, reducing storage to a mere cent per gigabyte, and that is not all; media and entertainment are offered great leaps too. When we consider the partnership between Google and Teradici we see another path. When we see “By moving graphics workloads away from traditional workstations, many companies are beginning to realize that the cloud provides the security and flexibility that they’re looking for“, we might not see the scope of all this. So the article (at https://connect.teradici.com/blog/evolution-in-the-media-entertainment-industry-is-underway) gives us “Cloud Access Software allows Media and Entertainment companies to securely visualize and interact with media workloads from anywhere“, which might be the ‘big load’ but it actually is not. This approach gives light to something not seen before. When we consider makers of software like Q Research Software and Tableau Software: Business Intelligence and Analytics, we see an optional shift; under these conditions there is now a setting where a clever analyst with merely a netbook and a decent connection can set up a work frame of dashboards and result presentations that allows the analyst to produce the results and presentations for the bulk of all Fortune 500 companies in a mere day, making 62% of that workforce obsolete. In addition we see: “As demonstrated at the event, the benefits of moving to the cloud for Media & Entertainment companies are endless (enhanced security, superior remote user experience, etc.). And with today’s ever-changing landscape, it’s imperative to keep up.
Google and Teradici are offering solutions that will not only help companies keep up with the evolution, but to excel and reap the benefits that cloud computing has to offer“. I take it one step further: as the presentation to stakeholders and shareholders is about telling ‘a story’, the ability to do so and adjust the story on the go allows for a lot more. The question is no longer the setting up of such systems; it is now reduced to correctly vetting the data used, and the moment that falls away we will get a machine-driven presentation of settings the machine need no longer comprehend, and as long as the story is accepted and swallowed, we will not question the data. A mere presented greyscale with the extremes filtered out. In the end we all signed up for this, and the status quo of big business remains stable and unchanging no matter what the economy does in the short run.

We get cognitive thinking from the AI through the use of data, merely because we can no longer catch up, and in that we lose the reasoning and comprehension of data at the high levels we should have.

I wonder as a technocrat how many victims we will create in this way.




Filed under Finance, IT, Media, Science

The Sleeping Watchdog

Patrick Wintour, the Guardian’s diplomatic editor, gave us the news on the OPCW merely a few hours ago [update: yesterday, 13 minutes before an idiot with a bulldozer went through the fibre-optic cable]. So when we see “a special two-day session in late June in response to Britain’s call to hand the body new powers to attribute responsibility for chemical weapons attacks“, what does that mean? You see, the setting is not complex, it should be smooth sailing, but is it?

Let’s take a look at the evidence, most of it from the Guardian. I raised issues as early as March 2018 with ‘The Red flags‘ (at https://lawlordtobe.com/2018/03/27/the-red-flags/): we see no evidence of Russian handling, we see no evidence on the delivery, merely a rumour that ‘More than 130 people could have been exposed‘ (‘could’ being the operative word) and in the end no fatalities; the target survived. Whilst a mere silenced 9mm solution from a person doing a favour for Russian businessman Sergey Yevgenyevich Naryshkin would have done the trick with no fuss at all. And in Russia, you can’t even perceive the length of the line of Russians hoping to be owed a favour by Sergey Yevgenyevich Naryshkin. In addition, all these months later we still have not seen any conclusive evidence of ANY kind that it was a Russian state-based event. Mere emotional speculations on ‘could’, ‘might be‘ as well as ‘expected‘. So where do we stand?

A little later, in April, we see in the article ‘Evidence by candlelight‘ (at https://lawlordtobe.com/2018/04/04/evidence-by-candlelight/) the mere conclusion ‘Porton Down experts unable to verify precise source of novichok‘; so not only could the experts not determine the source (the delivery device), it also gives weight to the lack of evidence that it was a Russian thing. Now, I am not saying that it was NOT Russia; we merely cannot prove that it was. In addition, I was able to find several references to a Russian case involving Ivan Kivelidi and Leonard Rink in 1995, whilst the so-called humongous expert named Vil Mirzayanov stated ““You need a very high-qualified professional scientist,” he continued. “Because it is dangerous stuff. Extremely dangerous. You can kill yourself. First of all you have to have a very good shield, a very particular container. And after that to weaponize it – weaponize it is impossible without high technical equipment. It’s impossible to imagine.”” I do not oppose that, because it sounds all reasonable and my extended brain cells on chemical weapons have not been downloaded yet (I am still on my first coffee). Yet in all this the OPCW setting in 2013 was: “Regarding new toxic chemicals not listed in the Annex on Chemicals but which may nevertheless pose a risk to the Convention, the SAB makes reference to “Novichoks”. The name “Novichok” is used in a publication of a former Soviet scientist who reported investigating a new class of nerve agents suitable for use as binary chemical weapons. The SAB states that it has insufficient information to comment on the existence or properties of “Novichoks”“. I can accept that the OPCW is not fully up to speed, yet the information from 1995, 16 years earlier, already gave us this setting: “In 1995, a Russian banking magnate called Ivan Kivelidi and his secretary died from organ failure after being poisoned with a military grade toxin found on an office telephone.
A closed trial found that his business partner had obtained the substance via intermediaries from an employee of a state chemical research institute known as GosNIIOKhT, which was involved in the development of Novichoks“, which we got from the Independent (at https://www.independent.co.uk/news/uk/crime/uk-russia-nerve-agent-attack-spy-poisoning-sergei-skripal-salisbury-accusations-evidence-explanation-a8258911.html). So when you realise these settings, we need to realise that the OPCW is flawed on a few levels. It is not the statement “the OPCW has found its methods under attack from Russia and other supporters of the Syrian regime“; the mere fact of what we see regarding Novichoks implies that the OPCW is a little out of its depth, and their own documentation implies this clearly (as seen in the previous blog articles). I attached one of those documents in the article ‘Something for the Silver Screen?‘ (at https://lawlordtobe.com/2018/03/17/something-for-the-silver-screen/); so, a mere three months ago, there were already several documents out in the open that give light to a flawed OPCW. So even as we accept ‘chemist says non-state actor couldn’t carry out attack‘, the fact that it did not result in fatalities gives us that it actually might be a non-state action; it might not be an action by any ‘friend’ of Sergey Yevgenyevich Naryshkin or Igor Valentinovich Korobov. These people cannot smile, not even in their official photos. No sense of humour at all, and they tend to be the people who have a very non-complimentary view on failure. So we are confronted not merely with the danger of Novichoks, or with the fact that it is very likely in non-state hands.
The fact that there is no defence, not the issue of the non-fatalities, but the fact that the source could not be determined, is the dangerous setting, and even as we hold nothing against Porton Down, the 16-year gap shown by the OPCW implies that the experts relied on by places like Porton Down are not available, which changes the landscape by a lot, whilst many will wonder how that matters. That evidence could be seen as important when we reconsider the chemical attacks in Syria on 21st August 2013; not only did the US sit on their hands, it is now not entirely impossible that they did not have the skills at their disposal to get anything done. Even as a compound like Sarin is no longer really a mystery, the setting we saw then gives us the other part. The Associated Press giving us at the time “anonymous US intelligence officials as saying that the evidence presented in the report linking Assad to the attack was “not a slam dunk.”” is one part; the fact that with all the satellites looking there, there is no way to identify the actual culprit is an important other part. You see, we could accept that the Syrian government was behind this, but there is no evidence; no irrefutable fact was ever given. That implies that when it comes to delivery systems, there is a clear gap, not merely for Novichoks, making the entire setting a lot less useful. In this the website of the OPCW (at https://www.opcw.org/special-sections/syria-and-the-opcw/) is partial evidence. When we see “A total of 14 companies submitted bids to undertake this work and, following technical and commercial evaluation of the bids, the preferred bidders were announced on 14th February 2014. Contracts were signed with two companies – Ekokem Oy Ab from Finland, and Veolia Environmental Services Technical Solutions in the USA” in light of the timeline, it implies that there was no real setting and one was implemented only after Ghouta; I find that part debatable and not reassuring.
In addition, the fact-finding mission was not set up until 2014; this is an issue, because one should have been set up the day after the attacks, and even though nothing would have been available and the status would have been idle (for very valid reasons), the fact that the mission was not set up until 2014 gives light to even longer delays. In addition, we see a part that carries no blame for the OPCW, the agreement “Decides further that the Secretariat shall: inspect not later than 30 days after the adoption of this decision, all facilities contained in the list referred to in paragraph 1(a) above;“, perfectly legal (read: diplomacy driven) talk giving the user of those facilities 30 days to get rid of the evidence. Now, there is no blame on the OPCW in any way, yet were these places not monitored by satellites? Would the visibility of increased traffic and activities not have given light to the possible culprit in this all? And when we look at the paragraph 1(a) part and we see: “the location of all of its chemical weapons, chemical weapons storage facilities, chemical weapons production facilities, including mixing and filling facilities, and chemical weapons research and development facilities, providing specific geographic coordinates;“, is there not a decent chance (if the Syrian government was involved) that ‘all locations‘ would be seen as ‘N-1‘, with the actually used fabrication location conveniently missing from the list? #JustSaying

It seems to me that if this setting is to be more capable (professional is the wrong word) and effective, a very different setting is required. You see, that setting becomes very astute when we realise that non-state actors are currently on the table; the danger of a lone wolf getting creative is every bit as important to the equation. The OPCW seems to be in an ‘after the fact‘ setting, whilst the intelligence community needs an expert that is supportive towards their own experts in a pro-active setting; not merely the data mining part, but the option to see flagged chemicals that could be part of a binary toxic setting requires a different data scope, and here we see the dangers when we realise that the ‘after the fact‘ setting, with a 16-year gap missing the danger, is something that is expensive and, whilst ‘useless’ would be the wrong word, ‘effective’ it is not; too much evidence points at that. For that we need to see that their mission statement is to ‘implement the provisions of the Chemical Weapons Convention (CWC) in order to achieve the OPCW’s vision of a world that is free of chemical weapons and of the threat of their use‘, yet when we look at the CWC charter we see: ‘The Convention aims to eliminate an entire category of weapons of mass destruction by prohibiting the development, production, acquisition, stockpiling, retention, transfer or use of chemical weapons by States Parties. States Parties, in turn, must take the steps necessary to enforce that prohibition in respect of persons (natural or legal) within their jurisdiction‘, which requires a pro-active setting, and that is definitely lacking from the OPCW, raising the issue whether their mandate is one of failure. That requires a very different scope, different budgets and above all a very different set of resources available to the OPCW, or whoever replaces the OPCW, because that part of the discussion is definitely not off the table for now.
The Salisbury event and all the available data seems to point in that direction.



Filed under Media, Politics, Science

Bang Bang Common Sense

Jason Wilson brought to light an article (at https://www.theguardian.com/world/2018/jun/03/us-senate-hopeful-washington-joey-gibson) that made me think. You see, I am pragmatic and pro-guns; I never hid that. Yet in equal measure I have an issue with people bringing their guns to a nightclub, especially when they are not members of organised crime. So, when you do a dancing backflip and accidentally shoot a person as you pick up your gun, FBI agent or not, it raises questions.

This is not me having a go at that officer; there might be a very valid reason for him to have had his piece on him, but making backflips (impressive as they may be) was not the brightest thought to be having. Yet that is not what this will be about. You see, Joey Gibson, the far-right Republican Senate candidate, is advocating what I call a scenario too dangerous for words. With “That’s why we’re doing it, there’s people dying. Gun-free zones disgust me because we’re not protecting the kids on the campus. People look at it backwards“, the dangerous precedent is set. Those who do not know, or have no proper skill to counter an armed attack, end up being dead and handing additional weapons and ammunition to the attackers. I think we all realise that the setting of having an armed response team in any university might not be the worst idea. In that we need to realise that there are trained professionals from the Army, Marines, Navy and police, now retired, that might be more than willing to be there, making a few dollars and being present when there is real trouble. In the first hour it could lower or even prevent fatalities. Making the university no longer a gun-free zone and letting anyone have a go is not just stupid; it is very dangerous, and that approach will increase casualties by a lot. The moment these extreme thinkers or mental health cases realise that the university has additional guns and ammunition up for grabs, they might just take the leap with one gun and one clip, which is a realistic and serious danger. Until you have shot a person, or are a second away from shooting someone, you do not know whether you have what it takes, and the group that does not will be arming the attackers. The second consideration is weapon skill. You might have shot at those nice targets on the range, or at dummies standing still, but once they are moving, being accurate becomes too unpredictable.
So here I am, as a virtual supporter of the NRA, stating that this setting is way too dangerous to consider. I never had any kids, but I realise the need to protect the next generation, and letting everyone go armed on the university campus makes the danger worse, not smaller.

Yet the issue is larger. You see, Joey Gibson is not some right extremist. As a Japanese American (or is that American Japanese?) we see that he denounces white supremacists, advocates peaceful actions and is outspokenly anti-antifa (the anti-fascist movement). Most of this was seen last year (at https://www.washingtontimes.com/news/2017/sep/3/patriot-prayer-free-speech-group-urges-supporters-/). It was Valerie Richardson who gave the goods in the Washington Times. The issue becomes murkier when we see “So many people were so disgusted about how they treated us. The liberals were literally standing around with peace signs and love signs while antifa is just yelling and cussing and beating the crap out of us and pepper-spraying us“, which gets us to the question why anyone would pepper-spray a person advocating peace. Even as the article gives us a lot, I think we are missing out; a better, in-depth article by a writer (Valerie or someone else) who would actually do an in-depth profile of Joey Gibson would help, especially as that person is running for the Senate. It seems that the one person giving a decent and perhaps the most valid view was Daveed Walzer Panadero, who gave us “urging antifa to stop trying to silence Mr. Gibson and “get that man a podium and a mike.”“; that makes sense, because if we do not know what he stands for, we cannot make up our registered voting minds.

Yet as we go back to the article, where exactly is he plotting? So far he seems to be out in the open. Yet I also acknowledge the setting we see with: “Speakers with handguns or rifles addressed a small crowd in McGraw Square, at the heart of a busy shopping district. At the other side of the square, around 10 members of an armed leftist group, the Puget Sound John Brown Gun Club, stood watching for what their spokesman called a “known white supremacist element”. They carried AR-15s and side arms“; it is a dangerous setting! You see, it only takes one person to lose his or her cool and we end up in a setting where 20 rifles will be used, with zero chance of innocent bystanders not getting hurt. As a pro-gun person, I recognise that danger and I see levels of irresponsibility that are way too high, because the trial that follows will be all about ‘the blame game’ and there will be no one around able to tell who fired first; in all likelihood that person would be deceased, along with optionally dozens of others.

The knife cuts both ways: gun banning will not work, not ever (those who say it will in America are plain nuts), yet the open-gun policy is equally dangerous, and until we recognise that guns do not kill people, people kill people, this situation will not get better. As I wrote before, until the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) gets a real injection of resources and funds, this situation will never improve. In that regard, Joey Gibson can preach and pray all he likes, yet the setting of no gun-free zones is just too dangerous; that alone might defeat his bid for the Senate or Congress. You see, as I discussed last February in ‘United they grow‘ (at https://lawlordtobe.com/2018/02/22/united-they-grow/), as well as ‘In continuation of views‘ (at https://lawlordtobe.com/2018/02/23/in-continuation-of-views/), the issue was never the NRA; in a much larger setting the issue sits with the ATF and the media, as well as the woolly people proclaiming that the NRA is killing their children, whilst the massive issue is that the ATF cannot get anything done due to a lack of funds and resources. The largest agency that could do something is not allowed to do anything, and the people remain ignorant, deaf and blind to that part of the equation. This implies that not only are things not changing for the better, the view that Joey Gibson gives us is that no actual progress will be possible, adding to the no-gun-free-zones debacle; it is just too dangerous. Recognising that one element would solve a lot of issues and could make changes for the better, yet the ATF is bound by a budget that is 10 years old, resources closer to 15 years outdated and an absence of clear leadership that goes back to before the Obama administration, so why would progress ever be made?

So by the time we get to the explosives directive of the ATF, we might wonder how many buildings in New York and Los Angeles are still standing at present. Is it not interesting that we are kept in the dark on that setting?

Yet, when we get back to Joey Gibson, there is one side that most were not aware of, and it is awesome that Jason Wilson gives us that view. With “Washington is seen as a Democratic state, but that impression conceals a deep divide between urban and rural, west and east, characteristic of west coast states. Money, power and population are centred on Seattle, which is often resented by rural conservatives in the state’s eastern half. Gibson’s rhetoric has always been stridently critical of the liberal cities. In Seattle, he said the city “despises patriots” and “will spit in your face for loving the constitution”“, which most (including me) would not have been aware of. So when we consider that King and Pierce county represent a third of the entire state, we see another picture entirely; oh, and by the way, those two are overwhelmingly Democratic. Even as we might accept Sightline on ‘follow the money‘ (at http://www.sightline.org/2016/10/11/following-the-money-in-washington-state-elections-part-1/), as it shows us issues on campaign funding, it does not show the influence that the wealthy have in some districts in the east; the results say that this is not the case, yet there is an issue when we look at the map (at https://www.nytimes.com/elections/results/washington). The speculated issue is that rural Washington State is left to fend for itself. We can understand that logic requires the funds to be set on the coastal area where the cities are, but when we see the Yakima Herald (at http://www.yakimaherald.com/news/local/with-percent-in-program-food-stamp-cuts-could-hit-yakima/article_c3fe8d18-429e-11e7-9396-67c7dd7bbd33.html), we see that the cuts are rougher and still in place. That sets the stage for people like Joey Gibson; his view does not imply that he is extreme in his thinking, yet the inequality is a much larger issue and it does set a stage that tends to lean towards extreme-right thinking.
Anti-government thinking in a setting where places like Seattle, Vancouver and Bellingham are taken care of whilst the rest is largely ignored is not a healthy way to move forward. The Sightline view on corporate sponsoring merely increases the sense of inequality. That is where (as I personally see it) the right-wing foundation comes from, and it also implies that Joey Gibson has no real chance. He is up against Maria Cantwell, who has shown herself to be pro-business and a successful job creator, and who stopped Arctic drilling, which makes her the additional sweetheart of the green parties. As a resident of Snohomish county and being pro-business, she has funding from King, Thurston and Clark County on her side, which is almost a third of her state. The pro-business part should also give her Bellingham and, if done correctly with the right agreements, should deliver Spokane to her, and at that point it is pretty much game over for Joey Gibson. So even as we see ‘Joey Gibson and plots’, the setting in Washington State is not ideal for him. Apart from the mere common sense that his idea is not one that will work, there will be decreased safety from his gun point of view and that will cost him votes as well, especially when one piece of evidence is shown that children would be endangered by his viewpoint, an issue that will come up with a certainty of close to 100%.

I do not like the approach he took. Not from the pro-gun point of view, but from the mere common sense that the installation of no gun-free zones is more than likely to be the start of more casualties. You see, the firearms death rate is low in Washington State, in the lowest tier of 3.4–9 per 100,000; Washington State sits exactly on the 9 border with 686 casualties. It only takes one event to put them in the 9.1–11.0 per 100,000 band, which takes the entire state to a higher tier, so one event and it is game over for Joey Gibson (source: CDC). In addition, the Washington State health services give us that 2008–2010 data shows 585 firearms casualties, of which only 119 were homicide, 9 were unintentional and the largest group was suicide with 455. In that regard gun banning would not bring any significant change, because when there is no gun, there will still be razors, sleeping tablets, a bathtub, or the three in combination with a nicely filled, soothing bathtub. So that will still happen one way or the other; considering that it is on par with motor vehicle crashes (both 8.6 per 100,000) gives additional weight to gun banning not making a difference in the state. Yet the Joey Gibson change is very likely to impact that in a very negative way, where he ends up defeating himself. The direct solution is also seen here: if the ATF had done its job (with proper resources and funding available) there is every chance that the suicide rate would have been positively influenced, and as that side is 77% of the firearms fatalities, a chunk of it could have been prevented as assistance to overcome mental hardship was given. Is that not an interesting overlooked fact? And it is not the only one; there are plenty more where that came from, fatalities all preventable by giving the ATF the right tools, resources and staff members.
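The tier arithmetic above can be checked in a few lines. This is my own rough sketch: the implied state population is derived from the 686 casualties and the quoted 9 per 100,000 rate, not taken from census data, and the ten-victim event is an invented example.

```python
# Back-of-the-envelope check of the CDC tier borders quoted above.
# The population is implied by the article's own numbers (686 deaths at
# 9.0 per 100,000), not an official census figure.

def rate_per_100k(deaths: int, population: float) -> float:
    """Deaths per 100,000 residents."""
    return deaths / population * 100_000

implied_population = 686 / 9.0 * 100_000   # roughly 7.6 million people

base_rate = rate_per_100k(686, implied_population)       # 9.00, top of the 3.4-9.0 tier
one_event = rate_per_100k(686 + 10, implied_population)  # a single ten-victim event

print(f"{base_rate:.2f} -> {one_event:.2f}")  # 9.00 -> 9.13, into the 9.1-11.0 tier
```

A single incident with ten victims is enough to tip the whole state into the next tier, which is the point being made.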



Filed under Finance, Media, Politics

Data illusions

Yesterday was an interesting day for a few reasons; one of the primary ones was an opinion piece in the Guardian by Jay Watts (@Shrink_at_Large). Like many articles, I initially considered it to be in opposition, yet when I reread it, this piece turned out to have all kinds of hidden gems and I had to ponder a few items for an hour or so. I love that! Any piece, article or opinion that makes me rethink my position is well worth reading. So this piece, called ‘Supermarkets spy on them now‘ (at https://www.theguardian.com/commentisfree/2018/may/31/benefits-claimants-fear-supermarkets-spy-poor-disabled), has several sides that require us to think and rethink. When we see a quote like “some are happy to brush this off as no big deal” we identify with too many parts; to me and to many it is just that, no big deal, but behind it are secondary issues ignored by the masses (en masse, as we might giggle), yet the truth is far from nice.

So what do we see as primary, and what sits behind it as secondary? First we see the premise “if a patient with a diagnosis of paranoid schizophrenia told you that they were being watched by the Department for Work and Pensions (DWP), most mental health practitioners would presume this to be a sign of illness. This is not the case today.” It is not about whether this is true or not; it is not a case of watching, being a watcher or even watching the watcher. It is what happens behind it all. So, when we recollect that dead-dropped donkey called Cambridge Analytica, which was all based on interacting and engaging on fear, consider what IBM and Google are able to do now through machine learning. This we see in an O’Reilly report called ‘The Evolution of Analytics‘ by Patrick Hall, Wen Phan, and Katie Whitson. Here we see the direct impact of programs like SAS (Statistical Analysis System) in the application of machine learning, on page 3 of ‘Machine Learning in the Analytic Landscape’ (not a page 3 of the Sun, by the way). For government we read “Pattern recognition in images and videos enhance security and threat detection while the examination of transactions can spot healthcare fraud“; you might think it is no big deal. Yet you are forgetting that it is more than the so-called implied ‘healthcare fraud‘. It is the abused setting of fraud in general and the eagerly awaited setting for ‘miscommunication’, whilst people en masse are now set in a wrongly categorised world, a world where assumption takes control and scores of people are pushed into defending their actions, an optional shift towards ‘guilty until proven innocent’, whilst those making the assumptions are clueless on many occasions yet now believe that they know exactly what they are doing. We have seen these kinds of bungles impact thousands of people in the UK and Australia.
It seems that Canada has a better system, where every letter with the content ‘I am sorry to inform you, but it seems that your system made an error‘ tends to overthrow such assumptions (yay for Canada today). So when we are confronted with: “The level of scrutiny all benefits claimants feel under is so brutal that it is no surprise that supermarket giant Sainsbury’s has a policy to share CCTV “where we are asked to do so by a public or regulatory authority such as the police or the Department for Work and Pensions”“, it is not merely the policy of Sainsbury’s; it is what places like the Department for Work and Pensions are going to do with machine learning and their version of classifications, whilst the foundation of true fraud is often not clear to them. So you want to set up a system without clarity and hope that the machine will constitute learning through machine learning? It can never work; the evidence is that the initial classification of any person in a fluid setting alters even under the best of conditions. Such systems are not able to deal with the chaotic life of any person outside a clear lifestyle cycle, with people on pensions (trying to merely get by) as well as those who are physically or mentally unhealthy. These are merely three categories where all kinds of cycles of chaos tend to intervene with daily life. Those people are now shown to be optionally targeted not just with a flawed system, but with a system where the transient workforce using those methods is unclear on what needs to be done, as the need changes with every political administration. A system under such levels of basic change is too dangerous to get linked to any kind of machine learning. I believe that Jay Watts is not misinforming us; I feel that even the writer has not yet touched on many unspoken dangers.
There is no fault here by the one who gave us the opinion piece; I personally believe that the quote “they become imprisoned in their homes or in a mental state wherein they feel they are constantly being accused of being fraudulent or worthless” is incomplete, yet the setting I refer to is mentioned at the very end. You see, I believe that such systems will push suicide rates to an all-time high. I do not agree with “be too kind a phrase to describe what the Tories have done and are doing to claimants. It is worse than that: it is the post-apocalyptic bleakness of poverty combined with the persecution and terror of constantly feeling watched and accused“. I believe it to be wrong because this is a flaw on both sides of the political aisle. Their state of inaction for decades forced the issue out, and as the NHS is out of money and is not getting any, the current administration is trying to find cash any way it can, because the coffers are empty, which now gets us to a BBC article from last year.

At http://www.bbc.com/news/election-2017-39980793, we saw “A survey in 2013 by Ipsos Mori suggested people believed that £24 out of every £100 spent on benefits was fraudulently claimed. What do you think – too high, too low?
Want to know the real answer? It is £1.10 for every £100
“. That is the dangerous political setting as we should see it: the assumption and belief that 24% goes to fraud when it is more realistic that roughly 1% is the actual figure. Let’s not be coy about it, because out of £172.3bn a 1% amount still remains a serious amount of cash, yet when you set it against the UK population the amount becomes a mere £25 or so per person; it merely takes one prescription to get to that amount, one missed on the government side and one wrongly entered on the patient’s side, and we are there. Yet in all that, how many prescriptions did you, the reader, require in the last year alone? When we get to that nitty-gritty level we are confronted with the task where machine learning will not offer anything but additional resources to double-check every claimant and offence. Now, we should all agree that machine learning and analysis will help in many ways, yet when it comes to ‘Claimants often feel unable to go out, attempt voluntary work or enjoy time with family for fear this will be used against them‘ we are confronted with a new level of data, and when we merely look at the fear of voluntary work or being with family we need to consider what we have become. So in all this we see a rightful investment into a system that in the long run will help automate all kinds of things and help us see where governments failed their social systems, yet we also see a system that costs hundreds of millions to look into an optional 1% loss, which at 10% of the losses might make perfect sense. Yet these systems are flawed from the very moment they are implemented, because the setting is not rational, not realistic, and in the end will bring more costs than anyone considered from day one.
So in the setting of finding ways to justify the 2015 ‘The Tories’ £12bn of welfare cuts could come back to haunt them‘, it will not merely fail, it will add £1 billion in costs of hardware, software and resources, whilst not getting the £12 billion in workable cutbacks; where exactly was the logic in that?
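The per-person arithmetic behind that £1.10-per-£100 figure is simple enough to sketch. The UK population of roughly 66 million is my assumption; the £172.3bn benefits bill and the BBC fraud rate come from the text above.

```python
# Rough per-person arithmetic for the benefit-fraud figures discussed above.
# The ~66 million population is an assumption; the rest comes from the text.
benefits_bill = 172.3e9      # total benefits spend, GBP
fraud_rate    = 1.10 / 100   # £1.10 in every £100, per the BBC figure
population    = 66e6         # approximate UK population (assumption)

fraud_total = benefits_bill * fraud_rate   # ≈ £1.9bn: serious money in absolute terms
per_person  = fraud_total / population     # ≈ £29: close to the article's "£25 per person"

print(f"£{fraud_total / 1e9:.2f}bn in total, £{per_person:.0f} per person")
```

Nearly two billion pounds in aggregate, yet the price of a prescription or two per head, which is exactly the contrast the paragraph above draws.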

So when we are looking at the George Orwell edition of ‘Twenty Eighteen‘, we all laugh and think it is no big deal, but the danger is actually twofold. The first I used and taught to students; it gets us the loss of choice.

The setting is that a supermarket needs to satisfy the needs of its customers, and based on the survey they run, they will keep items in a category (lollies, for example) that are rated ‘fantastic value for money‘ or ‘great value for money‘, or the top 25th percentile of the products, whichever is the larger set. In the setting with 5,000 responses, the issue was that the top 25th percentile now also included ‘decent value for money‘, so an additional 35 articles were kept in stock in the lollies category. This was the setting where I showed the value of what is known as user missing values. There were 423 people who had no opinion on lollies, who for whatever reason never bought those articles. Removing them from consideration, a choice merely based on actual responses, meant that the same exercise over the remaining 4,577 people gave a top 25th percentile holding only ‘fantastic value for money‘ and ‘great value for money‘, and within that setting those 35 articles were removed from that supermarket. Here we see the danger! What about those people who really loved one of those 35 articles, yet were not interviewed? The average supermarket does not have 5,000 visitors; it has, depending on the location, up to a thousand a day. More importantly, add a few elements and it is no longer about supermarkets but government institutions, and no longer about lollies but fraud classification. What if we are set in a category of ‘most likely to commit fraud‘ and ‘very likely to commit fraud‘, whilst people with a job and bankers are not included in the equation? We get a diminished setting of fraud from the very beginning.
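The lolly example can be made concrete with a small sketch. The rating counts below are illustrative inventions (only the 5,000 total and the 423 no-opinion respondents come from the text); the point is that whether ‘no opinion’ answers count toward the percentile base decides where the top-quartile cutoff lands.

```python
# Illustrative sketch of the user-missing-value effect described above.
# Ratings: 4 = fantastic, 3 = great, 2 = decent value for money, 1 = lower;
# None = no opinion. Only the totals (5,000 and 423) come from the text.
ratings = [4] * 500 + [3] * 700 + [2] * 1500 + [1] * 1877 + [None] * 423

def top_quartile_cutoff(values):
    """Rating at the boundary of the top 25% of the supplied responses."""
    ordered = sorted(values, reverse=True)
    return ordered[len(ordered) // 4]

# Treating 'no opinion' as just another (low) response keeps them in the base:
naive = top_quartile_cutoff([r if r is not None else 0 for r in ratings])

# Declaring them user-missing removes them before the percentile is computed:
cleaned = top_quartile_cutoff([r for r in ratings if r is not None])

print(naive, cleaned)  # 2 3: 'decent' makes the cut only in the naive version
```

Same shoppers, same answers, a different percentile base, and 35 products appear or vanish from the shelves. Swap ‘decent value for money’ for ‘likely to commit fraud’ and the stakes change entirely.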

Hold Stop!

What did I just say? Well, there is method to my madness. Two sources: the first, Slashdot.org (no idea who they were), gave us a reference to a 2009 book called ‘Insidious: How Trusted Employees Steal Millions and Why It’s So Hard for Banks to Stop Them‘ by B. C. Krishna and Shirley Inscoe (ISBN-13: 978-0982527207). Here we see “The financial crisis appears to be exacerbating fraud by bank employees: a new survey found that 72 percent of financial institutions say that in the last 12 months they have experienced a case of data theft by one of their workers“. Now, it is important to realise that I have no idea how reliable these numbers are, yet the book was published, so there will be a political player using it at some stage. This already undermines the academic reliability of fraud figures in general. Now, for an actually reliable source, we see KPMG, who gave us last year “KPMG survey reveals surge in fraud in Australia“, with “For the period April 2016 to September 2016, the total value of frauds rose by 16 percent to a total of $442m, from $381m in the previous six month period“. We see numbers, yet they are based on a survey, and how reliable were those giving their view? How much was assumption, unrecognised numbers, or ‘forecasted increases‘ that were not met? That issue was clearly brought to light by the Sydney Morning Herald in 2011 (at https://www.smh.com.au/technology/piracy-are-we-being-conned-20110322-1c4cs.html), where we see: “the Australian Content Industry Group (ACIG), released new statistics to The Age, which claimed piracy was costing Australian content industries $900 million a year and 8000 jobs“. Yet the issue is not merely the numbers given; the larger issue is “the report, which is just 12 pages long, is fundamentally flawed.
It takes a model provided by an earlier European piracy study (which itself has been thoroughly debunked) and attempts to shoe-horn in extrapolated Australian figures that are at best highly questionable and at worst just made up“, so the claim “4.7 million Australian internet users engaged in illegal downloading and this was set to increase to 8 million by 2016. By that time, the claimed losses to piracy would jump to $5.2 billion a year and 40,000 jobs” was a joke, to say the least. There we see the issue of fraud in another light: based on a different setting, the same model was used, whilst I am more and more convinced that the European model was likely flawed as well (a small reference to the Dutch Buma/Stemra setting of 2007–2010). So not only are the models wrong, the entire exercise gives us something that was never going to be reliable in any way, shape or form (personal speculation). In this we now have the entire machine learning question, the political setting of fraud, the speculated numbers involved, and what is ‘disregarded’ as fraud. We will end up with a scenario where we get 70% false positives (a rough assumption on my side) in a collective where checking those numbers will never be realistic, and the moment the parameters are ‘leaked’, the actual fraudulent people will change their behaviour, making detection of fraud less and less likely.
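A 70% false-positive share is less wild a guess than it may look: with a low base rate, even a decent classifier produces mostly false alarms. The sensitivity and false-positive rate below are illustrative assumptions on my part; only the ~1.1% fraud rate traces back to the BBC’s £1.10-per-£100 figure.

```python
# Why a low base rate makes "70% false positives" plausible.
# Sensitivity and false-positive rate are illustrative assumptions;
# the ~1.1% fraud base rate echoes the £1.10-per-£100 BBC figure.
base_rate   = 0.011   # fraction of claimants actually committing fraud
sensitivity = 0.90    # fraction of real fraud the model catches (assumed)
fp_rate     = 0.03    # fraction of honest claimants wrongly flagged (assumed)

claimants   = 1_000_000
true_flags  = base_rate * sensitivity * claimants        # ≈  9,900 real cases
false_flags = (1 - base_rate) * fp_rate * claimants      # ≈ 29,670 honest people

share_false = false_flags / (true_flags + false_flags)   # ≈ 0.75
print(f"{share_false:.0%} of all flags point at honest claimants")
```

Even a system that catches nine frauds in ten and wrongly flags only three honest claimants in a hundred still points three out of four of its accusations at innocent people, simply because honest claimants vastly outnumber fraudulent ones.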

How will this fix anything other than the revenue need of those selling machine learning? So when we look back at the chapter ‘Modern Applications of Machine Learning’ we see “Deploying machine learning models in real-time opens up opportunities to tackle safety issues, security threats, and financial risk immediately. Making these decisions usually involves embedding trained machine learning models into a streaming engine“, which is actually true. Yet when we also consider “review some of the key organizational, data, infrastructure, modelling, and operational and production challenges that organizations must address to successfully incorporate machine learning into their analytic strategy“, the element of data and data quality is overlooked on several levels, making the entire setting, especially in light of the piece by Jay Watts, a very dangerous one. So the full title, which I intentionally did not use in the beginning, ‘No wonder people on benefits live in fear. Supermarkets spy on them now‘, rests wholly on the known and almost guaranteed premise of data quality, and on knowing that the players in this field are slightly too happy to generalise and trivialise that issue. The moment that comes to light and the implementers are held accountable for data quality is when all those now hyping machine learning will change their tune instantly and give us all kinds of ‘party line‘ excuses that they are not responsible; issues that I personally expect they did not really highlight when they were all about selling that system.

Until data cleaning and data vetting get a much higher position on the analysis ladder, we are confronted with aggregated, weighted and ‘expected likelihood‘ generalisations, and those ‘flagged’ via such systems will live in constant fear that their shallow way of life stops because a too-highly-paid analyst stuffed up a weighting factor, condemning a few thousand people to be tagged for all kinds of reasons, not merely because they could optionally be part of the 1% that the government is trying to clamp down on, or was that 24%? We can believe the BBC, but can we believe their sources?

And if there is even a partial doubt on the BBC data, how unreliable are the aggregated government numbers?

Did I oversimplify the issue a little?




Filed under Finance, IT, Media, Politics, Science

Be not stupid

There is an article in the Guardian. Now, we all agree that anyone is entitled to their own views; that has been a given for the longest of times, and those reading my blog know that I have a different view at times, yet for the most part I remain neutral and non-attacking towards those with a different view; that’s how I roll.

Today is different. The article “‘Easy trap to fall into’: why video-game loot boxes need regulation” by Mattha Busby (@MatthaBusby) got to me. It is time for people to realise that when you are over 18, you are responsible for your own actions. So I have pretty much no patience with any American, Reddit user or not, who gives us “a Reddit user who claims to have spent $10,000“. If you are that stupid, you should not be allowed to play video games.

The Setting

To comprehend my anger, you need to realise the setting we see here. You see, loot boxes are not new. This goes all the way back to the early 1990s, when Richard Garfield created Magic: The Gathering. I was not really on board in the beginning, but I played the game. The issues connect when you realise how the product was sold. There was a starter kit (which we call the basic game); it had enough cards to start playing as well as the essential cards you need to play. To get ahead in the game you needed boosters, and here is where it gets interesting. Dozens of games work on the principle that Richard Garfield founded. A booster would have 9–13 cards (depending on the game): 1 (read: one) rare card (or better), 3 uncommon cards, and the rest common cards. I played several of these games, and in the end (after 20 boosters) it was merely about collecting the rare cards if you wanted a complete set. Some would not care about that and could still play the game. So this is not a new thing, and if you truly spent $10,000 you should not complain. If you have the money it is not an issue; if you did not, you are too stupid for words. In video games it is not new either. Mass Effect 3, the best multiplayer game ever (my personal view), had loot boxes as well; I am pretty sure it was among the first. Yes, you could buy them with money, or with Microsoft points. The third option was that you could gather points whilst playing (at the cost of $0) and use those earned points to buy loot boxes, the solution most people used. Over time you would end up with sensational goods to truly slice and dice the opponents, all gained through play time, no extra cash required.

So when I see places like VentureBeat (and the Guardian, of course) state issues like: “some people, policymakers, and regulators — including the gaming authorities in Belgium and Netherlands — that those card packs have are gambling“, I see these statements as moronic and I regard them as false representation. You see, that is not what it is about! When you see the attached picture, you see that these cards are sold EVERYWHERE. The issue is that the CCG card games are sold in shops, which means that revenue is TAXED. The online sales are not, and now policymakers are all up in arms because they lost out on a non-taxable ‘$1.25 billion during its last quarter even without releasing a major new game‘; that is the real issue and they are now all acting in falsehood. So, when I see “I am currently $15,800 in debt. My wife no longer trusts me. My kids, who ask me why I am playing Final Fantasy all the time, will never understand how I selfishly spent money I should have been using for their activities“, as well as “he became addicted to buying in-game perks, which he later described as ‘digital garbage’“, I merely see people without discipline, without proper control. So, without any regard for diplomacy, I will call them junkies, plain and simple. Junkies who have no idea just how stupid they are. And since when do we adjust policy for junkies? Since when are the 99% who hold themselves accountable, who have the proper discipline not to overspend, and some of whom (like me) never considered loot boxes in a game like Shadow of War, now being held to account, their gaming impact lessened because of junkies? Can anyone answer me that?

Now, we need to take one or two things into consideration. Are the FIFA 18 loot boxes set in a similar light? That is the one place where (seemingly) FIFA is in the wrong. You see, I have been searching for any information on what is in a FIFA loot box, but none is given. I believe that this lack is actually an issue, yet it could be resolved in 24 hours if Electronic Arts dedicated one page (considering it brings them $1.25 billion a quarter) to what is to be found in a loot box (rare, uncommon, common). The second part that I cannot answer (because I am not a soccer fan) is whether the game allows loot boxes to be earned through play, and finally: can the game be played without loot boxes? It seems like such a small alteration to make, especially given the fuss being made now. Some additional facts can be seen in Rolling Stone magazine of all places (at https://www.rollingstone.com/glixel/features/loot-boxes-never-ending-games-and-always-paying-players-w511655). So now we get a fuss from several nations, nations that have been all open and accepting of games like the Decipher CCGs Star Trek and Star Wars, Magic: The Gathering, The Lord of the Rings, My Little Pony, Harry Potter, Pokémon, and that list goes on for some time. By that logic they are all gambling, and in my view, I feel certain that these so-called politicians and limelight seekers will do absolutely NOTHING to get anything done, because the cards are subject to VAT and the online stuff is lost taxable revenue. That is what I personally see as the foundation of a corrupt administration.

You see, the fact is that it is not gambling. You buy something that comes in three categories: rare, uncommon and common. You ALWAYS get this in a setting of 1 rare, 3 uncommon and 5 common; which card you get is not a given, it is random, but you will always get that composition. Let’s for example state that the loot box is $7: you get one $3 card, three $1 cards and five $0.20 cards, so how is that gambling? Electronic Arts, until they update their website with a precise definition, might be in waters that are a little warmer, but that can be fixed by the end of the day. Perhaps they do have such a page, but Google did not find it.
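That fixed composition is easy to demonstrate. The sketch below uses the illustrative prices from the paragraph above (in cents, to avoid floating-point noise) and hypothetical card names; whatever the random draw produces, every box is worth exactly $7.00.

```python
# The fixed-composition argument in numbers: the draw is random, but the
# category mix (1 rare, 3 uncommon, 5 common) and hence the value never vary.
# Card names are hypothetical; prices are the text's example, in cents.
import random

PRICE_CENTS = {"rare": 300, "uncommon": 100, "common": 20}

def open_box(rares, uncommons, commons):
    """One booster: always 1 rare, 3 uncommon and 5 common cards."""
    return ([("rare", random.choice(rares))]
            + [("uncommon", c) for c in random.choices(uncommons, k=3)]
            + [("common", c) for c in random.choices(commons, k=5)])

def box_value(box):
    return sum(PRICE_CENTS[rarity] for rarity, _card in box)

rares, uncommons, commons = ["R1", "R2"], ["U1", "U2", "U3"], ["C1", "C2", "C3"]

# A thousand random boxes, one distinct value: 700 cents, exactly $7.00.
values = {box_value(open_box(rares, uncommons, commons)) for _ in range(1000)}
print(values)  # {700}
```

There is no jackpot and no bust, which is the distinction from a slot machine: the randomness only decides which cards fill the fixed slots, never what the box is worth.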

In addition, VentureBeat gave us (at https://venturebeat.com/2018/05/08/ea-ceo-were-pushing-forward-with-loot-boxes-in-face-of-regulation/) “EA will have to convince policymakers around the world that it is doing enough and that its mechanics are not the same as the kinds of games you’d find in a casino“, which should be easy, as these policymakers did absolutely nothing to stop CCGs like Pokémon and My Little Pony (truly games for minors). So we can state that this was never about the loot box; it was about missed taxable revenue, a side that all the articles seem to have left in the dark.

The Guardian has one additional gem. With: “A bill introduced in Minnesota last month would prohibit the sale of video games with loot boxes to under-18s and require a severe warning: “This game contains a gambling-like mechanism that may promote the development of a gaming disorder that increases the risk of harmful mental or physical health effects, and may expose the user to significant financial risk.”” Here I am in the middle. I think that Americans are not that bright at times, a point of view supported by the image of paper cups with the text ‘Caution: Hot’ to avoid liability if some idiot burns their mouth; we know that sanity is out of the window. Yet the idea that there should be a loot box warning is perhaps not the worst one. I think that EA could get ahead of the curve by clearly stating, in a readable font size, that ‘no loot boxes are needed to play the game‘, which is actually a more apt (and true) statement for Shadow of War; with FIFA 18, I do not know. You see, this is a changed venue: when you can add a world-class player to your team, the equation changes. Yet does it make the game more or less enjoyable? If I play NHL with my Capitals team and I get to add Mario Lemieux and Wayne Gretzky, my chances of getting the Stanley Cup go up, yet is that a real win or is that cheating? That is of course the other side, the side that game maker Ubisoft enabled in their Assassin’s Creed series. You could unlock weapons and gear for a mere $4; they clearly stated that the player would be able to unlock the options during the game, yet some people are not really gamers, mere players with a short attention span, and they want the hardware upfront. Enter the Civil War with an Uzi and a Remington, to merely coin a setting. Are they gamers, or are they cheaters? It is a fair question and there is no real answer. Some say that the game allowed them to do it, which is fair, and some say you need to earn the kills you make.
We can go at it from any direction, yet when we are confronted with mere junkies spending $15,800 on top of a $69 game, we are confronted with people so stupid, it makes me wonder how he got his wife pregnant in the first place. If the given debt of $15,800 is true then there should be a paper trail. In that regard I am all for the fact that there should be a spending limit of perhaps $500 a month, a random number, but the fact that there is a limit to spending is not the worst idea. In the end, you have to pay for the stuff, so having a barrier at that point could have imposed a limit on the spending. In addition, we can point at the quote “how I selfishly spent money I should have been using for their activities” and how that is the response any junkie makes, ‘Oh! I am so sorry‘, especially after the junkie got his/her fix.
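That payment barrier is trivial to build; the sketch below is a minimal, hypothetical version (the $500 cap, the class name and the user ids are all my own invention for illustration, not any store's actual API):

```python
from collections import defaultdict
from datetime import date

class SpendingGuard:
    """Tracks microtransaction spend per user and blocks purchases
    that would push the running monthly total past a cap."""

    def __init__(self, monthly_cap=500.0):
        self.monthly_cap = monthly_cap
        # (user_id, year, month) -> amount spent so far that month
        self._spent = defaultdict(float)

    def try_purchase(self, user_id, amount, on=None):
        """Return True and record the spend if it fits under the cap,
        otherwise refuse the purchase."""
        on = on or date.today()
        key = (user_id, on.year, on.month)
        if self._spent[key] + amount > self.monthly_cap:
            return False  # blocked: the cap would be exceeded
        self._spent[key] += amount
        return True

guard = SpendingGuard(monthly_cap=500.0)
print(guard.try_purchase("player-1", 450.0, on=date(2018, 6, 1)))   # True
print(guard.try_purchase("player-1", 100.0, on=date(2018, 6, 15)))  # False, 550 > 500
print(guard.try_purchase("player-1", 100.0, on=date(2018, 7, 1)))   # True, new month
```

A real store would of course enforce something like this server-side at the payment step, which is exactly the barrier point I mean.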

The Guardian gives in addition an actually interesting side: “Hawaiian congressman Chris Lee said “are specifically designed to exploit and manipulate the addictive nature of human psychology”“, it is a fair point to make. Are ‘game completionists’ OCD people? Can the loot box be a vessel of wrongdoing? It might, yet that still does not make it gambling or illegal, which gets us to the Minnesota setting of a warning on the box. It is an interesting option and I think that most game makers would not oppose that, because you basically are not keeping loot boxes a secret and that might be a fair call to make, as long as we are not going overboard with messages like: “This game is a digital product, it requires a working computer to install and operate“, because at that point we have gone overboard again. This as a nice contrast against: “In the Netherlands, meanwhile, lawmakers have said that at least four popular games contravene its gambling laws because items gleaned from loot box can be assigned value when they are traded in marketplaces“, which is another issue. You see, when you realise that “you can’t sell any digital content that you aren’t authorized to sell” and as we also saw in Venture Beat ““While we forbid the transfer of items and in-game currency outside of the games, we also actively seek to eliminate that where it’s going on in an illegal environment,”“, we see a first part where we can leave it to the Dutch to cater to criminals on any average working day, making the lawmakers (from my personal point of view) slightly short-sighted.

So, in the end Mattha had a decent article, yet the foundation (the CCG games), the creators of the founding concept, was left outside the basket of consideration, which is a large booboo, especially when we realise that those games are still for sale in all these complaining countries and that in that very same regard they are not considered gambling, which sets the stage that this was never about gambling, but about several desperate EU nations, as well as the US mind you, all realising that loot boxes are billions of close to non-taxable revenue. That is where the issue holds and even as I do not disagree with the honourable men from both Hawaii and Minnesota, the larger group of policy players are all about the money (and the linked limelight), an issue equally left in the dark. There is one issue against Electronic Arts, yet they can fix that before the virtual ink on the web page has dried, so that issue is non-existent as well soon enough.

It’s all in the game and this discussion will definitely be part of the E3 2018, it has reached too many governments not to do so. I reckon that on E3 Day Zero, EA and Ubisoft need to sit down in a quiet room with cold drinks and talk loot box tactics, in that regard they should invite Richard Garfield into their meeting as an executive consultant. He might give them a few pointers to up the profit whilst remaining totally fair to the gamers, a win-win for all I say! Well, not for the politicians and policy makers, but who cares about them? For those who do care about those people, I have a bridge for sale with a lovely view of Balmain Sydney, going cheap today only!



Filed under Finance, Gaming, IT, Law, Media, Politics

It’s a kind of Euro

In Italy things are off the walls, now we see ‘New elections loom in Italy‘ (at https://www.theguardian.com/world/2018/may/27/italys-pm-designate-giuseppe-conte-fails-to-form-populist-government), where it again is about currency; this time it is Italy that has an issue with its ‘Eurozone future‘. In this the escalation is “the shock resignation of the country’s populist prime minister-in waiting, Giuseppe Conte, after Italy’s president refused to accept Conte’s controversial choice for finance minister“, there is a setting that is given, I have written about the folly of the EU, or better stated, the folly it became. I have been in favour of Brexit for a few reasons, yet here, in Italy, the setting is not the same. “Sergio Mattarella, the Italian president who was installed by a previous pro-EU government, refused to accept the nomination for finance minister of Paolo Savona, an 81-year-old former industry minister who has called Italy’s entry into the euro a “historic mistake”“, now besides the fact that an 81-year-old has no business getting elected into office for a number of reasons, the anti-Euro views of Paolo Savona have been known for a long time. So as pro-EU Sergio Mattarella decides to refuse anyone who is anti-EU in office, we need to think critically. Is he allowed to do that? There is of course a situation where that could backfire, yet we all need to realise that Sergio Mattarella is an expert on parliamentary procedure, highly educated and highly intelligent with decades of government experience, so if he sets his mind to it, it will not happen. Basically he can delay anti-EU waves for 8 months until after the next presidential elections. If he is not re-elected, the game changes. The EU has 8 months to satisfy the hearts and minds of the Italian people, because at present those options do not look great. The fact that the populist choices are all steering towards non-EU settings is a nightmare for Brussels.
They were able to calm the storm in France, but Italy was at the tail end of all the elections, we always knew that, and I even pointed out 2 years ago that this was an option. I did mention that it was an unlikely one; the escalating part is not merely the fact that this populist setting is anti-EU; it is actually much more strongly anti-Germany, which is a bigger issue. Whether there is an EU or not, the European nations need to find a way to work together. Having 2 of the 4 largest players at odds is not really a setting that works for Europe. Even if most people tend to set Italy in a stage of Pizza, Pasta and Piffle, Italy has shown itself to be a global player and a large one. It has its social issues and the bank and loan debts of Italy don’t help any, but Italy has had its moments throughout the ages and I feel certain that Italy is not done yet, so in that respect finding common ground with Italy is the better play to make.

In all this President Sergio Mattarella is not nearly done, we now know that Carlo Cottarelli has been asked to set the stage to become the next Prime Minister of Italy. The Italian elections will not allow for an anti-EU government to proceed to leave the Euro. Sergio’s response was that he had rejected the candidate, 81-year-old Eurosceptic economist Paolo Savona, because he had threatened to pull Italy from the single currency: “The uncertainty over our position has alarmed investors and savers both in Italy and abroad,” he said, adding: “Membership of the euro is a fundamental choice. If we want to discuss it, then we should do so in a serious fashion.” (at http://news.trust.org//item/20180527234047-96z65/), so here we all are, the next one that wants to leave the Euro and now there is suddenly an upheaval, just like in France. Here the setting is different, because the Italian President is pro-EU and he is doing what is legally allowed. We can go in many directions, but this was always going to be an unsettling situation. I knew that for 2 years, although at that stage the chance of Italy leaving the EU was really small. Europe has not been able to prosper its economy, it merely pumped 3 trillion euro into a situation that was never going to work and now that 750 million Europeans realise that they all need to pay 4,000 Euro just to stay where they are right now, that is angering more and more Europeans. The French were warned ahead, yet they decided to have faith in an investment banker above a member of Front National; Italy was not waiting and is now in a stage of something close to civil unrest, which will not help anyone either. Yet the economic setting for Italy could take a much deeper dive and not in a good way. The bigger issue is not just that Carlo Cottarelli is a former International Monetary Fund director.
It is that there are more and more issues showing that the dangers are rising, not stabilising or subsiding, and that is where someone optionally told President Sergio Mattarella to stop this at all costs. Part of this was seen in April (at https://www.agoravox.fr/actualites/economie/article/a-quand-l-eclatement-de-la-203577). Now the article is in French, so there is that, but it comes down to: “Bridgewater, the largest hedge fund (investment fund – manages $160 billion of assets) of the world has put $22 billion against the euro area: the short (“seller”) positions of the fund prove it bet against many European (Airbus), German (Siemens, Deutsche Bank), French (Total, BNP Paribas) and Italian (Intesa Sanpaolo, Enel and Eni) companies, among others. The company is not known to tackle particular companies, but rather to bet on the health of the economy in general“. So there is a partial setting where the EU is now facing its own version of what we saw in the cinema in 2015 with The Big Short. Now after we read the intro, we need to see the real deal. It is seen with “Since 2011, €4 billion has been injected into the euro zone (that is to say into commercial banks) by the European Central Bank (ECB), which represents more than a third of the region’s GDP. The majority of this currency is mainly in Germany and Luxembourg, which, you will agree, are not the most difficult of the area. More seriously, much of this liquidity has not financed the real economy through credit to individuals and businesses. Instead, the commercial banks have saved €2,000bn of this fresh money on their account at the ECB until the end of 2017 (against €300bn at the beginning of 2011) to “respect their liquidity ratio” (to have enough liquid deposits in a currency crisis). As in the United States, quantitative easing allowed the central bank to bail out private banks by buying back their debts.
In other words, the debts of the private sector are paid by the taxpayer without any return on investment. At the same time, François Villeroy de Galhau, governor of the Banque de France, called for less regulation and more bank mergers and acquisitions in the EU, using the US banking sector as a model.” Here we see in the article by Géopolitique Profonde that the setting of a dangerous situation is escalating, because we aren’t in it for a mere 4 billion, the Eurozone is in it for €3,000 billion. An amount that surpasses the economic value of several Euro block nations, which is almost impossible to sustain with the UK moving away; if Italy does the same thing, the party ends rather quickly with no options and no way to keep the Euro stable or at its levels. It becomes a currency at a value that is merely half the value of the Yen, wiping out retirement funds, loan balances and credit scores overnight. The final part is seen with “The ECB also warns that the Eurozone risks squarely bursting in the next crisis if it is not strengthened. In other words, Member States have to reform their economies by then, create budget margins and integrate markets and services at the zone level to better absorb potential losses without using taxpayers. A fiscal instrument such as a euro zone budget controlled by a European finance minister, as defended by President Emmanuel Macron, would also help cope with a major economic shock that seems inevitable. Suffice to say that this is problematic given the lack of consensus on the subject and in particular a German reluctance. The European Central Bank issued the idea in late 2017, long planned by serious economists, to abolish the limit of €100,000 guaranteed in case of a rescue operation or bank bankruptcy (Facts & Document No. 443, 15/11/17-15/12/17 p.8 and 9)” (the original article has a lot more, so please read it!)

It now also shows (read: implies) a second part not seen before. With ‘The European Central Bank issued the idea in late 2017, long planned by serious economists, to abolish the limit of €100,000 guaranteed in case of a rescue operation or bank bankruptcy‘, it implies that Emmanuel Macron must have been prepped on a much higher level and did not merely come in at the 11th hour; ‘the idea issued late 2017’ means that it was already in motion for consideration no later than 2016, so when Marine Le Pen was gaining and ended up as a finalist, the ECB must have really panicked. It implies that Emmanuel Macron was a contingency plan in case the entire mess went tits up, and it basically did. Now they need to do it again under the eyes of scrutiny from anti-EU groups whilst Italy is in a mess that could double down on the dangers and risks that the EU is facing. That part is also a consideration when we see the quote by Hans-Werner Sinn, currently the President of the Ifo Institute for Economic Research: “I do not know if the euro will last in the long run, but its operating system is doomed“, yet that must give the EU people in Brussels the strength they need to actually fix their system (no, they won’t). The question becomes how far will the ECB go to keep the Eurozone ‘enabled’ whilst taking away the options from national political parties? That is the question that matters, because that is at play. Even as Germany is now opposing reforms, mainly because Germany ended up in a good place after it enforced austerity when it would work, and that worked, the Germans have Angela Merkel to thank for that; yet the other nations (like 24 of them) ignored all the signs and decided to listen to economic forecast people pretending to be native American Shamans, telling them that they can make it rain on command, a concept that did not really quite pan out, did it?
Now the reforms are pushed because there were stupid people ignoring the signs and not acting preventively when they could; now the Eurozone is willing to cater to two dozen demented economists, whilst pissing off the one economy that tightened its belt many years ago to avoid what is happening right now. You see, when the reform goes through, Berlin gets confronted with a risk-sharing plan and ends up shouldering the largest proportion of such a machine, a mechanism that will avoid the embarrassment of those two dozen Dumbos (aka: numnuts, or more academically stated ‘someone who regularly botches a job, event, or situation’), whilst those people are reselling their idea as ‘I have a way where you need not pay any taxes at all‘ to large corporations, getting an annual 7-figure income for another 3-7 years. How is that acceptable or fair?

So we are about to see a different Euro, one losing value due to QE, due to Italian unrest and against banks that have pushed their margins in the way US banks have them, meaning that the next 2 years we will most likely see off the wall bonus levels for bankers surpassing those from Wall Street likely for the first time in history, at the end of that rainbow, those having money in Europe might not have that much left. I admit that this is pure speculation from my part, yet when you see the elements and the settings of the banks, how wrong do you think I will be in 2019-2020?

So when we go back to the Guardian article at the beginning and we take a look at two quotes, the first “As the European commission unveiled its economic advice to member states last week, the body’s finance commissioner, Pierre Moscovici, said he was hoping for “cooperation on the basis of dialogue, respect and mutual trust”“. I go with ‘What trust?‘ and in addition with ‘cooperation on the basis of dialogue merely implies that Pierre Moscovici is more likely not to answer questions and bullshit his way around the issue‘ and as former French Minister of Economy he could do it; he saw Mark Zuckerberg get through a European meeting never answering any questions and he reckons he is at least as intelligent as Mark Zuckerberg. When we see “Cecilia Malmström, said “there are some things there that are worrying” about Italy’s incoming government“, she sees right: the current Italy is actually a lot less Euro-minded than the setting was in 2016-2017, so there is a setting of decreased trust that was never properly dealt with, the EU commissions left that untended for too long and now they have an even larger issue to face. So that bright Svenska Flicka is seeing the issues rise on a nearly hourly basis and even as the play goes nicely for now, that will change. I think that in this Matteo Salvini played the game wrong; instead of offering an alternative to Paolo Savona and replacing him after Sergio Mattarella is not re-elected, the game could have continued. Now they are going head to head, where Matteo is nowhere near as experienced as Sergio, so that is a fight he is unlikely to win, unless he drops Italy into a stage of civil unrest, which is not a good setting for either player.

We cannot tell what will happen next, but for the near future (June-September), it is unlikely to be a pretty setting, we will need to take another look at the Italian economic setting when the dust settles.



Filed under Finance, Media, Politics

Grand Determination to Public Relation

It was given yesterday, but it started earlier, it has been going on for a little while now and some people are just not happy about it all. We see this (at https://www.theguardian.com/technology/2018/may/25/facebook-google-gdpr-complaints-eu-consumer-rights), with the setting ‘Facebook and Google targeted as first GDPR complaints filed‘; they are the initial companies. It is a surprise that Microsoft didn’t make the first two in all this, so they will likely get a legal awakening coming Monday. When we see “Users have been forced into agreeing new terms of service, says EU consumer rights body”, under such a setting it is even more surprising that Microsoft did not make the cut (for now). So when we see: “the companies have forced users into agreeing to new terms of service; in breach of the requirement in the law that such consent should be freely given. Max Schrems, the chair of Noyb, said: “Facebook has even blocked accounts of users who have not given consent. In the end users only had the choice to delete the account or hit the agree button – that’s not a free choice, it more reminds of a North Korean election process.”“, which is one way of putting it. The GDPR is a monster of well over 55,000 words, roughly 90 pages. The New York Times (at https://www.nytimes.com/2018/05/15/opinion/gdpr-europe-data-protection.html) stated it best almost two weeks ago when they gave us “The G.D.P.R. will give Europeans the right to data portability (allowing people, for example, to take their data from one social network to another) and the right not to be subject to decisions based on automated data processing (prohibiting, for example, the use of an algorithm to reject applicants for jobs or loans). Advocates seem to believe that the new law could replace a corporate-controlled internet with a digital democracy. There’s just one problem: No one understands the G.D.P.R.“

That is not a good setting, it tends to allow for ambiguity on a much higher level and in light of privacy that has never been a good thing. So when we see “I learned that many scientists and data managers who will be subject to the law find it incomprehensible. They doubted that absolute compliance was even possible” we are introduced to the notion that our goose is truly cooked. The info is at https://www.eugdpr.org/key-changes.html, and when we dig deeper we get small issues like “GDPR makes its applicability very clear – it will apply to the processing of personal data by controllers and processors in the EU, regardless of whether the processing takes place in the EU or not“, and when we see “Consent must be clear and distinguishable from other matters and provided in an intelligible and easily accessible form, using clear and plain language. It must be as easy to withdraw consent as it is to give it” we tend to expect progress and a positive wave, so when we consider Article 21 paragraph 6, where we see: “Where personal data are processed for scientific or historical research purposes or statistical purposes pursuant to Article 89(1), the data subject, on grounds relating to his or her particular situation, shall have the right to object to processing of personal data concerning him or her, unless the processing is necessary for the performance of a task carried out for reasons of public interest“, it reflects on Article 89 paragraph 1, now we have ourselves a ballgame. 
You see, there is plenty of media that falls in that category, there is plenty of ‘Public Interest‘, yet when we take a look at that Article 89, we see: “Processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes, shall be subject to appropriate safeguards, in accordance with this Regulation, for the rights and freedoms of the data subject.“, so what exactly are ‘appropriate safeguards‘ and who monitors them, or who decides on what is an appropriate safeguard? We also see “those safeguards shall ensure that technical and organisational measures are in place in particular in order to ensure respect for the principle of data minimisation“; you merely have to look at market research and data manipulation to see that not happening any day soon. Merely setting out demographics and their statistics makes minimisation an issue often enough. We get a partial answer in the final setting “Those measures may include pseudonymisation provided that those purposes can be fulfilled in that manner. Where those purposes can be fulfilled by further processing which does not permit or no longer permits the identification of data subjects, those purposes shall be fulfilled in that manner.” Yet pseudonymisation is not all it is cracked up to be. When we consider the image (at http://theconversation.com/gdpr-ground-zero-for-a-more-trusted-secure-internet-95951), consider the simple example of the NHS: as a patient is admitted to more than one hospital over a time period, that research is no longer reliable, as the same person would end up with multiple pseudonym numbers, making the process a lot less accurate. OK, I admit ‘a lot less‘ is overstated in this case, yet is that still the case when it is on another subject, like office-home travel analyses? What happens when we see loyalty cards, membership cards and student card issues?
At that point, their anonymity is a lot less guaranteed. More importantly, we can accept that those firms will bend over backwards to do the right thing, yet at what stage is anonymisation expected and what is the minimum degree here? Certainly not before the final reports are done; at that point, what happens when the computer gets hacked? What exactly was an adequate safeguard at that point?
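The multiple-pseudonym problem is easy to demonstrate. Here is a minimal sketch, assuming each hospital derives pseudonyms by hashing the patient identifier with its own secret salt; the scheme, the salts and the NHS-style number are all invented for illustration:

```python
import hashlib

def pseudonymise(patient_id: str, site_salt: str) -> str:
    """Derive a pseudonym by hashing the identifier with a per-site secret."""
    digest = hashlib.sha256((site_salt + patient_id).encode("utf-8"))
    return digest.hexdigest()[:12]

patient = "943-476-5919"  # made-up NHS-style number

at_hospital_a = pseudonymise(patient, "hospital-A-secret")
at_hospital_b = pseudonymise(patient, "hospital-B-secret")

# Same patient, two different pseudonyms: the two admissions can no
# longer be linked as one person, which is what skews the research.
print(at_hospital_a == at_hospital_b)  # False
```

Linking the records back together would require a shared salt or a trusted linkage service, and either weakens the very anonymity the safeguard was meant to provide.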

Article 22 is even more fun to consider in light of banks. So when we see: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her“, consider that when a person applies for a bank loan, a person interacts and enters the data; when the banker gets the results and we no longer see an approved/denied, but a scale, and the banker states ‘Under these conditions I do not see a loan as a viable option for you, I am so sorry to give you this bad news‘, at what point was it a solely automated decision? Telling the story, or giving the story based on a credit score, where is it automated and can that be proven?
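To make the ‘scale, not approved/denied’ point concrete, here is a toy sketch; the weights, thresholds and wording are entirely invented, no real bank scores this way:

```python
def credit_score(income, debts, defaults):
    """Toy scoring model: the automated part only produces a number."""
    score = 600 + income / 1000.0 - debts / 500.0 - defaults * 120
    return max(300.0, min(850.0, score))

def decision(score, banker_override=None):
    """The outcome is phrased by a human; in the grey zone the banker
    decides, so where exactly is the 'solely automated' decision?"""
    if banker_override is not None:
        return banker_override        # human judgement in the loop
    if score >= 700:
        return "approved"
    if score >= 620:
        return "referred to banker"   # grey zone
    return "denied"

s = credit_score(income=52000, debts=30000, defaults=1)
print(round(s), decision(s))  # 472 denied
```

The moment the banker can override or merely rephrase the number, Article 22's “based solely on automated processing” becomes very hard to pin down.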

But fear not, paragraph 2 gives us “is necessary for entering into, or performance of, a contract between the data subject and a data controller;” like applying for a bank loan for example. So when is it an issue, when you are being profiled for a job? When exactly can that be proven that this is done to yourself? And at what point will we see all companies reverting to the Apple approach? You no longer get a rejection, no! You merely are not the best fit at present time.

Paragraph 2c of that article is even funnier. So when I see the exception “is based on the data subject’s explicit consent“, consider the employer stating: ‘We cannot offer you the job until you have passed certain requirements that force us to make a few checks; to proceed with the job application, you will have to give your explicit consent. Are you willing to do that at this time?’ When it is about a job, how many people will say no? I reckon the one extreme case is Dopey the dwarf not explicitly consenting to drug testing for all the imaginable reasons.

And in all this, the NY Times is on my side, as we see “the regulation is intentionally ambiguous, representing a series of compromises. It promises to ease restrictions on data flows while allowing citizens to control their personal data, and to spur European economic growth while protecting the right to privacy. It skirts over possible differences between current and future technologies by using broad principles“. I do see a positive point: when this collapses (read: falls over might be a better term), when we see the EU having more and more issues trying to get global growth, the data restrictions could potentially set a level of discrimination for those inside and outside the EU, making it no longer an issue. What do you think happens when EU people get a massive boost of options under LinkedIn and this setting is not allowed on a global scale, how long until we see another channel that remains open and non-ambiguous? I do not know the answer; I am merely posing the question. I don’t think that the GDPR is a bad thing; I merely think that clarity should have been at the core of it all and that is the part that is missing. In the end the NY Times gives us a golden setting, with “we need more research that looks carefully at how personal data is collected and by whom, and how those people make decisions about data protection. Policymakers should use such studies as a basis for developing empirically grounded, practical rules“, that makes perfect sense and in that we could see the start; there is every chance that we will see a GDPRv2 no later than early 2019, before 5G hits the ground, at which point the GDPR could end up being a charter that is globally accepted, which makes up for all the flaws we see, or the flaws we think we see, at present.

The final part we see in Fortune (at http://fortune.com/2018/05/25/ai-machine-learning-privacy-gdpr/). You see, even as we think we have cornered it with ‘AI Has a Big Privacy Problem and Europe’s New Data Protection Law Is About to Expose It‘, we need to take one step back: it is not about the AI, it is about machine learning, which is not the same thing. With machine learning it is about big data, because when we realise that “Big data challenges purpose limitation, data minimization and data retention–most people never get rid of it with big data,” said Edwards. “It challenges transparency and the notion of consent, since you can’t consent lawfully without knowing to what purposes you’re consenting… Algorithmic transparency means you can see how the decision is reached, but you can’t with [machine-learning] systems because it’s not rule-based software“, we get the first whiff of “When they collect personal data, companies have to say what it will be used for, and not use it for anything else“; so the criminal will not allow us to keep their personal data, and the system cannot act to create a profile to trap the fraud-driven individual, as there is no data to learn from when fraud is being committed, a real win for organised crime, even if I say so myself. In addition, the statement “If personal data is used to make automated decisions about people, companies must be able to explain the logic behind the decision-making process“ comes close to a near impossibility. In the age where the development of AI uses machine learning to get there, the EU just pushed themselves out of the race as they will not have any data to progress with; how is that for a Monday morning wakeup call?
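The rule-based versus machine-learned contrast in that Fortune quote can be shown in a few lines. This is a sketch with an invented toy fraud dataset and a bare perceptron, no real library or production model implied:

```python
# A rule-based check: the logic can be read straight off and explained.
def rule_based_flag(amount, country_risk):
    return amount > 0.6 and country_risk > 0.5

# A "learned" check: similar behaviour, but encoded as fitted weights.
def train_perceptron(samples, epochs=50, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# toy data: (scaled amount, country risk) -> fraud flag
data = [((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.3, 0.9), 0), ((1.0, 0.9), 1)]
w, b = train_perceptron(data)

# The model classifies the toy data correctly, yet the "logic behind
# the decision" is just three numbers, not an explanation a company
# can hand over to a regulator.
print("learned weights:", w, "bias:", b)
```

Even in this tiny case the decision boundary lives in the fitted numbers; scale that up to millions of parameters and the GDPR's explainability demand is exactly the near impossibility described above.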



Filed under IT, Law, Media, Politics, Science