Category Archives: IT

Ghost in the Deus Ex Machina

James Bridle is treating the readers of the Guardian to a spotlight event. It is a fantastic article that you must read. Even as it starts with “Technology is starting to behave in intelligent and unpredictable ways that even its creators don’t understand. As machines increasingly shape global events, how can we regain control?” I am not certain that it is correct; it is merely a very valid point of view. This setting is being pushed even further by places like Microsoft Azure, Google Cloud and AWS; we are moving into new territories and the experts required have not been schooled yet. It is (as I personally see it) the consequence of next generation programming on the framework of cloud systems that have thousands of additional unused, or un-monitored, parameters (read: some of them mere properties), and the scope of these systems is growing. Each developer is making their own app-box and they are working together, yet in many cases hundreds of properties are ignored, giving us weird results. There is actually (from the description James Bridle gives) an early 90’s example, which is not the same, but it illustrates the event.

A program had window settings and sometimes there would be a ghost window. There was no explanation, and no one could figure out why it happened, because it did not always happen, yet it could be replicated. In the end, the programmer had been lazy and had created a global variable with the same name as a visibility property, and due to a glitch that setting got copied. When the system did a reset on the window, all but a few specific properties were reset. You see, those elements should have been either ‘true’ or ‘false’, but that was not the case; those elements had the initial value of ‘null’, yet the reset would not allow for that, so once given a reset they would not return to the ‘null’ setting but would keep whatever value they last held. It was fixed at some point, but the logic remains: a value could not return to ‘null’ unless specifically programmed to. Over time these systems got more intelligent and that issue has not returned; such is the evolution of systems. Now it becomes a larger issue: we have systems that are better, larger and in some cases isolated. Yet, is that always the case? What happens when an error level surpasses two systems? Is that even possible? Now, most people will state that I do not know what I am talking about. Yet they forget that any system is merely as stupid as its maker allows it to be, so in 2010 Sha Li and Xiaoming Li from the Dept. of Electrical and Computer Engineering at the University of Delaware gave us ‘Soft error propagation in floating-point programs‘, which gives us exactly that. You see, the abstract gives us “Recent studies have tried to address soft errors with error detection and correction techniques such as error correcting codes and redundant execution. However, these techniques come at a cost of additional storage or lower performance. In this paper, we present a different approach to address soft errors. 
We start from building a quantitative understanding of the error propagation in software and propose a systematic evaluation of the impact of bit flip caused by soft errors on floating-point operations“. We can translate this into ‘an option to deal with shoddy programming‘, which is not entirely wrong, but the essential truth is that hardware makers, OS designers and application makers all have their own error systems; each of them has a much larger system than any requires, and some overlap and some do not. The issue is, speculatively, seen in ‘these techniques come at a cost of additional storage or lower performance‘. Now consider the greed-driven makers that do not want to sacrifice storage and will not hand over performance, not one way, not the other way, but want a system that tolerates either way. Yet this still has a level one setting (Cisco joke) where hardware is ruler, so the settings will remain, and it merely takes one third-party developer using some specific uncontrolled error hit, with automated assumption-driven slicing and dicing to avoid storage as well as performance costs; once given to the hardware, it will not forget. So now we have some speculative ‘ghost in the machine’, a mere collection of error settings and properties waiting to be interacted with. Don’t think that this does not exist; the paper sheds light on this in part with: “some soft errors can be tolerated if the error in results is smaller than the intrinsic inaccuracy of floating-point representations or within a predefined range. We focus on analysing error propagation for floating-point arithmetic operations. Our approach is motivated by interval analysis. We model the rounding effect of floating-point numbers, which enable us to simulate and predict the error propagation for single floating-point arithmetic operations for specific soft errors. 
In other words, we model and simulate the relation between the bit flip rate, which is determined by soft errors in hardware, and the error of floating-point arithmetic operations“. That I can illustrate with my earliest errors in programming (decades ago). With Borland C++ I got my first taste of programming, and in assumption mode I made my first calculation, which gave in the end: 8/4=2.0000000000000003; at that point (1991) I had no clue about floating-point issues. I did not realise that this was merely the machine and me not giving it the right setting. So now that we have all learned that part, we forget that all these new systems have their own quirks, and they have hidden settings that we basically do not comprehend, as the systems are too new. This all interacts with an article in the Verge from January, whose title ‘Google’s new cloud service lets you train your own AI tools, no coding knowledge required‘ is a bit of a giveaway. Even when we see: “Currently, only a handful of businesses in the world have access to the talent and budgets needed to fully appreciate the advancements of ML and AI. There’s a very limited number of people that can create advanced machine learning models”, it is not merely that part; behind it were the makers of the systems and the apps that allow you to interface, and that is where we see the hidden parts that will not be uncovered for perhaps years or decades. That is not a flaw from Google, or an error in their thinking. The mere realisation of ‘a long road ahead if we want to bring AI to everyone‘, in light of the better programmers, the clever people and the mere wildcards who turn 180 degrees in a one-way street, cannot be predicted, and there will always be one who does so, because they figured out a shortcut.
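The bit-flip effect the paper studies is easy to see in a few lines of Python. This is my own hedged sketch, not the authors’ method: it flips a single bit of an IEEE-754 double and shows that the damage depends entirely on where the flip lands.

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Return x with one bit of its IEEE-754 double representation flipped."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (result,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return result

# A flip in the lowest mantissa bit is barely visible...
print(flip_bit(1.5, 0))    # 1.5000000000000002
# ...while a flip in an exponent bit (bit 61 here) shifts the value
# by hundreds of orders of magnitude.
print(flip_bit(1.5, 61))
```

Interval analysis, which motivates the paper’s approach, is about bounding how such an error propagates through subsequent arithmetic; the point here is merely that the same single-bit event can be harmless or catastrophic.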

A small sidestep

When we consider risk-based thinking and development, we tend to think in opposition, because it is not the issue of risk, or the given of opportunity. We start in the flaw that we see differently on what constitutes risk. Even as the makers all think the same, the users do not always behave that way. For this I need to go back to the late 80’s, when I discovered that certain books in the Port of Rotterdam were cooked. No one had figured it out, but I recognised one part through my Merchant Naval education. It was the one rule no one looked at in those days; programmers just were not given that element. In a port there is one rule that computers could not comprehend in those days: the concept of ‘Idle Time’ cannot ever be a linear one. Once I saw that, I knew where to look. So when we get back to risk management issues, we see ‘An opportunity is a possible action that can be taken, we need to decide. So this opportunity requires we decide on taking action and that risk is something that actions enable to become an actual event to occur but is ultimately outside of your direct control‘. Now consider that risk changes by the tide at a seaport, but we forget that in opposition to a king tide, there is at times also a neap tide. A ‘supermoon’ is an event that makes the low tide even lower. So now we see the risk of getting beached for up to 6 hours, because the element was forgotten. The fact that it can happen once every 18 months makes the risk low, and it does not impact everyone everywhere, but that setting shows that once someone takes a shortcut, the dangers (read: risks) of events are intensified. So when NASA gives us “The farthest point in this ellipse is called the apogee. Its closest point is the perigee. During every 27-day orbit around Earth, the Moon reaches both its apogee and perigee. 
Full moons can occur at any point along the Moon’s elliptical path, but when a full moon occurs at or near the perigee, it looks slightly larger and brighter than a typical full moon. That’s what the term “supermoon” refers to“. So now the programmer needed a space monkey (or tables), and when we consider the shortcut, he merely needed them once every 18 months; in the life cycle of a program that means he faced the risk merely 2-3 times during the lifespan of the application. So tell me, how many programmers would have taken the shortcut? Now this is the setting we see in optional machine learning, with that part accepted and the pragmatic ‘Let’s keep it simple for now‘, which we all could have accepted in this. But the issue comes when we combine error flags with shortcuts.
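The ‘2-3 times during the lifespan’ estimate follows from simple arithmetic; the 4-year application lifespan assumed below is my own illustration, not a figure from any source.

```python
supermoon_interval_months = 18   # roughly one relevant tidal event every 18 months
app_lifespan_years = 4           # assumed typical lifespan of the application
events = app_lifespan_years * 12 / supermoon_interval_months
print(events)                    # about 2.7 events the shortcut must survive
```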

So we get to the Guardian with two parts. The first: “Something deeply weird is occurring within these massively accelerated, opaque markets. On 6 May 2010, the Dow Jones opened lower than the previous day, falling slowly over the next few hours in response to the debt crisis in Greece. But at 2.42pm, the index started to fall rapidly. In less than five minutes, more than 600 points were wiped off the market. At its lowest point, the index was nearly 1,000 points below the previous day’s average“, the second being “In the chaos of those 25 minutes, 2bn shares, worth $56bn, changed hands. Even more worryingly, many orders were executed at what the Securities Exchange Commission called “irrational prices”: as low as a penny, or as high as $100,000. The event became known as the “flash crash”, and it is still being investigated and argued over years later“. In 8 years the algorithms and the systems have advanced and the original settings no longer exist. Yet the entire setting of error flagging and the use of elements and properties is still on the board; even as they evolved and the systems became stronger, new systems interacted with much faster and stronger hardware, changing the calculating events. So when we see “While traders might have played a longer game, the machines, faced with uncertainty, got out as quickly as possible“, they were uncaught elements in a system that was truly clever (read: had more data to work with), and as we are introduced to “Among the various HFT programs, many had hard-coded sell points: prices at which they were programmed to sell their stocks immediately. As prices started to fall, groups of programs were triggered to sell at the same time. As each waypoint was passed, the subsequent price fall triggered another set of algorithms to automatically sell their stocks, producing a feedback effect“, we get the mere realisation that the machine wins every time in a man versus machine way, but only toward the calculations. 
The initial part I mentioned regarding really low tides was ignored; where a person realises that at some point the tide goes back up, no matter what, the machine never learned that part, because the ‘supermoon cycle’ was avoided due to pragmatism, and we see that in the Guardian article with: ‘Flash crashes are now a recognised feature of augmented markets, but are still poorly understood‘. The reason remains speculative, but what if it is not the software? What if there is merely one set of definitions missing, because the human factor auto-corrects for that through insight and common sense? I can relate to that by setting the ‘insight’ that a supermoon happens perhaps once every 18 months and the ‘common sense’ that it returns to normal within a day. Now, are we missing out on the opportunity of using a neap tide as an opportunity? It is merely an opportunity if another person fails to act on such a neap tide. Yet in finance it is not merely a neap tide; it is an optional artificial wave that can change the waves when one system triggers another, and in nanoseconds we have no way of predicting it, merely, over time, the option to recognise it at best (speculatively speaking).
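The feedback effect quoted above can be sketched in a toy model. This is a hypothetical illustration of mine, not any real HFT system: each program has a hard-coded sell point, and every triggered sale knocks the price down far enough to trip the next one.

```python
def cascade(price: float, sell_points: list, impact: float) -> list:
    """Simulate the chain: each triggered sale lowers the price,
    which can in turn trigger the next hard-coded sell point."""
    history = [price]
    remaining = sorted(sell_points, reverse=True)
    while remaining and price <= remaining[0]:
        remaining.pop(0)        # this program dumps its stock...
        price -= impact         # ...pushing the price down further...
        history.append(price)   # ...which may trip the next waypoint.
    return history

# One modest dip below the first waypoint takes out all of them in sequence.
print(cascade(price=99.0, sell_points=[99.5, 98.5, 97.5, 96.5], impact=1.2))
```

No single program here is doing anything other than what it was told; the crash is a property of the interaction, which is exactly the point the article makes.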

We see a variation of this in the Go-game part of the article. When we see “AlphaGo played a move that stunned Sedol, placing one of its stones on the far side of the board. “That’s a very strange move,” said one commentator“, you see it opened us up to something else. So when we see “AlphaGo’s engineers developed its software by feeding a neural network millions of moves by expert Go players, and then getting it to play itself millions of times more, developing strategies that outstripped those of human players. But its own representation of those strategies is illegible: we can see the moves it made, but not how it decided to make them“, that is where I personally see the flaw. You see, it did not decide; it merely played every variation possible, the ones a person will never consider, because it played millions of games, which at 2 games a day represents some 1,370 years of play. The computer ‘learned’ that the human never countered ‘a weird move’ before; some can be corrected for, but that one offers opportunity, whilst at the same time exposing its opponent to additional risks. Now it is merely a simple calculation and the human loses. And as every human player lacks the ability to play for a millennium, the hardware wins, always, after that point. The computer never learned desire, or human time constraints; as long as it has energy it never stops.
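The 1,370-year figure is a back-of-envelope conversion; taking ‘millions of times’ as one million games is my own round-number assumption:

```python
games = 1_000_000          # "millions of times" taken as one million games
games_per_day = 2          # a dedicated human playing two full games a day
years = games / games_per_day / 365.25
print(round(years))        # 1369 — roughly the 1,370 years mentioned above
```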

The article is amazing and showed me a few things I only partially knew, and one I never knew. It is an eye opener in many ways, because we are at the dawn of what is advanced machine learning and as soon as quantum computing is an actual reality we will get systems with the setting that we see in the Upsilon meson (Y). Leon Lederman discovered it in 1977, so now we have a particle that is not merely off or on, it can be: null, off, on or both. An essential setting for something that will be close to true AI, a new way of computers to truly surpass their makers and an optional tool to unlock the universe, or perhaps merely a clever way to integrate hardware and software on the same layer?

What I got from the article is the realisation that the entire IT industry is moving faster and faster, and most people have no chance to stay up to date with it, even when we look at publications from merely 2 years ago. Those systems have already been surpassed by players like Google, reducing storage to a mere cent per gigabyte, and that is not all; media and entertainment are offered great leaps too. When we consider the partnership between Google and Teradici, we see another path. When we see “By moving graphics workloads away from traditional workstations, many companies are beginning to realize that the cloud provides the security and flexibility that they’re looking for“, we might not see the scope of all this. So the article gives us “Cloud Access Software allows Media and Entertainment companies to securely visualize and interact with media workloads from anywhere“, which might be the ‘big load’ but it actually is not. This approach gives light to something not seen before. When we consider makers of software like Q Research Software and Tableau Software (business intelligence and analytics), we see an optional shift. Under these conditions, there is now a setting where a clever analyst with merely a netbook and a decent connection can set up the framework for producing dashboards and result presentations, one that will allow the analyst to produce the results and presentations for the bulk of all Fortune 500 companies in a mere day, making 62% of that workforce obsolete. In addition we see: “As demonstrated at the event, the benefits of moving to the cloud for Media & Entertainment companies are endless (enhanced security, superior remote user experience, etc.). And with today’s ever-changing landscape, it’s imperative to keep up. Google and Teradici are offering solutions that will not only help companies keep up with the evolution, but to excel and reap the benefits that cloud computing has to offer“. 
I take it one step further: as the presentation to stakeholders and shareholders is about telling ‘a story’, the ability to do so and adjust the story on the go allows for a lot more. The question is no longer the setting up of such systems; it is not even reduced to correctly vetting the data used. The moment that falls away, we will get a machine-driven presentation of settings the machine need no longer comprehend, and as long as the story is accepted and swallowed, we will not question the data: a mere presented grey scale with the extremes filtered out. In the end we all signed up for this, and the status quo of big business remains stable and unchanging, no matter what the economy does in the short run.

Cognitive thinking is handed to the AI through the use of data, merely because we can no longer catch up, and in that we lose the reasoning and comprehension of data at the high levels we should have.

I wonder as a technocrat how many victims we will create in this way.




Filed under Finance, IT, Media, Science

Why would we care?

New York is all up in sixes and sevens, even as they aren’t really confused, some are not seeing the steps that are following and at this point giving $65 billion for 21st Century Fox is not seen in the proper light. You see, Comcast has figured something out, it did so a little late (an assumption), but there is no replacement for experience I reckon. Yet, they are still on time to make the changes and it seems that this is the path they will be walking on. So when we see ‘Comcast launches $65bn bid to steal Murdoch’s Fox away from Disney‘, there are actually two parties to consider. The first one is Disney. Do they realise what they are walking away from? Do they realise the value they are letting go? Perhaps they do and they have decided not to walk that path, which is perfectly valid. The second is the path that Comcast is implied to be walking on. Is it the path that they are planning to hike on, or are they merely setting the path for facilitation and selling it in 6-7 years for no less than 300% of what it is now? Both perfectly valid steps and I wonder which trajectory is planned, because the shift is going to be massive.

To get to this, I will have to admit my own weakness here, because we all have filters, and ignoring them is not only folly, it tends to be an anchor that never allows us to go forward. You see, in my view the bulk of the media is a collection of prostitutes. They cater in the first place to their shareholders, then their stakeholders and lastly their advertisers. After that, if there are no clashes, the audience is given consideration. That has been the cornerstone of the media for at least 15 years. Media revolves around circulation, revenue and visibility; whatever is left is ‘pro’ reader. This is why you see the public ‘appeal’ to be so emotionally smitten, because when it is about emotion, we look away, we ignore or we agree. That is the setting we all face. So when a step like this is taken, it will be about the shareholders, which grows when the proper stakeholders are found, which now leads to advertising and visibility. Yet, how is this a given and why does it matter? The bottom dollar will forever be profit. Now, from a business sense that is not something to argue with; this world can only work on the foundation of profit, we get that. Yet newspapers and journalism should be about properly informing the people, and when did that stop? Nearly every paper has investigative journalism; how much of it is the more interesting question. I personally believe that Andrew Jennings might be one of the last great investigative journalists. It is the other side of the coin that we see ignored, and it is the one that matters. The BBC gives us: “Reporter Andrew Jennings has been investigating corruption in world football for the past 15 years“. The question we should ask is how long and how many parties have tried to stop this from becoming public, and how long did it take Andrew Jennings to finally win, and this is just ONE issue. How many do not see the light of day? We look at the Microsoft licensing corruption scandal and we think it is a small thing. It is not; it was a lot larger. 
Here I have a memory that I cannot prove; it was in the newspapers in the Netherlands. On one day there was a small piece regarding Buma/Stemra and the setting of accountancy reports on the overuse of Microsoft licenses in government and municipality buildings, and something on large penalty fees (they would have been astronomical). Two days later another piece was given, stating that the matter had been resolved. The question becomes: was it really? I believe that someone at Microsoft figured out that this was the one moment where, on a national level, a shift to Linux would have been a logical step, something Microsoft feared very, very much. Yet the papers were utterly silent on many levels, true investigation never took place, and after the second piece some large emotional story would have followed.

That is the issue that I have seen, and we all have seen these events; we merely wiped them from our minds as other issues mattered more (which is valid). So I have no grate faith (pun intended) in the media’s events of ‘exposure‘. Here it is not about that part, but about the parts that are to come. Comcast has figured out a few things, and 21st Century Fox is essential to that. To see that picture, we need to look at another one, so it becomes a little more transparent. It also shows where IBM, Google, Apple and some telecom companies are tinkering now.

To see this we need to look at the first image and see what is there: it is all tag based, all data, and all via mobile and wireless communication. Consider these elements; over 90% of car owners will have them: ‘Smart Mobility, Smart Parking and Traffic priority‘. Now consider, for the people who are not homeless: ‘Smart grids, Utility management, home management like smart fridges, smart TVs and data-based entertainment (Netflix)‘, and all those having smart house devices running on what is currently labelled as Domotics; it adds up to megabytes of data per household per day. There will be a run on that data, from large supermarkets to Netflix providers. Now consider the mix between Comcast and 21st Century Fox: breaking news, new products and new solutions to issues you do not even realise exist in matters of eHealth and road (traffic) management, and the EU set 5G Joint-Declarations in 2015, with Japan, China, Korea and Brazil. The entire Neom setup in Saudi Arabia indicates that they will soon want to join all this, or whoever facilitates for the Middle East and Saudi Arabia will. In all this, with all this technology, America is not mentioned; is that not a little too strange? Consider that the given 5G vision is to give ‘Full commercial 5G infrastructure deployment after 2020‘ (expected 2020-2023).
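The ‘megabytes of data per household per day’ claim is easy to sanity-check. Every number below is an assumption of mine for illustration (device counts, message rates and sizes), not measured data:

```python
# (messages per day, bytes per message) for an assumed smart household
devices = {
    "smart meter":  (96, 2_000),   # one reading every 15 minutes
    "smart fridge": (48, 1_500),
    "smart TV":     (500, 4_000),  # telemetry only, excluding the video itself
    "thermostat":   (288, 500),    # one report every 5 minutes
}
total_bytes = sum(count * size for count, size in devices.values())
print(total_bytes / 1_000_000)     # ≈ 2.4 MB per day, before any streaming
```

Multiply that by millions of households and the run on this data described above stops looking abstract.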

With 740 million people covered, and all that data, do you really think the US does not want a slice of data from a population more than twice the size of the American one? This is no longer about billions; this will be about trillions. Data will become the new corporate and governmental currency, and all the larger players want to be on board. So is Disney on the moral high path, or are the requirements just too far from their own business scope? It is perhaps a much older setting that we see when it is about consumer versus supplier. We all want to consume milk, yet most of us are not in a setting where we can be the supplier of milk; having a cow on the 14th floor of an apartment building tends to be not too realistic in the end. We might think that it is early days, yet systems like these require large funds and years to get properly set towards the right approach for deployment and implementation. In this, an American multinational mass media corporation would fit nicely in getting a chunk of that infrastructure resolved. Consider a news medium tagging all the watchers on data that passes them by and, more importantly, the data that they shy away from; it is a founding setting in growing a much larger AI, as every AI is founded on the data it has and, more importantly, the evolving data as interaction changes. In this, 5G will have close to 20 times the options that 4G has now, and in all this we will (for the most part) merely blindly accept data used, given and ignored. We saw this earlier this year when we learned that “Facebook’s daily active user base in the U.S. and Canada fell for the first time ever in the fourth quarter, dropping to 184 million from 185 million in the previous quarter“, yet in the quarter that followed, usage was back to 185 million users a day. So the people ended up being ‘very’ forgiving; it could be stated that they basically did not care. 
Knowing this setting, where the bump for the largest social media data owner was a mere 0.5405%, how is this path anything but a winning path, with an optional foundation of trillions in revenue? There is no way that the US, India, Russia and the Commonwealth nations are not part of this. Perhaps not in some 5G Joint-Declarations, but they are there, and the one thing Facebook clearly taught them was to be first, and that is what they are all fighting for. The question is who will set the stage by being ahead of schedule with the infrastructure in place, and as I see it, Comcast is making an initial open move to get into this field right and quick. Did you think that Google was merely opening 6 data centres, each one large enough to service the European population for close to 10 years? And from the Wall Street Journal we got: “Google’s parent company Alphabet is eyeing up a partnership with one of the world’s largest oil companies, Aramco, to aid in the erection of several data centres across the Middle Eastern kingdom“. If one should be large enough to service 2300% of the Saudi Arabian population for a decade, the word ‘several‘ should have been a clear indication that this is about something a lot larger. Did no one catch up on that small little detail?
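The 0.5405% figure is simply the quoted one-million-user dip measured against the 185 million baseline:

```python
previous_q = 185_000_000   # daily active users before the dip
dip_q = 184_000_000        # daily active users during the dip
drop_pct = (previous_q - dip_q) / previous_q * 100
print(f"{drop_pct:.4f}%")  # 0.5405%
```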

In that case, I have a lovely bridge for sale, going cheap at $25 million, with a great view of Balmain; first come, first served, and all responsibilities will be transferred to you, the new proprietor, at the moment of payment. #ASuckerIsBornEachMinute

Oh, and this is not me making some ‘this evil Google‘ statement, because they are not. Microsoft, IBM and several others are all in that race; the AI is merely the front of something a lot larger. Especially when you realise that data in evolution (read: in real-time motion) is the foundation of its optional cognitive abilities. The data that is updated in real time, that is the missing gem, and 5G is the first setting where that reality becomes feasible.

So why would we care? We might not, but we should care, because we are the foundation of all that IP, and soon it will no longer be us. It gives value to the users and consumers, whilst those who are neither are no longer deemed of any value. That is not the future; it is the near future, and the founding steps for this becoming an actual reality are less than 60 months away.

In the end we might have merely cared too late, how is that for the obituary of any individual?



Filed under Finance, IT, Media, Science

Where we are in gaming

So the E3 is almost done. I saw the EA bit, and I was blown away by Bethesda, where they ended the presentation with 13.2 seconds announcing The Elder Scrolls VI. A mere teaser, but what a teaser; the crowd went insane on the spot (me included). I reckon that it will be a 2019 release and we will hear a lot more after the release of Fallout 76 later this year. When it comes to Fallout 76, it will be a lot bigger than ever before. It allows for single play, play with friends and multiplayer. That is merely the first part; the second part is that Fallout 76 is announced to be 4 times the size of Fallout 4, so any Bethesda fan expecting to be well rested by Christmas had better start buying stocks and options in Red Bull, as they will need it, and lots of it.

There was a lot more announced, most importantly a new free game called Blades, an Elder Scrolls version of Fallout Shelter, yet a very different one; Bethesda went one step further, in that the game is fully playable in both portrait and landscape mode, and the view of the game made me desire an immediate update to my mobile (which is falling apart anyway). In addition, Fallout Shelter became available at that point for both PS4 and Nintendo Switch. So Bethesda is not sitting still, and a lot of it comes at no cost at all, showing a level of gamer care that we have not seen at this level before. Bethesda blew us away with the upcoming DLCs, updates and new games. After that it was time for Microsoft. I have had issues with Microsoft, and they are still growing, yet the presentation given was really good. Phil Spencer knows his shit, and that of many other players in this field. He knows what it is about, and as we saw all kinds of ‘world premieres’, it relied to some degree on both Bethesda and Ubisoft to give some of the goods, but that was not all. I stated it before: I am not a racing fan, but Forza 7 blew me away; it was astounding to see, so I was not ready for what happened next. If Forza 7 is set as a 90%-91% game, the upcoming Forza Horizon 4 is getting us straight to the 100% mark. They really outdid themselves there. It is set in historical England, all of England, and if you think that Forza 7 had the goods, seeing seasons and weather set into the driving, seeing every place go through the 4 seasons, you will see something totally unique, and there is no doubt that if it holds up on the Xbox One X at 4K and 60fps, you are in for a treat; even a non-racing fan like me can see that this is something totally new. There were also announcements of game studios and developers being bought, as well as some of the indie developers showing excellent products. 
Phil Spencer is making waves; he is not out of the woods, as he has to clean up the mess of two predecessors, so he has his work cut out for him. There was also a less nice part. In many cases they did not give any release date, merely ‘pre-order it at the Microsoft store‘. I personally believe that this is the Microsoft path, a path that was dangerous, and I have accused them of not being considerate of gamers. There was more. You see, Microsoft is moving to take the shops out of the equation. They were doing it to some extent already (poorly, I might add), yet now consider Game Pass: “Xbox Game Pass launched back in June, and provides access to more than 100 Xbox One and Xbox 360 games for $10.95 per month“. Before you think that this is a lot, consider that you get access to 100 games, and with the announced new games, we got word that it will include the next Halo, Gears of War and Forza on launch day. So that is a massive teaser, yet I am also scared of the intentions of Microsoft. I have seen this before. You see, TechAU gave away the speculated goods with “selling games is no longer an option. With console hard drive storage sizes increasing to 1-2TB, its possible we need to rethink game ownership completely. The big question will be the games available. If they’re all games from 6-12 months ago, it may still be seen as a good opportunity to play a bunch of games you meant to buy, but never got around to it“. If only it were true, because they already dropped the ball twice on that one. You see, I saw a similar play in the late 80’s by The Evergreen Group; they had government backing and undercut the competition for years, and after that, when the bulk was gone, the prices went back up and they were close to the only player remaining. It seems that Microsoft is on a similar path, and when we saw the FastStart part I got a second jolt of worry, with the statement that they used machine learning to see how gamers play. 
This implies the profiling of all players, so when exactly did you as a gamer agree to that? You see, when this becomes personalised, it is not about the average player, it is about you as the individual player, and I personally believe that the push ‘to pre-order at the Microsoft store‘ is not merely marketing; it is about pushing for online only and taking the shops out of the equation. It makes sense from a business point of view, yet you end up with only the IP they allow you to have, for whatever time you end up having it. I never signed up for that, even if we love the offering they give for now. When the shops can no longer support this model, what happens then? How will you feel in 5 years when your IP is based on a monthly rental? It is a dangerous part, and for now you think it does not matter, but it does. Consider the earlier quote ‘With console hard drive storage sizes increasing to 1-2TB‘, yet the Xbox One X is merely 1TB, so there is that already. Then we realise that the 1 TB merely gets you 800 GB (OS and other space reserved). Now, the previous Gears of War was 103.12 GB, which implies that with one game installed you are down to less than 70%; add Halo 5: Guardians (97.53 GB) and Forza 7 (100 GB). So, with only 3 games installed, 50% of the total drive space is gone (and those mentioned games were the largest ones).
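The storage arithmetic above can be sketched in a few lines (a minimal Python example; the install sizes and the 800 GB usable figure are the ones quoted in the text, and naming the previous Gears of War as "Gears of War 4" is my assumption):

```python
# Storage arithmetic for a "1 TB" Xbox One X, using the figures quoted above.
# "1 TB" on the box, ~800 GB usable once the OS and reserved space are taken out.
nominal_gb = 1000.0
usable_gb = 800.0

installs = {
    "Gears of War 4": 103.12,       # "the previous Gears of War" (assumed title)
    "Halo 5: Guardians": 97.53,
    "Forza Motorsport 7": 100.00,
}

used = sum(installs.values())        # 300.65 GB for just three games
remaining = usable_gb - used         # 499.35 GB left

# Roughly half of the advertised terabyte is gone after three installs.
print(f"{used:.2f} GB used, {remaining:.2f} GB free "
      f"({remaining / nominal_gb:.0%} of the advertised 1 TB)")
```

That is where the "50% of the total drive space is gone" line comes from: the percentage is measured against the nominal 1 TB on the box, not against the 800 GB you can actually use.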

So when I see the mention of space for 12 games, I wonder how correct it is. Now consider the announced games like Fallout 76, the Division 2 and Beyond Good and Evil 2, and wonder what will be left. People will wake up much too soon as they have to reorganise their console drives, way too early in 2018. Consider not just the games, but the patches as well. Now you start seeing the dangers you as a gamer face. The moment that 120 million gamers start working in an online setting (PS4, XB1 and Switch), how long until the telecom bandwidth prices go up? How affordable will gaming remain? For now it looks great, but the bandwidth fountain will be soured; the impact is not short term when it hits, the impact will be too great to consider for now, and the telco companies have not even considered the dangers, only their route to additional revenue. There is supporting evidence. In Australia, its fun-loving product called NBN had 27,000 complaints last year alone. If the old rule holds that for every complaint five people did not bother to file one, we see a much larger issue. With issues like outages and slow data speeds, one number (source: ABC) gives us that at present the 160% growth in complaints ‘equated to 1 per cent of the activated premises‘; how is that to sit through whilst downloading 100 GB for your Xbox One X, and that is merely Australia. In places like London, congestion in a fully online setting will be a normal thing to worry about. So when we see “Julie Waites said her 85-year-old mother Patricia Alexander has been without a working phone at her Redcliffe home, north of Brisbane, since June when the NBN was connected in the area“, which we see 4 months after the event, there is a much larger issue, and Microsoft did not consider the global field, an error they made a few times before, and that is the setting that gamers face. So when your achievements are gone because there was an internet issue for too long, consider where your hard-earned achievements went off to.
I am certain that it is not all Microsoft’s fault, but its short-sighted actions in the past are now becoming a drag on gaming.

The one part that Microsoft does care about is its connections to places like Bethesda and Ubisoft, who in their presentations showed themselves to be much larger players. We get that this is merely beta and engine stuff, but the presentation of the Division 2 rocked. I am not sure how the Crew 2 will do, but it looked awesome; in addition, the EA games looked as sweet as sports games can get on the Xbox One, so they have the goods. Phil Spencer is making waves and he is showing changes, but how trusting will this audience remain after a mere two incidents where gaming was not possible for reasons not in the hands of Microsoft? Their support division stated last year that the data uploaded from my console (not by me) was entirely the responsibility of the internet provider; are you kidding me? Yet the games do look good, there is no denying that, yet their infrastructure might be the Achilles heel that they face in the coming year. There was also time for the upcoming AC game called Odyssey. It is very similar in look to Origin, and graphically it is stunning. The view also shows that AC is changing; it has a much larger political impact on the story line and the changes you can make. It is a lot more RPG based than ever before, which as an RPG lover I very much appreciate, and the choice of a male or a female player is also a change for good; unlike AC Syndicate, the choice will last for the duration of the game, making it a much larger replayable challenge. The demo shows that there definitely are changes; some are likely gamer requests, the rest seems to be a change to make the game more appealing to the play style you choose, but that part is speculation from my side. I would want to be cautious, yet they truly took the game to the next level with AC Origin, which makes me give them the benefit of the doubt. The setting that Ubisoft brings is much stronger than last year, so it could end up being a stellar year for Ubisoft.
When we get to Sony, I become a little cautious. Even so, as we saw in the previous presentation, instead of merely presenting titles, having live music on stage performing the music from the games was a really nice touch. I do not know about you the gamer, yet I have been more and more connected to the music as its quality has been on the rise, so the performances were well appreciated. It might have started as early as ACII and Oblivion; now we see that good music is a much larger requirement in any game. A much darker The Last of Us 2 (if that was even possible) sets the stage for what is to come. Yet, even as we see awesome presentations of what is to come, I have to admit that Microsoft had the better presentation. Sony is also playing the ‘store’ setting with PlayStation Store bonus options. The games are overwhelming, and those are merely the exclusive titles. When we consider all that Ubisoft and Square Enix bring to the table, it shows to be a great year for all PlayStation owners. Yet the advantage that they have over Microsoft is not as large as you would think. The question becomes how heavily the advantage of the Last of Us 2, Ghost of Tsushima and Spiderman weighs, yet when set opposite Forza Horizon 4, Halo Infinite and Sea of Thieves I wonder if it remains a large advantage. Sony has more to offer, yet the overwhelming exclusive benefit is not really there. So when we look at a new Resident Evil, actually a remade version of Resident Evil 2, we remain happy for the ‘unhealthy’ life-diminishing gaming treats on offer; both consoles will be offering gaming goods we all desire. There is no doubt that gaming revenue will go through the roof, and it seems that games are not just on the rise, the predictions are that they will grow the market in nearly every direction. And we still have to hear from Nintendo; you see, that one is important for both Microsoft and Sony.
There is little doubt that they will surpass the Xbox One in total sales; it is now a setting where they might be able to pull this off as early as Thanksgiving, a setting Microsoft is not ready for. The ‘most powerful console‘ will optionally get surpassed by the weakest one, as Microsoft has not delivered its AAA games for close to two years. Three simple changes could have prevented that, yet the view and setting of always online, Game Pass and storage destroyed it; the mere consideration of infrastructure was missed by Americans focused on local (US) infrastructure, forgetting that an optional 92.3% of the desired customer base lives outside of the USA. The simplest of considerations missed; how is that for a hilarious setting? Oh, and getting back to the Sony presentation, if you thought God of War surpassed your expectations, it seems (from the demo) that Spiderman is likely to equal if not surpass that event. So there is one issue that the others will have to deal with: the PlayStation players (Xbox One players too) will just have to wait and be overwhelmed by the number of excellent games coming their way before Christmas, because for both of them the list seems to be the largest list ever. I am posting this now and will perhaps update a few Nintendo settings, as there are several revelations coming. GeekWire gives us in all this “Microsoft’s Xbox One still faces an uphill climb vs. Sony and Nintendo“, yet the article misses out. You see, even if we are to agree with “Microsoft has effectively made its own console irrelevant, because even with the Windows Anywhere initiative, there’s no particular reason for a dedicated enthusiast to own an “Xbone” if you already have a PC. There are certainly advantages, such as ease of use, simplicity of play, and couch gaming, but the same money you spend on the Xbox could be going to tune up your computer so you can play the same games in a higher resolution“, we see the truth, but a wrong one.
You see, ‘the same money you spend on the Xbox could be going to tune up your computer‘ is not correct. We need to consider “you can find a large number of 3840×2160-resolution displays in the $300 to $500 range“, as well as “For a better 4K experience, look to the $450 GeForce GTX 1070 Ti, $500 GeForce GTX 1080, and $500 Radeon RX Vega 64, though the Radeon card is still suffering from limited availability and inflated prices. These cards still won’t hit a consistent 60 fps at 4K resolution in the most strenuous modern games“, so you are down for a lot more than the price of the Xbox One X and still do not get the promised 60fps that the Xbox One X delivers. And that is before you realise that a TV tends to be 4 times the size of a PC display. The biggest issue that has not been resolved is the mere stupidity of 6mm of space, which would allow for a 3TB Seagate BarraCuda; it would have diminished most other issues. Now merely evolve the operating system that requires people to be online all the time, and Microsoft would have created an optional winning situation. It need not impact the need (or desire) for Game Pass, and it would cut the obstruction curve by well over 70% overnight, all when you consider that there is a $65 difference for 300% of the storage, something that the 4K community needs.
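The two price arguments above can be put side by side in a small sketch (component prices are the ones quoted in the article; the US$499 Xbox One X launch price is my assumption):

```python
# PC-vs-console cost comparison using the component prices quoted above.
xbox_one_x = 499                      # launch price in USD (my assumption)

pc_4k_upgrade = {
    "GeForce GTX 1080": 500,          # still no consistent 60 fps at 4K
    "4K display (low end)": 300,      # bottom of the quoted $300-$500 range
}
upgrade_cost = sum(pc_4k_upgrade.values())
print(upgrade_cost, upgrade_cost > xbox_one_x)   # the upgrade alone beats the console price

# The storage point: roughly $65 extra buys 300% of the capacity.
storage_gain_tb = 2.0     # going from a 1 TB drive to a 3 TB BarraCuda
extra_cost = 65           # quoted price difference in USD
print(extra_cost / storage_gain_tb)   # cost per added terabyte
```

Even taking the cheapest quoted display, the GPU-plus-display route already exceeds the console's price before counting the rest of the PC, which is the point being made against the GeekWire quote.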
Phil Spencer has one hell of a fight coming his way, and if he can counter the Microsoft stupidity shown up to now, he could turn the upcoming number three position around in 2019, making Microsoft a contender again at some point. Yet if the short-sighted board of Microsoft is not willing to adhere to some views, they will lose a lot more than a few hundred million in console development; they might lose a large customer population forever, because gamers hold a grudge like no other, and it was not merely the cost of the console, the games they bought might exceed that amount by close to 3:1, and once gone they will never ever return. That is the stage we see now, and even as there is a lot of improvement, where it matters no changes were made. So even as we should all acknowledge that Phil Spencer is a large change for the better, Microsoft needs to do more. They have the benefit that Sony gave a good show, yet not as good as Microsoft. Perhaps the live presentations are the E3 part we all desire; the demos and previews were all great on both systems. In that regard Ubisoft and Bethesda both brought their home run at the E3, and well deserved ones they are. As both deliver to both consoles there were no losses on either side, only wins for both sides, yet that leaves the small devil in my brain considering the question: what if Fallout 76 is 4 times the size of Fallout 4, which (according to Eurogamer) ‘required 100GB install sizes as a minimum‘ for 4K? How much more will Fallout 76 need? It is in that light that we need to look at a 1TB drive, something I saw coming 5 years ago. So whoever buys a 1TB system will soon (too soon) stop being happy. That is one of the fights Phil Spencer will face soon enough, an issue that could have been prevented 6 years ago.
It is so cool to see all these games coming, whilst we see a storage system supporting merely part of what comes, and that is before we see the network congestion as a few million people try to update their game and get access to the networking facilities. It was an issue that haunted Ubisoft with the initial Division in 2016. When we saw ‘I’m still at work, had to stay overtime and I’m really salty because I might not even play today because of all this server downtime‘, I merely stated that they could have seen that one coming a mile away. Ubisoft upgraded everything and I do not expect to see this in the Division 2, yet consider that it is not merely one game. Consider every gamer getting issues when they want to access Gears 5 and Halo Infinite on launch day. That is the issue we could see coming, and in all honesty, in most cases it will not even be the fault of Microsoft at all. The evidence was seen in Australia merely a week ago when ABC treated us to “NBN Co chief executive Bill Morrow suggested that “gamers predominantly” were to blame for the congestion across the National Broadband Network. He later clarified that he wasn’t blaming gamers for congestion, but reiterated that they are “heavy users”“. That is the reality setting, where Counter-Strike: Global Offensive and Destiny 2, two games, account for 49% of the average hourly bandwidth usage; now add Fallout 76, Gears 5, the Division 2, EA Access and Microsoft Game Pass. You still think I am kidding? And that is merely Australia; now add London congestion, and when some news sources give us “London, Singapore, Paris and New York taking top spots when we consider internet congestion“, I reckon that Europe has issues to a much larger extent. Consider in addition that the Deutsche Welle gave us last January “A new report has found that only a small fraction of German users get the internet speeds that providers promise“, as well as “the problem is only getting worse“.
That is the setting Microsoft is starting to push for, and gamers will not be enjoying the dangers that this will bring. Certain high-level non-thinkers at Microsoft are making this happen, and now Phil Spencer will be faced with the mess that needs cleaning up. It is the part that many have been ignoring and it will hit Microsoft a lot harder, especially when it wants to move away from uphill battles. It is a sign that we cannot ignore: whilst the plan might be valid in 4-6 years, the shortage in hardware and infrastructure at present will not be solved any day soon, and that counts against Microsoft. The impact will hit Nintendo as well, but not nearly as hard. The evidence is out there, yet some analysts seem to have taken it out of the equation. Is that not an interesting view that many ignored?

So we are moving forward in gaming, no one denies that, but overall, some cards (like always online) were played much too early and it will cost one player a hell of a lot more than they bargained for.



Filed under Gaming, IT, Media, Science

The gaming E-War is here

The console operators are seeing the light. Even as it comes with some speculation from the writers (me included), we need to try to put a few things in proper proportion. It is a sign of certain events, and Microsoft is dropping the ball again. The CNet news gives us “Microsoft’s big E3 sale on Xbox consoles, games starts June 7“, where we see “Save 50 percent or more on season passes, expansions and DLC and other add-ons“, which sounds good. Yet in opposition, some claim that as Microsoft has nothing really new to report (more correctly, much too little to report), they want to maximise sales now, hoping to prevent people from moving away from the Xbox. I do not completely agree. Even as the setting of no new games is not completely incorrect, the most anticipated new games tend not to come out in the first month after the E3 (they rarely do), so Microsoft trying to use the E3 to cash in on revenue is perfectly sound and business minded. Out with the old and in with the new, as some might say. Yet Microsoft has been dropping the ball again and again; more and more people are experiencing the blatant stupidity in the way Microsoft deals with achievements, and now we see that these scores are too often unstable (I witnessed this myself). There is a flaw in the system and it is growing; in addition, I found a flaw in several games where achievements were never recognised, implying that the flaw is a lot larger and has been going on for more than just a month or so. The one massive hit that the Xbox 360 created is now being nullified, because greed made Microsoft set what I refer to as ‘the harassment policy’ of ‘always online‘; this is now backfiring, because it potentially drives people to the PlayStation, which fixed that approach 1-2 years ago (some might prefer the Nintendo Switch). Nintendo needs to fix their one-year calendar issue fast before it starts biting them (if they have fixed it, you have my apologies).

Sony is not sitting still either, as CNet reports, with the quote “Starting Wednesday, June 6, the company will spoil one announcement each and every day for five days in a row. Sony is being tight-lipped about the details, but those announcements will include [censored]“. Yet getting back to Microsoft, they do need and should get recognition for “Up to 75% off select games including Monster Hunter: World, Sea of Thieves and PlayerUnknown’s Battlegrounds“. I admit that a game like Monster Hunter is an acquired taste, yet 75% off a 95%-rated game like Monster Hunter is just amazing, and that game alone is worth buying the Xbox One X for. I only saw the PlayStation edition, yet the impression was as jaw dropping as seeing the 4K edition of AC Origin, so not seriously considering that game at a 75% discount is just folly.

The issue is mainly what Microsoft is aiming for (and optionally not telling the gamers). They never made any secret of their desire for the cloud. I have nothing against the cloud, yet when I play games in single player mode, there is no real reason for the cloud (there really is not). So when I see that Microsoft bought GitHub for a little less than 10 billion, we should seriously consider that this will affect the Xbox One in the future; there is no way around it. Even as we see the Financial Times and the quotes of optional consideration “Microsoft is a developer-first company, and by joining forces with GitHub we strengthen our commitment to developer freedom, openness and innovation,” a claim from CEO Satya Nadella, he can make all the claims he likes. Yet when we consider that this is a setting of constant updates, upgrades and revisions, we see the possible setting where a gamer faces the hardship that Oracle DBAs faced between versions 5 and 7: a possibly near-daily routine of checking libraries, updates and implementations for installed games. Yes, that is the real deal a gamer wants when he/she gets home! (Reminder: the previous part was highly speculative)

As we get presentations from the marketeers, those who brought us ‘the most powerful console on the market‘, they are likely to bring slogans in the future like ‘games that are many times larger than the media can currently hold‘, or perhaps ‘games with the option of bringing additions down the track without charge‘, or my favourite ‘games growing on every level, including smarter enemies‘. All this requires updates and upgrades, yet the basic flaw of the Xbox needing extra drives, extra hardware and power points, whilst increasing the amount of downloads with every month such a system is running, is not what we signed up for, because at that point getting a gaming PC is probably the better solution. A business setting aimed at people who wanted to have fun. This is exactly the setting that puts the AU$450 PS4, AU$525 and AU$450 Nintendo Switch at the front of the mind of every gamer soon enough.

The elemental flaw that the system holds is becoming an issue for some, and when (or if) they decide to push to the cloud to that extent, the issues I describe will only grow. Now, I will state that in a multiplayer environment a GitHub setting has the potential to be ground breaking, and my fun with the slogans I gave in orange could become the truly devastating settings that form an entirely new domain in multiplayer gaming. Yet we are not there yet and we will not be there for some time to come. Even as Ubisoft is getting better, and they did truly push the edge with AC Origin, you only have to think back to The Division, the outages and connection issues. The moment that this hits your console for single player is the moment you learn the lesson too late. In a similar view we can look at the lessons learned with Assassin’s Creed Unity: what I call clearly bad testing and perhaps a marketing push to get the game out too early ‘to satisfy shareholders‘, whilst gamers paid AU$99 for a game needing a ‘mere’ patch, which was stated in the media in 2014 as: “The fourth patch for Assassin’s Creed: Unity arrived yesterday as a sizable 6.7 GB download. At least, that’s the case for non-Xbox One players; some players using the Microsoft console are facing 40 GB downloads for the patch“. Think of that nightmare hitting your console in the future; with the cloud the issue actually becomes more dangerous when patches are not properly synched and tested. That was the fourth patch, and that was before 4K gaming became an option on consoles, which would have made the Unity download a speculated 80GB, over 10% of the available space of an empty Xbox One. Now consider that such patches would be enormous on the PS4 Pro as well, whilst Microsoft could have prevented 40% of the issues we were faced with well over a year ago. Now consider how you want your gamer life to be. Do you still feel happy at present?

Oh, and Sony is not out of the woods either. Even as some are really happy with the PS4 Pro, it must be clearly stated that there are enough issues with frame rates in several games, all requiring their own patch, which is not a great setting for Sony to face. Even as the new games are more than likely up to scratch, and previously released games like Witcher 3 are still getting patches and upgrades, the fact that God of War had issues was not a great start; the game looked amazing on either system. Still, when it comes to fun, it seems that Nintendo has the jump on both Sony and Microsoft. The Splatoon 2 weapons update (lots more weapons) is just one of the settings that will entice the Nintendo fans not to put away their copy of Splatoon 2 any day soon. In addition, Amazon implied that Fallout 76 will be coming to the Nintendo Switch, which is a new setting for both Sony and Microsoft. Those imagining that this is a non-issue because of the graphics need to play Metroid Prime on a GameCube and watch it deliver twice the value that Halo one and two gave on an Xbox (with their much higher resolution graphics). Whoever holds the mistaken belief that high-res graphics are the solution to everything has clearly never seen how innovative gaming on a Nintendo outperforms ‘cool looking images‘ every single time. Now that Bethesda is seeing the light, we could be in for a new age of Vault-Tec exploration, but that is merely my speculated view. That being said, the moment we see Metroid Prime 1 and 2, as well as Pikmin and Mario Sunshine on Switch, will be the day that both my Xbox One and PS4 will be gathering dust for weeks. These games are that much more fun. I just hope that it will not overlap with the release of some PS4 games I have been waiting for (like Spiderman), because that in equal measure implies that I need to forgo hours of essentially needed sleep.
Mother Nature tends to be a bitch when it boils down to naturally needed solutions (I personally do not believe in a Red Bull life to play games).

So as we are in the last 4 days before the E3 begins, we are more and more confronted with speculation and anticipation. CNet was good enough to focus on released facts, which is awesome at present. Yet we are all awaiting the news. That being said, the leaks this year have been a lot larger and revealed information has been on overload too. It might be the first sign that the E3 events are winding down. There had been noise on the grapevine a few weeks ago, yet I was not certain how reliable that information was. The leaks and pre-release information do imply that E3 is no longer the great secret basket to wait for that it was in previous years. We will know soon, so keep on gaming, and no matter which console your heart belongs to, make sure you have fun gaming!



Filed under Gaming, IT, Media, Science

Data illusions

Yesterday was an interesting day for a few reasons; one of the primary ones was an opinion piece in the Guardian by Jay Watts (@Shrink_at_Large). Like many articles, I initially considered it to be in opposition, yet when I reread it, this piece has all kinds of hidden gems and I had to ponder a few items for an hour or so. I love that! Any piece, article or opinion that makes me rethink my position is a piece well worth reading. So this piece called ‘Supermarkets spy on them now‘ has several sides that require us to think and rethink issues. As we see a quote like “some are happy to brush this off as no big deal” we identify with too many parts; to me and to many it is just that, no big deal, but behind the issues are secondary issues that are ignored by the masses (en masse as we might giggle), yet the truth is far from nice.

So what do we see first as primary and what is behind it as secondary? First we see the premise “if a patient with a diagnosis of paranoid schizophrenia told you that they were being watched by the Department for Work and Pensions (DWP), most mental health practitioners would presume this to be a sign of illness. This is not the case today.” It is not whether this is true or not; it is not a case of watching, being a watcher or even watching the watcher. It is what happens behind it all. So, recollect that dead-dropped donkey called Cambridge Analytica, which was all based on interacting and engaging on fear. Consider what IBM and Google are able to do now through machine learning. This we see in an addition to a book from O’Reilly called ‘The Evolution of Analytics‘ by Patrick Hall, Wen Phan, and Katie Whitson. Here we see the direct impact of platforms like SAS (Statistical Analysis System) in the application of machine learning; we see this on page 3 of Machine Learning in the Analytic Landscape (not a page 3 of the Sun by the way). Here we see for the government “Pattern recognition in images and videos enhance security and threat detection while the examination of transactions can spot healthcare fraud“. You might think it is no big deal, yet you are forgetting that it is more than the so-called implied ‘healthcare fraud‘. It is the abused setting of fraud in general and the eagerly awaited setting for ‘miscommunication’, whilst the people en masse are now set in a wrongly categorised world, a world where assumption takes control and scores of people are pushed into defending their actions, an optional shift towards ‘guilty until proven innocent’, whilst those making the assumptions are clueless on many occasions yet now believe that they know exactly what they are doing. We have seen these kinds of bungles impact thousands of people in the UK and Australia.
It seems that Canada has a better system, where every letter with the content ‘I am sorry to inform you, but it seems that your system made an error‘ tends to overthrow such assumptions (yay for Canada today). So when we are confronted with “The level of scrutiny all benefits claimants feel under is so brutal that it is no surprise that supermarket giant Sainsbury’s has a policy to share CCTV “where we are asked to do so by a public or regulatory authority such as the police or the Department for Work and Pensions”“, it is not merely the policy of Sainsbury’s, it is what places like the Department for Work and Pensions are going to do with machine learning and their version of classification, whilst the foundation of true fraud is often not clear to them. So you want to set up a system without clarity and hope that the machine will constitute learning through machine learning? It can never work; the evidence is that the initial classification of any person in a fluidic setting alters under the best of conditions. Such systems are not able to deal with the chaotic life of any person not in a clear lifestyle cycle, with people on pensions (trying to merely get by) as well as those who are physically or mentally unhealthy. These are merely three categories where all kinds of cycles of chaos tend to intervene with daily life. Those are now shown to be optionally targeted not just with a flawed system, but with a system where the transient workforce using those methods is unclear on what needs to be done, as the need changes with every political administration. A system under such levels of basic change is too dangerous to link to any kind of machine learning. I believe that Jay Watts is not misinforming us; I feel that even the writer here has not yet touched on many unspoken dangers.
There is no fault here by the one who gave us the opinion piece. I personally believe that the quote “they become imprisoned in their homes or in a mental state wherein they feel they are constantly being accused of being fraudulent or worthless” is incomplete, yet the setting I refer to is mentioned at the very end. You see, I believe that such systems will push suicide rates to an all-time high. I do not agree with “be too kind a phrase to describe what the Tories have done and are doing to claimants. It is worse than that: it is the post-apocalyptic bleakness of poverty combined with the persecution and terror of constantly feeling watched and accused“. I believe it to be wrong because this is a flaw on both sides of the political aisle. Their state of inaction for decades forced the issue out, and as the NHS is out of money and is not getting any, the current administration is trying to find cash in any way that it can, because the coffers are empty, which now gets us to a BBC article from last year.

There, the BBC gave us: “A survey in 2013 by Ipsos Mori suggested people believed that £24 out of every £100 spent on benefits was fraudulently claimed. What do you think – too high, too low? Want to know the real answer? It is £1.10 for every £100“. That is the dangerous political setting as we should see it: the assumption and belief that 24% is lost to fraud, when it is more realistic that 1% might be the actual figure. Let’s not be coy about it, because out of £172.3bn a 1% amount still remains a serious amount of cash, yet when you set it against the UK population the amount becomes a mere £25 per person; it merely takes one prescription to get to that amount, one missed on the government side and one wrongly entered on the patient’s side and we are there. Yet in all that, how many prescriptions did you the reader require in the last year alone? When we get to that nitty-gritty level we are confronted with the task where machine learning will not offer anything but additional resources to double-check every claimant and offence. Now, we should all agree that machine learning and analysis will help in many ways, yet when it comes to ‘Claimants often feel unable to go out, attempt voluntary work or enjoy time with family for fear this will be used against them‘ we are confronted with a new level of data, and when we merely look at the fear of voluntary work or being with family, we need to consider what we have become. So in all this we see a rightful investment into a system that in the long run will help automate all kinds of things and help us see where governments failed their social systems, yet also a system that costs hundreds of millions to look into an optional 1% loss, which at 10% of the losses might make perfect sense. Yet these systems are flawed from the very moment they are implemented, because the setting is not rational, not realistic, and in the end will bring more costs than anyone considered from day one.
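The perception gap can be put into numbers directly. A small sketch using the figures quoted above; the UK population of roughly 66 million is my own assumption, which is why the per-person figure lands near, rather than exactly on, the rounded £25:

```python
# Perceived vs measured benefit fraud, using the quoted figures.
welfare_budget = 172.3e9     # total benefits spend, in pounds
believed_rate = 24 / 100     # what survey respondents assumed (Ipsos MORI, 2013)
actual_rate = 1.10 / 100     # the measured figure: £1.10 per £100

believed_loss = welfare_budget * believed_rate   # what the public imagines is lost
actual_loss = welfare_budget * actual_rate       # what is actually lost

# The text's 1% scenario, spread over the whole population.
population = 66e6            # rough UK population (my assumption)
per_person = welfare_budget * 0.01 / population

print(f"believed £{believed_loss/1e9:.1f}bn vs actual £{actual_loss/1e9:.1f}bn, "
      f"1% is about £{per_person:.0f} per person")
```

The believed figure comes out more than twenty times the measured one, which is exactly the gap that makes a hundreds-of-millions detection system so hard to justify against a roughly £1.7bn problem.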
So the setting of finding ways to justify the 2015 ‘The Tories’ £12bn of welfare cuts could come back to haunt them‘ will not merely fail; it will add £1 billion in costs of hardware, software and resources, whilst not getting the £12 billion in workable cutbacks. Where exactly was the logic in that?

So when we are looking at the George Orwell edition of ‘Twenty Eighteen‘, we all laugh and think it is no great deal, but the danger is actually twofold. The first part is one I used to teach students, and it gets us the loss of choice.

The setting is a supermarket that needs to satisfy the needs of its customers; based on the survey it runs, it will keep items in a category (lollies, for example) that are rated ‘fantastic value for money‘ and ‘great value for money‘, or the top 25th percentile of the products, whichever is the largest. So in a setting with 5,000 responses, the issue was that the 25th percentile now also included ‘decent value for money‘, and we get a setting where an additional 35 articles were kept in stock for the lollies category. This was the setting where I showed the value of what is known as User Missing Values. There were 423 people who had no opinion on lollies, who for whatever reason never bought those articles. Removing them from consideration, a choice merely based on actual responses, the remaining 4,577 people gave us a top 25th percentile that only held ‘fantastic value for money‘ and ‘great value for money‘, and within that setting 35 articles were removed from that supermarket. Here we see the danger! What about those people who really loved one of those 35 articles, yet were not interviewed? The average supermarket does not have 5,000 visitors; it has, depending on the location, up to a thousand a day. More important, what happens when we add a few elements and it is no longer about supermarkets but government institutions, and it is not about lollies but fraud classification? What when we are set in a category of ‘Most likely to commit Fraud‘ and ‘Very likely to commit Fraud‘, whilst those people with a job, and bankers, are not included in the equation? We get a diminished setting of fraud from the very beginning.
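The User Missing Values effect can be shown in a few lines. The 5,000 responses and the 423 ‘no opinion‘ respondents are from the example above; the exact distribution of ratings is made up by me for illustration:

```python
# Hypothetical rating counts: 5=fantastic, 4=great, 3=decent value for money,
# down to 0 = no opinion (the 423 "user missing values"). 5,000 responses total.
responses = [5] * 700 + [4] * 500 + [3] * 1400 + [2] * 1500 + [1] * 477 + [0] * 423

def top_quartile_cutoff(scores):
    """Return the rating found at the top-25th-percentile boundary."""
    ordered = sorted(scores, reverse=True)
    return ordered[len(ordered) // 4]

with_missing = top_quartile_cutoff(responses)                      # zeros counted
valid_only = top_quartile_cutoff([s for s in responses if s > 0])  # 4,577 opinions

# Counting the "no opinion" people shifts the quartile boundary, so a lower
# rating category ('decent') slips into the top 25% and its articles stay stocked.
print(with_missing, valid_only)
```

With the missing values included, the cutoff lands on rating 3 (‘decent‘) and those 35 articles stay; with them excluded, the cutoff is rating 4 and the articles go. Same data, opposite stocking decision.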

Hold Stop!

What did I just say? Well, there is method to my madness. Two sources: the first (no idea who they were) gave us a reference to a 2009 book called ‘Insidious: How Trusted Employees Steal Millions and Why It’s So Hard for Banks to Stop Them‘ by B. C. Krishna and Shirley Inscoe (ISBN-13: 978-0982527207). Here we see “The financial crisis appears to be exacerbating fraud by bank employees: a new survey found that 72 percent of financial institutions say that in the last 12 months they have experienced a case of data theft by one of their workers“. Now, it is important to realise that I have no idea how reliable these numbers are, yet the book was published, so there will be a political player using this at some stage. This already tumbles the academic reliability of fraud figures in general. For an actually reliable source we see KPMG, who gave us last year “KPMG survey reveals surge in fraud in Australia“, with “For the period April 2016 to September 2016, the total value of frauds rose by 16 percent to a total of $442m, from $381m in the previous six month period“. We see numbers, yet they are based on a survey, and how reliable were those giving their view? How much was assumption, unrecognised numbers, and ‘forecasted increases‘ that were not met? That issue was clearly brought to light by the Sydney Morning Herald in 2011, where we see: “the Australian Content Industry Group (ACIG), released new statistics to The Age, which claimed piracy was costing Australian content industries $900 million a year and 8000 jobs“, yet the issue is not merely the numbers given; the larger issue is “the report, which is just 12 pages long, is fundamentally flawed.
It takes a model provided by an earlier European piracy study (which itself has been thoroughly debunked) and attempts to shoe-horn in extrapolated Australian figures that are at best highly questionable and at worst just made up“, so the claim “4.7 million Australian internet users engaged in illegal downloading and this was set to increase to 8 million by 2016. By that time, the claimed losses to piracy would jump to $5.2 billion a year and 40,000 jobs” was a joke, to say the least. There we see the issue of fraud in another light, based on a different setting; the same model was used, and that is whilst I am more and more convinced that the European model was likely flawed as well (a small reference to the Dutch Buma/Stemra setting of 2007-2010). So not only are the models wrong, the entire exercise gives us something that was never going to be reliable in any way, shape or form (personal speculation). In this we now have the entire machine learning setting, the political setting of fraud, the speculated numbers involved, and what is ‘disregarded’ as fraud. We will end up with a scenario where we get 70% false positives (a pure rough assumption on my side) in a collective where checking those numbers will never be realistic, and the moment the parameters are ‘leaked’ the actual fraudulent people will change their settings, making detection of fraud less and less likely.
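My 70% is a rough guess, but the base-rate arithmetic behind it is easy to sketch. All three rates below are assumptions of mine, not figures from any source; the point is that with roughly 1% true fraud, even a fairly accurate detector mostly flags the innocent:

```python
# Back-of-envelope base-rate check; every rate here is an assumed, illustrative value.
prevalence = 0.01       # roughly 1 in 100 claims actually fraudulent
sensitivity = 0.90      # assumed: the system flags 90% of real fraud
fp_rate = 0.05          # assumed: it wrongly flags 5% of honest claimants

flagged_fraud = prevalence * sensitivity        # true positives, per claimant
flagged_honest = (1 - prevalence) * fp_rate     # false positives, per claimant
share_false = flagged_honest / (flagged_fraud + flagged_honest)
print(f"{share_false:.0%} of everyone flagged is innocent")
```

Under these assumed rates, roughly 85 in every 100 flagged claimants would be innocent, so a 70% false positive share is, if anything, on the optimistic side.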

How will this fix anything other than the revenue need of those selling machine learning? So when we look back at the chapter on Modern Applications of Machine Learning we see “Deploying machine learning models in real-time opens up opportunities to tackle safety issues, security threats, and financial risk immediately. Making these decisions usually involves embedding trained machine learning models into a streaming engine“, which is actually true. Yet when we also consider “review some of the key organizational, data, infrastructure, modelling, and operational and production challenges that organizations must address to successfully incorporate machine learning into their analytic strategy“, the element of data and data quality is overlooked on several levels, making the entire setting, especially in light of the piece by Jay Watts, a very dangerous one. So the full title, which I intentionally did not use in the beginning, ‘No wonder people on benefits live in fear. Supermarkets spy on them now‘, rests wholly on the premise of data quality, and on knowing that the players in this field are slightly too happy to generalise and trivialise that issue. The moment that comes to light, and the implementers are held accountable for data quality, is when all those now hyping machine learning will change their tune instantly and give us all kinds of ‘party line‘ excuses that they are not responsible; issues that I personally expect they did not really highlight when they were all about selling that system.

Until data cleaning and data vetting get a much higher position on the analyses ladder, we are confronted with aggregated, weighted and ‘expected likelihood‘ generalisations, and those who are ‘flagged’ via such systems will live in constant fear that their shallow way of life stops because a too highly paid analyst stuffed up a weighting factor, condemning a few thousand people to be tagged for all kinds of reasons, not merely because they could possibly be part of the 1% that the government is trying to clamp down on, or was that 24%? We can believe the BBC, but can we believe their sources?

And if there is even a partial doubt on the BBC data, how unreliable are the aggregated government numbers?

Did I oversimplify the issue a little?




Filed under Finance, IT, Media, Politics, Science

Be not stupid

There is an article in the Guardian. Now, we all agree that anyone has their own views, that has been a given for the longest of times, and those reading my blog know that I have a different view at times, yet for the most part I remain neutral and non-attacking towards those with a different view; that’s how I roll.

Today is different, the article “‘Easy trap to fall into’: why video-game loot boxes need regulation” by Mattha Busby (@MatthaBusby) got to me. It is time for people to realise that when you are over 18, you are responsible for your actions. So I have, pretty much, no patience with any American, Reddit user or not, who gives us “a Reddit user who claims to have spent $10,000“. If you are that stupid, you should not be allowed to play video games.

The Setting

To comprehend my anger, you need to realise the setting we see here. You see, loot boxes are not new. This goes all the way back to 1991, when Richard Garfield created Magic: The Gathering. I was not really on board in the beginning, but I played the game. The issues connect when you realise how the product was sold. There was a starter kit (which we call the basic game) with enough cards to start playing, as well as the essential cards you need to play it. To get ahead in the game you need to get boosters, and here is where it gets interesting. Dozens of games work on the principle that Richard Garfield founded. A booster would have 9-13 cards (depending on the game): 1 (read: one) rare card (or better), 3 uncommon cards, and the rest common cards. I played several of these games, and in the end (after 20 boosters) it was merely about collecting the rare cards if you wanted a complete set. Some would not care about that and could still play the game. So this is not a new thing; if you truly spent $10,000 you should not complain. If you have the money it is not an issue; if you did not, you are too stupid for words. In video games it is not new either. Mass Effect 3, the best multiplayer game ever (my personal view), had loot boxes as well; I am pretty sure that they were the first. Yes, you could buy them, with money or with Microsoft points. The third option was that you could gather points whilst playing (at the cost of $0) and use these earned points to buy loot boxes, the solution most people used. Over time you would end up with sensational goods to truly slice and dice the opponents, all gained through play time, no extra cash required.

So when I see places like Venture Beat (and the Guardian of course) give us statements like “some people, policymakers, and regulators — including the gaming authorities in Belgium and Netherlands — that those card packs have are gambling“, I see these statements as moronic and regard them as false presentation. You see, that is not what it is about! When you see the attached picture, you see that these cards are sold EVERYWHERE. The issue is that the CCG card games are sold in shops, which means that revenue is TAXED. The online sales are not, and now policymakers are all up in arms because they lost out on a non-taxable ‘$1.25 billion during its last quarter even without releasing a major new game‘; that is the real issue, and they are now all acting in falsehood. So, when I see “I am currently $15,800 in debt. My wife no longer trusts me. My kids, who ask me why I am playing Final Fantasy all the time, will never understand how I selfishly spent money I should have been using for their activities“, as well as “he became addicted to buying in-game perks, which he later described as ‘digital garbage’“, I merely see people without discipline, without proper control. So without any regard for diplomacy I will call them junkies, plain and simple; junkies who have no idea just how stupid they are. And since when do we adjust policy for junkies? Since when are the 99% who hold themselves accountable, who have the proper discipline not to overspend, and some (like me) who never considered loot boxes in a game like Shadow of War, now being held to account, to a lessened gaming impact, because of junkies? Can anyone answer me this?

Now, we need to take one or two things into consideration. Are the FIFA 18 loot boxes set in a similar light? That is the one place where FIFA is (seemingly) in the wrong. You see, I have been searching for any info on what is in a FIFA loot box, but no information is given. I believe that this lack is actually an issue, yet it could be resolved in 24 hours if Electronic Arts would dedicate one page (considering it brings them $1.25 billion a quarter) to what is to be found in a loot box (rare, uncommon, common). The second part that I cannot answer (because I am not a soccer fan) is whether the game allows loot boxes to be earned through playing, and finally: can the game be played without loot boxes? It seems like such a small alteration to make, especially given the fuss that is being made now. Some additional facts can be seen in Rolling Stone magazine of all places. So now we get a fuss from several nations, nations that have been entirely open and accepting towards CCGs like Decipher’s Star Trek and Star Wars games, Magic the Gathering, The Lord of the Rings, My Little Pony, Harry Potter, Pokémon, and that list goes on for some time. In that regard, they are all gambling, and in my view, I feel certain that these so-called politicians and limelight seekers will do absolutely NOTHING to get anything done, because the cards are subject to VAT and the online stuff is lost taxable revenue. That is what I personally see as the foundation of a corrupt administration.

You see, the fact is that it is not gambling. You buy something that comes in three categories: rare, uncommon and common. You ALWAYS get the same mix of 1 rare, 3 uncommon and 5 common; which cards you get is not a given, it is random, but you will always get that mix. Let’s for example state that the loot box is $7: you get one $3 card, three $1 cards and five $0.20 cards, so how is that gambling? Electronic Arts, until they update their website with a precise definition, might be in waters that are a little warmer, but that can be fixed by the end of the day. Perhaps they do have such a page, but Google did not find it.
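That argument can be put in code form: the draw is random, but the value of every box is fixed. The card pool and the per-rarity prices below are hypothetical; only the 1/3/5 rarity split comes from the text above:

```python
import random

# Hypothetical card pool and per-rarity values; the 1 rare / 3 uncommon /
# 5 common composition is the booster model described in the text.
POOL = {"rare": [f"R{i}" for i in range(20)],
        "uncommon": [f"U{i}" for i in range(40)],
        "common": [f"C{i}" for i in range(60)]}
VALUE = {"rare": 3.00, "uncommon": 1.00, "common": 0.20}

def open_box():
    """Which cards you draw is random; the rarity mix never varies."""
    return (random.sample(POOL["rare"], 1)
            + random.sample(POOL["uncommon"], 3)
            + random.sample(POOL["common"], 5))

# Every box has the same total value: no jackpot, no bust.
box_value = 1 * VALUE["rare"] + 3 * VALUE["uncommon"] + 5 * VALUE["common"]
print(f"every box holds 9 cards worth ${box_value:.2f}")
```

There is no variable payout, which is the distinction being drawn with a casino: the randomness is in which $3 rare you get, never in whether you get one.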

In addition, Venture Beat gave us “EA will have to convince policymakers around the world that it is doing enough and that its mechanics are not the same as the kinds of games you’d find in a casino“, which is easy, as these policymakers did absolutely nothing to stop CCGs like Pokémon and My Little Pony (truly games for minors). So we can state that this was never about the loot box; it was about missed taxable revenue, a side that all the articles seem to have left in the dark.

The Guardian has one additional gem. With: “A bill introduced in Minnesota last month would prohibit the sale of video games with loot boxes to under-18s and require a severe warning: “This game contains a gambling-like mechanism that may promote the development of a gaming disorder that increases the risk of harmful mental or physical health effects, and may expose the user to significant financial risk.”” Here I am in the middle. I think that Americans are not that bright at times, a point of view supported by the image of paper cups with the text ‘Caution Hot’ to avoid liability if some idiot burns their mouth; we know that sanity is out of the window. Yet the idea that there should be a loot box warning is perhaps not the worst idea. I think that EA could get ahead of the curve by clearly stating, in a readable font size, that ‘no loot boxes are needed to play the game‘, which is actually a more apt statement (and a true one) for Shadow of War; with FIFA 18, I do not know. You see, this is a changed venue: when you can add a world-class player to your team, the equation changes. Yet does it make the game more or less enjoyable? If I play NHL with my Capitals team and I get to add Mario Lemieux and Wayne Gretzky, my chances of winning the Stanley Cup go up, yet is that a real win or is that cheating? That is of course the other side, the side that the game maker Ubisoft enabled in their Assassin’s Creed series: you could unlock weapons and gear for a mere $4. They clearly stated that the player would be able to unlock the options during the game, yet some people are not really gamers, merely players with a short attention span, and they want the hardware up front. Enter the Civil War with an Uzi and a Remington, to merely coin a setting. Are they gamers, or are they cheaters? It is a fair question and there is no real answer. Some say that the game allowed them to do this, which is fair, and some say you need to earn the kills you make.
We can approach it from any direction, yet when we are confronted with mere junkies spending $15,800 on top of a $69 game, we are confronted with people so stupid it makes me wonder how he got his wife pregnant in the first place. If the given debt of $15,800 is true then there should be a paper trail. In that regard I am all for a spending limit of perhaps $500 a month; a random number, but the fact that there is a limit on spending is not the worst idea. In the end you have to pay for the stuff, so having a barrier at that point could have imposed a limit on the spending. In addition, we can point at the quote “how I selfishly spent money I should have been using for their activities” and note how that is the response any junkie makes, ‘Oh! I am so sorry‘, especially after the junkie got his/her fix.

The Guardian gives us an additional and actually interesting side: “Hawaiian congressman Chris Lee said “are specifically designed to exploit and manipulate the addictive nature of human psychology”“; it is a fair point to make. Are ‘game completionists’ OCD people? Can the loot box be a vessel of wrongdoing? It might, yet that still does not make it gambling or illegal, which gets us to the Minnesota setting of a warning on the box. It is an interesting option and I think that most game makers would not oppose it, because you basically are not keeping loot boxes a secret, and that might be a fair call to make, as long as we are not going overboard with messages like “This game is a digital product, it requires a working computer to install and operate“, because at that point we have gone overboard again. This as a nice contrast against “In the Netherlands, meanwhile, lawmakers have said that at least four popular games contravene its gambling laws because items gleaned from loot box can be assigned value when they are traded in marketplaces“, which is another issue. You see, when you realise that “you can’t sell any digital content that you aren’t authorized to sell“, and as we also saw in Venture Beat, “While we forbid the transfer of items and in-game currency outside of the games, we also actively seek to eliminate that where it’s going on in an illegal environment“, we see a setting where we can leave it to the Dutch to cater to criminals on any average working day, making the lawmakers (from my personal point of view) slightly short-sighted.

So, in the end Mattha had a decent article, yet the foundation (the CCG games), the creators of the founding concept, was left outside the basket of consideration, which is a large booboo, especially when we realise that those games are still for sale in all these complaining countries and that, in that very same regard, they are not considered gambling. That sets the stage that this was never about gambling, but about several desperate EU nations, as well as the US mind you, realising that loot boxes are billions in close to non-taxable revenue. That is where the issue holds, and even as I do not disagree with the honourable men from both Hawaii and Minnesota, the larger group of policy players are all about the money (and the linked limelight), an issue equally left in the dark. There is one issue against Electronic Arts, yet they can fix it before the virtual ink on the web page has dried, so that issue will be non-existent soon enough.

It’s all in the game and this discussion will definitely be part of the E3 2018, it has reached too many governments not to do so. I reckon that on E3 Day Zero, EA and Ubisoft need to sit down in a quiet room with cold drinks and talk loot box tactics, in that regard they should invite Richard Garfield into their meeting as an executive consultant. He might give them a few pointers to up the profit whilst remaining totally fair to the gamers, a win-win for all I say! Well, not for the politicians and policy makers, but who cares about them? For those who do care about those people, I have a bridge for sale with a lovely view of Balmain Sydney, going cheap today only!



Filed under Finance, Gaming, IT, Law, Media, Politics

Grand Determination to Public Relation

It was given yesterday, but it started earlier; it has been going on for a little while now and some people are just not happy about it all. We see this with the setting ‘Facebook and Google targeted as first GDPR complaints filed‘; they are among the first companies targeted. It is a surprise that Microsoft didn’t make the first two in all this, so they will likely get a legal awakening coming Monday. When we see “Users have been forced into agreeing new terms of service, says EU consumer rights body”, under such a setting it is even more surprising that Microsoft did not make the cut (for now). So when we see: “the companies have forced users into agreeing to new terms of service; in breach of the requirement in the law that such consent should be freely given. Max Schrems, the chair of Noyb, said: “Facebook has even blocked accounts of users who have not given consent. In the end users only had the choice to delete the account or hit the agree button – that’s not a free choice, it more reminds of a North Korean election process.”“, which is one way of putting it. The GDPR is a monster comprised of well over 55,000 words, roughly 90 pages. The New York Times stated it best almost two weeks ago when they gave us “The G.D.P.R. will give Europeans the right to data portability (allowing people, for example, to take their data from one social network to another) and the right not to be subject to decisions based on automated data processing (prohibiting, for example, the use of an algorithm to reject applicants for jobs or loans). Advocates seem to believe that the new law could replace a corporate-controlled internet with a digital democracy. There’s just one problem: No one understands the G.D.P.R.

That is not a good setting; it tends to allow for ambiguity on a much higher level, and in light of privacy that has never been a good thing. So when we see “I learned that many scientists and data managers who will be subject to the law find it incomprehensible. They doubted that absolute compliance was even possible” we are introduced to the notion that our goose is truly cooked. When we dig deeper we get small issues like “GDPR makes its applicability very clear – it will apply to the processing of personal data by controllers and processors in the EU, regardless of whether the processing takes place in the EU or not“, and when we see “Consent must be clear and distinguishable from other matters and provided in an intelligible and easily accessible form, using clear and plain language. It must be as easy to withdraw consent as it is to give it” we tend to expect progress and a positive wave. So when we consider Article 21 paragraph 6, where we see: “Where personal data are processed for scientific or historical research purposes or statistical purposes pursuant to Article 89(1), the data subject, on grounds relating to his or her particular situation, shall have the right to object to processing of personal data concerning him or her, unless the processing is necessary for the performance of a task carried out for reasons of public interest“, which reflects on Article 89 paragraph 1, now we have ourselves a ballgame. You see, there is plenty of media that falls into that category, there is plenty of ‘Public Interest‘, yet when we take a look at that Article 89, we see: “Processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes, shall be subject to appropriate safeguards, in accordance with this Regulation, for the rights and freedoms of the data subject.“ So what exactly are ‘appropriate safeguards‘, who monitors them, and who decides what an appropriate safeguard is?
We also see “those safeguards shall ensure that technical and organisational measures are in place in particular in order to ensure respect for the principle of data minimisation“; you merely have to look at market research and data manipulation to see that not happening any day soon. Merely setting out demographics and their statistics makes minimisation an issue often enough. We get a partial answer in the final setting “Those measures may include pseudonymisation provided that those purposes can be fulfilled in that manner. Where those purposes can be fulfilled by further processing which does not permit or no longer permits the identification of data subjects, those purposes shall be fulfilled in that manner.” Yet pseudonymisation is not all it is cracked up to be. Consider the simple example of the NHS: as a patient is admitted to more than one hospital over a time period, that research is no longer reliable, as the same person would end up with multiple pseudonym numbers, making the process a lot less accurate (OK, I admit ‘a lot less‘ is overstated in this case, yet is that still the case when it is on another subject, like home-office travel analyses?). What happens when we see loyalty cards, membership cards and student card issues? At that point, anonymity is a lot less guaranteed. More important, we can accept that those firms will bend over backwards to do the right thing, yet at what stage is anonymisation expected, and what is the minimum degree here? Certainly not before the final reports are done; at that point, what happens when the computer gets hacked? What exactly was an adequate safeguard at that point?
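The multiple-pseudonym problem can be sketched in a few lines. The salted-hash scheme below is a hypothetical (though common) way to pseudonymise, and the NHS number is made up; the point is that two institutions with their own secrets produce two unlinkable identities for one patient:

```python
import hashlib

def pseudonym(patient_id: str, site_salt: str) -> str:
    """Hash the patient ID with a per-institution secret salt
    (a hypothetical but typical pseudonymisation scheme)."""
    return hashlib.sha256((site_salt + patient_id).encode()).hexdigest()[:12]

nhs_number = "943 476 5919"                       # made-up example number
at_site_a = pseudonym(nhs_number, "salt-hospital-a")
at_site_b = pseudonym(nhs_number, "salt-hospital-b")

# Same patient, two different research identities: the records no longer link.
print(at_site_a != at_site_b)
```

Each hospital's dataset is internally consistent (the same patient always maps to the same pseudonym at that site), yet cross-site research silently double-counts the person, which is the accuracy loss described above.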

Article 22 is even more fun to consider in light of banks. So when we see: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her“, consider a person applying for a bank loan: a person interacts and enters the data, and when the banker gets the results we no longer see an approved/denied but a scale, and the banker states ‘Under these conditions I do not see a loan as a viable option for you, I am so sorry to give you this bad news‘. So at what point was it a solely automated decision? Telling the story, or giving the story based on a credit score: where is it automated, and can that be proven?

But fear not, paragraph 2 gives us “is necessary for entering into, or performance of, a contract between the data subject and a data controller;”, like applying for a bank loan for example. So when is it an issue? When you are being profiled for a job? When exactly can it be proven that this is being done to you? And at what point will we see all companies reverting to the Apple approach? You no longer get a rejection, no! You merely are not the best fit at the present time.

Paragraph 2c of that article is even funnier. So when I see the exception “is based on the data subject’s explicit consent“, we get: ‘We cannot offer you the job until you have passed certain requirements that force us to make a few checks; to proceed with the job application, you will have to give your explicit consent. Are you willing to do that at this time?‘ When it is about a job, how many people will say no? I reckon the one extreme case is Dopey the dwarf not explicitly consenting to drug testing, for all the imaginable reasons.

And in all this, the NY Times is on my side, as we see “the regulation is intentionally ambiguous, representing a series of compromises. It promises to ease restrictions on data flows while allowing citizens to control their personal data, and to spur European economic growth while protecting the right to privacy. It skirts over possible differences between current and future technologies by using broad principles“. I do see a positive point: when this collapses (read: falls over might be a better term), when we see the EU having more and more issues trying to get global growth, the data restrictions could potentially set a level of discrimination between those inside and outside the EU. What do you think happens when EU people get a massive boost of options under LinkedIn and this setting is not allowed on a global scale? How long until we see another channel that remains open and non-ambiguous? I do not know the answer; I am merely posing the question. I don’t think that the GDPR is a bad thing; I merely think that clarity should have been at the core of it all, and that is the part that is missing. In the end the NY Times gives us a golden setting with “we need more research that looks carefully at how personal data is collected and by whom, and how those people make decisions about data protection. Policymakers should use such studies as a basis for developing empirically grounded, practical rules“. That makes perfect sense, and in that we could see the start; there is every chance that we will see a GDPR v2 no later than early 2019, before 5G hits the ground. At that point the GDPR could end up being a charter that is globally accepted, which makes up for all the flaws we see, or the flaws we think we see, at present.

The final part we see in Fortune. You see, even as we think we have cornered it with ‘AI Has a Big Privacy Problem and Europe’s New Data Protection Law Is About to Expose It‘, we need to take one step back: it is not about AI, it is about machine learning, which is not the same thing. With machine learning it is about big data. When we realise that “Big data challenges purpose limitation, data minimization and data retention–most people never get rid of it with big data,” said Edwards. “It challenges transparency and the notion of consent, since you can’t consent lawfully without knowing to what purposes you’re consenting… Algorithmic transparency means you can see how the decision is reached, but you can’t with [machine-learning] systems because it’s not rule-based software“, we get the first whiff of “When they collect personal data, companies have to say what it will be used for, and not use it for anything else“. So the criminal will not allow us to keep their personal data, so the system cannot act to create a profile to trap the fraud-driven individual, as there is no data to learn from when fraud is being committed; a real win for organised crime, even if I say so myself. In addition, there is the statement “If personal data is used to make automated decisions about people, companies must be able to explain the logic behind the decision-making process“, which comes close to a near impossibility. In an age where AI development uses machine learning to get there, the EU just pushed themselves out of the race, as they will not have any data to progress with. How is that for a Monday morning wakeup call?



Filed under IT, Law, Media, Politics, Science