Category Archives: Science

Ghost in the Deus Ex Machina

James Bridle is treating the readers of the Guardian to a spotlight piece. It is a fantastic article that you must read (at https://www.theguardian.com/books/2018/jun/15/rise-of-the-machines-has-technology-evolved-beyond-our-control-?). Even as it starts with "Technology is starting to behave in intelligent and unpredictable ways that even its creators don't understand. As machines increasingly shape global events, how can we regain control?" I am not certain that it is correct; it is merely a very valid point of view. This setting is being pushed even further by places like Microsoft Azure, Google Cloud and AWS; we are moving into new territories and the experts required have not been schooled yet. It is (as I personally see it) the consequence of next generation programming on the framework of cloud systems that have thousands of additional unused, or un-monitored, parameters (read: some of them mere properties), and the scope of these systems is growing. Each developer is making their own app-box and they are working together, yet in many cases hundreds of properties are ignored, giving us weird results. There is actually (prompted by the description James Bridle gives) an early 90's example, which is not the same, but it illustrates the event.

A program had window settings and sometimes there would be a ghost window. There was no explanation and no one could figure out why it happened, because it did not always happen, yet it could be replicated. In the end it turned out that the programmer had been lazy and had created a global variable with the same name as a visibility property, and due to a glitch that value got copied onto the property. When the system did a reset on the window, all but a few very specific properties were reset. You see, those elements were expected to be either 'true' or 'false', and that was not the case: those elements had the initial value of 'null', yet the reset would not allow for that, so once reset they would not return to the 'null' setting but would retain whatever value they last held (a minimal sketch of this reset behaviour follows below). It was fixed at some point, but the logic remains: a value could not return to 'null' unless specifically programmed to. Over time these systems got more intelligent and that issue did not return; such is the evolution of systems. Now it becomes a larger issue: we have systems that are better, larger and in some cases isolated. Yet, is that always the case? What happens when an error level surpasses two systems? Is that even possible? Now, most people will state that I do not know what I am talking about. Yet they forget that any system is merely as stupid as the maker allows it to be, so in 2010 Sha Li and Xiaoming Li from the Dept. of Electrical and Computer Engineering at the University of Delaware gave us 'Soft error propagation in floating-point programs', which gives us exactly that. You see, the abstract gives us "Recent studies have tried to address soft errors with error detection and correction techniques such as error correcting codes and redundant execution. However, these techniques come at a cost of additional storage or lower performance. In this paper, we present a different approach to address soft errors. We start from building a quantitative understanding of the error propagation in software and propose a systematic evaluation of the impact of bit flip caused by soft errors on floating-point operations". We can translate this into 'an option to deal with shoddy programming', which is not entirely wrong, but the essential truth is that hardware makers, OS designers and application makers all have their own error system, each of them has a much larger system than any of them requires, and some overlap and some do not. The issue is, speculatively, seen in 'these techniques come at a cost of additional storage or lower performance'; now consider the greed-driven makers that do not want to sacrifice storage and will not hand over performance, not one way, not the other, but want a system that tolerates either way. Yet this still has a level one setting (Cisco joke) that hardware is ruler, so the settings will remain, and it merely takes one third-party developer using some specific uncontrolled error hit, with automated assumption-driven slicing and dicing to avoid storage as well as performance loss; once given to the hardware, it will not forget, so now we have some speculative 'ghost in the machine', a mere collection of error settings and properties waiting to be interacted with.
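As promised, here is a minimal sketch of that reset behaviour (in Python purely for brevity; the names are illustrative and not taken from any real toolkit). It shows how a property that starts as 'null' cannot be restored by a reset routine that only writes concrete values, so a value that leaked in from a same-named global simply survives every reset:

```python
# Illustrative sketch only: a reset that cannot write 'null' back.
initial = {"visible": None, "width": 640, "height": 480}   # 'visible' starts as null
ghost_flag = True                                          # stray global sharing the property's role

def reset(window):
    # Flawed reset: it restores every property that has a concrete initial value,
    # but a 'null' initial value cannot be written back, so that property silently
    # keeps whatever it last held.
    for key, value in initial.items():
        if value is not None:
            window[key] = value

window = dict(initial)
window["visible"] = ghost_flag   # the glitch: the global gets copied onto the property
reset(window)
print(window["visible"])         # still True: the 'ghost window' keeps appearing
```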
Don't think that this is not in existence; the paper gives a light on this in part with: "some soft errors can be tolerated if the error in results is smaller than the intrinsic inaccuracy of floating-point representations or within a predefined range. We focus on analysing error propagation for floating-point arithmetic operations. Our approach is motivated by interval analysis. We model the rounding effect of floating-point numbers, which enable us to simulate and predict the error propagation for single floating-point arithmetic operations for specific soft errors. In other words, we model and simulate the relation between the bit flip rate, which is determined by soft errors in hardware, and the error of floating-point arithmetic operations". That relation between a bit flip and the resulting error I can illustrate with my earliest errors in programming (decades ago). With Borland C++ I got my first taste of programming and I was in assumption mode making my first calculation, which in the end gave me: 8/4=2.0000000000000003; at that point (1991) I had no clue about floating-point issues. I did not realise that this was merely the machine and me not giving it the right setting. So now we have all learned that part, yet we forget that all these new systems have their own quirks and hidden settings that we basically do not comprehend, as the systems are too new. This all interacts with an article in the Verge from January (at https://www.theverge.com/2018/1/17/16901126/google-cloud-ai-services-automl); the title 'Google's new cloud service lets you train your own AI tools, no coding knowledge required' is a bit of a giveaway. Even when we see: "Currently, only a handful of businesses in the world have access to the talent and budgets needed to fully appreciate the advancements of ML and AI. There's a very limited number of people that can create advanced machine learning models", it is not merely that part; behind it were the makers of the systems and the apps that allow you to interface, and that is where we see the hidden parts that will not be uncovered for perhaps years or decades. That is not a flaw from Google, or an error in their thinking. The mere realisation of 'a long road ahead if we want to bring AI to everyone' means that, in light of the better programmers, the clever people and the mere wildcards who turn 180 degrees in a one-way street, it cannot be predicted, and there will always be one who does so because they figured out a shortcut. Consider a sidestep.
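Before we take that sidestep, here is a minimal sketch of the relation the paper models: the link between a single bit flip and the size of the resulting floating-point error (Python purely for illustration; the bit positions are arbitrary examples of mine, not taken from the paper):

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit (0 = least significant) in the IEEE-754 double representation of x."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    return struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))[0]

x = 8.0 / 4.0                    # 8/4, stored as 2.0
for bit in (0, 30, 51, 62):      # low mantissa, mid mantissa, top mantissa, exponent
    y = flip_bit(x, bit)
    print(f"bit {bit:2d} flipped: {y!r:<24} error = {abs(y - x):.3e}")
```

The same flip costs next to nothing in a low mantissa bit and everything in an exponent bit, which is exactly why "some soft errors can be tolerated" and others cannot.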

A small sidestep

When we consider risk-based thinking and development, we tend to think in opposition, because it is not the issue of risk, or the given of opportunity. We start with the flaw that we see differently on what constitutes risk. Even as the makers all think the same, the users do not always behave that way. For this I need to go back to the late 80's, when I discovered that certain books in the Port of Rotterdam were cooked. No one had figured it out, but I recognised one part through my Merchant Naval education. It was the one rule no one looked at in those days; programmers just were not given that element. In a port there is one rule that computers could not comprehend in those days: the concept of 'Idle Time' cannot ever be a linear one. Once I saw that, I knew where to look. So when we get back to risk management issues, we see 'An opportunity is a possible action that can be taken, we need to decide. So this opportunity requires we decide on taking action and that risk is something that actions enable to become an actual event to occur but is ultimately outside of your direct control'. Now consider that risk changes with the tide at a seaport, but we forget that in opposition to a king tide there is also, at times, a neap tide. A 'supermoon' is an event that makes the low tide even lower. So now we see the risk of getting beached for up to 6 hours, because the element was forgotten. The fact that it can happen once every 18 months makes the risk low and it does not impact everyone everywhere, but that setting shows that the dangers (read: risks) of events are intensified when a clever person takes a shortcut. So when NASA gives us "The farthest point in this ellipse is called the apogee. Its closest point is the perigee. During every 27-day orbit around Earth, the Moon reaches both its apogee and perigee. Full moons can occur at any point along the Moon's elliptical path, but when a full moon occurs at or near the perigee, it looks slightly larger and brighter than a typical full moon. That's what the term "supermoon" refers to", we see that the programmer needed a space monkey (or tide tables), and when we consider the shortcut, he merely needed them once every 18 months; in the life cycle of a program that means he merely faced the risk 2-3 times during the lifespan of the application. So tell me, how many programmers would have taken the shortcut? Now, this is the setting we see in optional machine learning. With that part accepted, the pragmatic 'let's keep it simple for now' is something we all could have accepted in this. But the issue comes when we combine error flags with shortcuts.
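To make that shortcut tangible, here is a minimal sketch (made-up numbers, not a real tide model): a berthing check with the 'normal' low tide hard-coded is right almost every day of the application's life, and wrong on the one day the extreme low turns up.

```python
# Illustration only: invented numbers, not a real port or tide model.
NORMAL_LOW_TIDE_M = 0.6      # the hard-coded shortcut
PERIGEAN_LOW_TIDE_M = 0.1    # the rare extreme low, roughly once every 18 months

def safe_to_berth(draft_m, depth_at_chart_datum_m, low_tide_m=NORMAL_LOW_TIDE_M):
    """True if the keel stays clear of the bottom at low tide."""
    return depth_at_chart_datum_m + low_tide_m > draft_m

print(safe_to_berth(9.0, 8.5))                                   # True: the everyday answer
print(safe_to_berth(9.0, 8.5, low_tide_m=PERIGEAN_LOW_TIDE_M))   # False: the day the ship sits on the mud
```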

So we get to the Guardian with two parts. The first: "Something deeply weird is occurring within these massively accelerated, opaque markets. On 6 May 2010, the Dow Jones opened lower than the previous day, falling slowly over the next few hours in response to the debt crisis in Greece. But at 2.42pm, the index started to fall rapidly. In less than five minutes, more than 600 points were wiped off the market. At its lowest point, the index was nearly 1,000 points below the previous day's average", the second being "In the chaos of those 25 minutes, 2bn shares, worth $56bn, changed hands. Even more worryingly, many orders were executed at what the Securities Exchange Commission called "irrational prices": as low as a penny, or as high as $100,000. The event became known as the "flash crash", and it is still being investigated and argued over years later". In 8 years the algorithms and the systems have advanced and the original settings no longer exist. Yet the entire setting of error flagging and the use of elements and properties is still on the board; even as they evolved and the systems became stronger, new systems interacted with much faster and stronger hardware, changing the calculating events. So when we see "While traders might have played a longer game, the machines, faced with uncertainty, got out as quickly as possible", they were uncaught elements in a system that was truly clever (read: had more data to work with), and as we are introduced to "Among the various HFT programs, many had hard-coded sell points: prices at which they were programmed to sell their stocks immediately. As prices started to fall, groups of programs were triggered to sell at the same time. As each waypoint was passed, the subsequent price fall triggered another set of algorithms to automatically sell their stocks, producing a feedback effect", we get the mere realisation that the machine wins every time in a man versus machine setting, but only where the calculations are concerned. The initial part I mentioned regarding really low tides was ignored; where the person realises that at some point the tide goes back up, no matter what, the machine never learned that part, because the 'supermoon cycle' was avoided due to pragmatism, and we see that in the Guardian article with: 'Flash crashes are now a recognised feature of augmented markets, but are still poorly understood'. The reason remains speculative, but what if it is not the software? What if there is merely one set of definitions missing, because the human factor auto-corrects for that through insight and common sense? I can relate to that by setting the 'insight' that a supermoon happens perhaps once every 18 months and the common sense that it returns to normal within a day. Now, are we missing out on the opportunity of using a neap tide as an opportunity? It is merely an opportunity if another person fails to act on such a neap tide. Yet in finance it is not merely a neap tide, it is an optional artificial wave that can change the waves when one system triggers another, and in nanoseconds we have no way of predicting it, merely, over time, the option to recognise it at best (speculatively speaking).
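The hard-coded sell points in that quote are easy to caricature. Here is a toy sketch (invented numbers, in no way a market model) of the feedback effect, where each wave of selling pushes the price through the next group's trigger:

```python
# Toy illustration of cascading hard-coded sell points, not a market model.
price = 100.0
sell_points = [99.0, 97.0, 95.0, 92.0, 88.0]   # five groups of programs
impact_per_wave = 2.5                           # price drop caused by each wave of selling

price -= 1.5                                    # the initial, unrelated dip
triggered = set()
while True:
    wave = [p for p in sell_points if price <= p and p not in triggered]
    if not wave:
        break
    triggered.update(wave)
    price -= impact_per_wave * len(wave)        # selling drives the price further down
    print(f"price {price:6.2f}, programs triggered so far: {len(triggered)}")
```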

We see a variation of this in the Go-game part of the article. When we see "AlphaGo played a move that stunned Sedol, placing one of its stones on the far side of the board. "That's a very strange move," said one commentator", you see it opened us up to something else. So we get "AlphaGo's engineers developed its software by feeding a neural network millions of moves by expert Go players, and then getting it to play itself millions of times more, developing strategies that outstripped those of human players. But its own representation of those strategies is illegible: we can see the moves it made, but not how it decided to make them". That is where I personally see the flaw. You see, it did not decide, it merely played every variation possible, the ones a person will never consider, because it played millions of games; a million games at 2 games a day represents roughly 1,370 years. The computer 'learned' that the human never countered 'a weird move' before; some can be corrected for, but that one offers opportunity, whilst at the same time exposing its opponent to additional risks. Now it is merely a simple calculation and the human loses. And as every human player lacks the ability to play for a millennium, the hardware wins, always, after that. The computer never learned desire, or human time constraints; as long as it has energy it never stops.

The article is amazing and showed me a few things I only partially knew, and one I never knew. It is an eye opener in many ways, because we are at the dawn of what is advanced machine learning, and as soon as quantum computing is an actual reality we will get systems with the setting that we see in the Upsilon meson (Y). Leon Lederman discovered it in 1977, so now we have a particle that is not merely off or on; it can be: null, off, on or both. An essential setting for something that will be close to true AI, a new way for computers to truly surpass their makers and an optional tool to unlock the universe, or perhaps merely a clever way to integrate hardware and software on the same layer?

What I got from the article is the realisation that the entire IT industry is moving faster and faster and most people have no chance to stay up to date with it, even when we look at publications from 2 years ago. Those systems have already been surpassed by players like Google, reducing storage to a mere cent per gigabyte, and that is not all; media and entertainment are offered great leaps too. When we consider the partnership between Google and Teradici we see another path. When we see "By moving graphics workloads away from traditional workstations, many companies are beginning to realize that the cloud provides the security and flexibility that they're looking for", we might not see the scope of all this. So the article (at https://connect.teradici.com/blog/evolution-in-the-media-entertainment-industry-is-underway) gives us "Cloud Access Software allows Media and Entertainment companies to securely visualize and interact with media workloads from anywhere", which might be the 'big load', but it actually is not. This approach gives light to something not seen before. When we consider makers of software like Q Research Software and Tableau Software (business intelligence and analytics) we see an optional shift; under these conditions there is now a setting where a clever analyst with merely a netbook and a decent connection can set up the framework for producing dashboards and result presentations that would allow that analyst to produce the results and presentations for the bulk of all Fortune 500 companies in a mere day, making 62% of that workforce obsolete. In addition we see: "As demonstrated at the event, the benefits of moving to the cloud for Media & Entertainment companies are endless (enhanced security, superior remote user experience, etc.). And with today's ever-changing landscape, it's imperative to keep up. Google and Teradici are offering solutions that will not only help companies keep up with the evolution, but to excel and reap the benefits that cloud computing has to offer". I take it one step further: as the presentation to stakeholders and shareholders is about telling 'a story', the ability to do so and adjust the story on the go allows for a lot more. The question is no longer the setting up of such systems; it is no longer reduced to correctly vetting the data used, and the moment that falls away we will get a machine-driven presentation of settings the machine need no longer comprehend, and as long as the story is accepted and swallowed, we will not question the data. A mere presented grey scale with the extremes filtered out. In the end we all signed up for this and the status quo of big business remains stable and unchanging, no matter what the economy does in the short run.

Cognitive thinking from the AI through the use of data, merely because we can no longer catch up, and in that we lose the reasoning and comprehension of data at the high levels we should have.

I wonder as a technocrat how many victims we will create in this way.

 



Filed under Finance, IT, Media, Science

Commerce inverted

A decently intelligent salesperson educated me (some time ago) in the concept of think global, act local. It is something to live by for several reasons. It made perfect business sense, yet what I did not know at the time was that it came from the consideration towards the health of the entire planet: to take action in communities and cities. It comes from that 'sane' period of time when individuals were coming together to protect habitats and the organisms that live within them. It is what founded the events we now call grassroots efforts, which occur on a local level and are primarily run by volunteers and helpers. When we consider this in the business sense, we see that it asks employees to consider the global impact of their actions. It can be applied on a near universal scale and it is a setting of common sense as I see it. So why exactly is Microsoft doing the opposite of it, by acting global on a local way of thinking?

Now, they are not alone, but they are the most visible one, because that is how they played the game themselves. When you want to consider an eCommerce move, you need to consider what you are up against and adjust your model accordingly. So why exactly do they advertise the new game Shadow of the Tomb Raider for AU$144 and the digital download for AU$114, whilst the shops in Sydney are already offering it for AU$79 and a special edition for AU$89? How does a 42GB download (speculated size) become 44% more expensive than the Sydney shop price, whilst the actual physical copy from the Microsoft Store is stated to be up to 61% more expensive than even the Sydney special edition? So here we saw (all over the E3) 'pre-order it on the Microsoft store' being slightly too non-lucrative for anyone to ever consider it. Another (weaker) example is FIFA 19, where the download is a whole AU$2 cheaper than the physical copy. Yes, it seems to make perfect sense to sit through a 4-11 hour download to get that game AU$2 cheaper, does it not?
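For the sake of transparency, the markups quoted above follow directly from the listed prices (a quick check, nothing more):

```python
# Prices in AU$ as listed above.
sydney_retail, sydney_special = 79, 89
ms_digital, ms_physical = 114, 144

print(f"digital download vs Sydney retail: {ms_digital / sydney_retail - 1:.1%} more")                 # ~44.3%
print(f"Microsoft Store copy vs Sydney special edition: {ms_physical / sydney_special - 1:.1%} more")  # ~61.8%
```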

Now, in itself, I have no issues with the Microsoft Store; there are several perfect examples where the store comes with awesome deals, absolutely a given, but now, just after the E3, the new games are what counts and that is where we tend to look. OK, not everyone: I saw 'games coming soon' and the entry I was after was the anticipated game 'We Happy Few', so I wanted to take a look at what it would cost (and when it is released), and guess what, it wasn't even in there at all. It is just as deceptive as 'Play FIFA World Cup Free', whilst you are taken to the FIFA 18 game at AU$24 (which is a good deal) and somewhere in the text it says that it is an addition, a free DLC for anyone who has FIFA 18. So why not state 'Free FIFA 18 World Cup DLC'? It would clearly indicate that it is part of FIFA 18 and make clear that it comes as a DLC. None of that is seen, and Microsoft is not learning how to properly play the game: not treating gamers like kids, but like the savant controller users most of them are (and many of them are adults). Microsoft needs to up their game by a fair bit at present.

Oh, and before you think that this is all me, or that this is merely a one-off error: I first mentioned it in regards to Shadow of War on May 13th in the article 'It is done!' (at https://lawlordtobe.com/2018/05/13/it-is-done/). There the difference was 50%. Microsoft made no adjustments of any kind. Now, let's be clear, they are not required to do that, yet in light of the evidence, where buying from the Microsoft Store will regularly be well over 30% more expensive than a physical copy, why would we consider getting new games there unless we needed that title desperately? This gets us to the entire 'think local, act global'. When the question becomes 'we need x% margin', and when it comes from an overpriced place, the equation changes and logic goes out the window (as I personally see it).

So when we tally the issues that Phil Spencer has on his desk, we also feel sorry for the man. Not pity, mind you; I do not give a hoot about giving him pity, his income is likely in line with a Fortune 500 CEO, so he is laughing all the way to the bank on payday (every month), yet he does have an awful mess to clean up from the previous sceptre-wielding bosses, not a job I envy.

You see, these small matters are important. The gap with Nintendo is getting smaller and when you consider that Fortnite was downloaded 2 million times in the last 24 hours, you get to see the issue. These players will play en route to somewhere; it merely takes a glance for others who do not have a Switch to consider getting one at the earliest opportunity. The Fortnite clans are also growing the Nintendo Switch population, and cross-play gives these people options to get the Switch. The bad side for Microsoft is that these buyers get additional games, non-Xbox games, and that is where the hurt begins, because any gamer will initially get 3-4 games, so that takes an additional $300 away from both Sony and Microsoft. And that is not all: what kind of an impact do you think 120 million Fallout Shelter users can make? You see, part of this is that the top 10 of downloaded games has 5-7 titles with well over a million downloads, and those numbers rack up. Anyone with a passion for multiplayer gaming will not ignore millions of gamers, especially when it comes to half a dozen multiplayer-capable titles. The numbers start to add up at that point, so when we see such shifts Microsoft really cannot afford the issues seen in the Microsoft Store as they are at present, and it has been an issue for a long time. Their only positive side is that Sony made pretty much the same mistake from day one, so there is no competitive issue on that side for them.

That brings us to another side, which to some regard shows Microsoft marketing dropping the ball. To be honest, it took me by surprise as well. We got to see a film clip at the Bethesda show with a very special edition of Skyrim. We all laughed, yet the joke is on us, because as Business Insider (at https://www.businessinsider.com.au/skyrim-very-special-edition-amazon-echo-alexa-bethesda-video-2018-6) gives us, you can actually download the game for your Echo, and with Keegan-Michael Key on the sofa, why would you not think it was comedy? Yet when you look at Amazon (at https://www.amazon.com/dp/B07D6STSX8), you get the goods. So there is an actual Skyrim Very Special Edition on Amazon. The movie you can watch again (at https://www.youtube.com/watch?time_continue=1&v=FnEW6dX_BmU). When we read: "Fans have since uploaded videos of themselves playing "Skyrim: Very Special Edition."", we see that Bethesda marketing is creating waves in several fields, in several places, and in places where we never thought to look before. So as we keep on seeing 'the most powerful console in the world', there is a much larger need to adjust view and vision. Even as the hardware is slightly too flawed, the Game Pass, which I tend to call 'GamerPass', is something to work with. Anyone who has the intent of buying more than 2 games a year would be crazy not to get it, no matter the congestion, the hardware flaws and other matters. Game Pass is an almost certain game changer for Microsoft; it will give them time to clean up other matters and it will set the stage for more. So why am I not seeing Game Pass on YouTube and on web pages at least once a day? In the last 3 days I have seen nothing from Microsoft. Anthem, Fallout 76, Summerset, Fortnite and a few others all got their advertisement minute in (more than once I might add), but not Game Pass. Digital visibility is everything and Microsoft seems to be blindly staring at some surface (pun intended); how will that help Phil Spencer? I might not be pro-Microsoft, yet I remain pro-gaming no matter what format it is on. There lies the setting for both EA Access and Game Pass, to give but a simple example.

A non-existent example

When printing these 'credit card funds' to buy and enter a code on your console, why do places like GameStop, JB Hi-Fi and EB Games not have the Game Pass out in front? There seems to be an English version. Why can't we get a load-n-go Microsoft debit card to use for the Xbox for gamers? All simple implementations of systems that are already in the field, with additional account linking as well as additional download bonuses with every purchase (over a specific amount). If visibility is the essential need of any console, I am confronted with a personal belief that Microsoft Marketing is looking at the wrong surface, the surface of some tablet, not the surface of a 130 billion dollar a year industry. Does Microsoft want to matter or not? That should become the thought at the front of the mind of anyone who has one. I am getting pissed off and angry for the same reason I have been pissed off with Yves Guillemot (he apparently owns Ubisoft) for half a decade. He had an amazing IP and let it go to waste for years. We are starting to see the same thing here and it becomes a much larger field of where Microsoft needs to look. We can agree that to some extent Ubisoft is adjusting its trajectory (the last 2 years already); now we see Microsoft starting a similar spiralling downfall (from the gamers' point of view). Some things cannot be prevented, but a lot of them can be fixed and change the path for the future. It needs a visionary! The presentation showed that Phil Spencer has vision, but is it enough and is he fast enough to correct all the previous mistakes (not made by him)? That is the part I cannot tell at present. It is also unfair to confront him this strongly a mere two days after the E3, but he needs to recognise that the third period is starting and he has 2 goals against him, so he needs to get his star players on the ice and up against the teams that are slowing him down, even if it is his own Azure team dragging issues along (a 2014 issue). Now as the game changes, or better stated, as Microsoft wants to change the game, they need to be on the ball all the time and that does not seem to be the case (a personal observation). You cannot do this with a static shop 11% the size of an Apple store down the road (less than 100 metres down the road); you do it by creating engagement. You set the stage where everyone can game for an hour and feel the goods, to get the parents involved and show why the Game Pass is the solution; get to the mothers, showing how AU$120 per year gets them 100 games (valued at an average total of AU$7,500), and how that value increases year after year, especially in money saved from not buying games.

Get the 'Consider Game Pass' message on every digital download card you buy in store, at the post office and in the supermarket. Because parents see the ones in the post office and supermarket, these places can start engagement, a path that gives long-term visibility. In all honesty, I haven't seen any of that. Is it merely placement of product? If it is that important, I should see something like that twice a day, and not on my console; when I am there, I merely want to start the game I felt like playing.

Oh, and that is not merely my thought; Google has all these free advertising classes on learning to use their products, pretty much stating the same thing. The foundation of digital marketing seems to be missing. So when I get to the start page of a place like JB Hi-Fi (everyone in Australia knows that one), I care little for seeing 'Surface Pro' every time I get there; there is no mention at all of Game Pass. I can actually search 'Game Pass' and I get all kinds of passes, and of the 19 results linked to the Xbox One, not one is about Game Pass. That is the game! That is how you lose it, by merely not having visibility. Oh, and they are not alone; seeking it on Amazon gives you one option in the 'Currency & Subscription Cards, Subscription Cards' department. It is the 12 months Gold Live subscription. A mere example of how visibility is the key to forward momentum. Sony knows it, Nintendo definitely knows it, and it is time for Microsoft to wake up to the proper digital age. These examples are all clear pieces of evidence of inverted commerce in the digital age. I'll let you decide how many of those corporations stay afloat whilst making a living through applying inverted commerce; if you find one, ask them to send me a postcard.

Was that over the top?

 


Filed under Finance, Gaming, Media, Science

Why would we care?

New York is all at sixes and sevens, and even as they aren't really confused, some are not seeing the steps that follow, and at this point giving $65 billion for 21st Century Fox is not seen in the proper light. You see, Comcast has figured something out; it did so a little late (an assumption), but there is no replacement for experience, I reckon. Yet they are still on time to make the changes and it seems that this is the path they will be walking. So when we see 'Comcast launches $65bn bid to steal Murdoch's Fox away from Disney', there are actually two parties to consider. The first one is Disney. Do they realise what they are walking away from? Do they realise the value they are letting go? Perhaps they do and they have decided not to walk that path, which is perfectly valid. The second is the path that Comcast is implied to be walking on. Is it the path that they are planning to hike on, or are they merely setting the path for facilitation and selling it in 6-7 years for no less than 300% of what it is now? Both are perfectly valid steps and I wonder which trajectory is planned, because the shift is going to be massive.

To get to this, I will have to admit my own weakness here, because we all have filters and ignoring them is not only folly, it tends to be an anchor that never allows us to go forward. You see, in my view the bulk of the media is a collection of prostitutes. They cater in the first place to their shareholders, then their stakeholders and lastly their advertisers. After that, if there are no clashes, the audience is given consideration. That has been the cornerstone of the media for at least 15 years. Media revolves around circulation, revenue and visibility; whatever is left is 'pro' reader. This is why you see the public 'appeal' to be so emotionally smitten, because when it is about emotion, we look away, we ignore or we agree. That is the setting we all face. So when a step like this is taken, it will be about the shareholders, which grows when the proper stakeholders are found, which now leads to advertising and visibility. Yet, how is this a given and why does it matter? The bottom dollar will forever be profit. Now, from a business sense that is not something to argue with; this world can only work on the foundation of profit, we get that, yet newspapers and journalism should be about properly informing the people, and when did that stop? Nearly every paper claims investigative journalism; how much of it actually happens is the more interesting part. I personally believe that Andrew Jennings might be one of the last great investigative journalists. It is the other side of the coin that we see ignored, and it is the one that matters. The BBC (at https://www.bbc.co.uk/programmes/b06tkl9d) gives us: "Reporter Andrew Jennings has been investigating corruption in world football for the past 15 years"; the question we should ask is how long and how many parties have tried to stop this from becoming public, and how long did it take Andrew Jennings to finally win, and this is just ONE issue. How many do not see the light of day? We look at the Microsoft licensing corruption scandal and we think it is a small thing. It is not; it was a lot larger. Here I have a memory that I cannot prove; it was in the newspapers in the Netherlands. On one day there was a small piece regarding Buma/Stemra and the setting of accountancy reports on the overuse of Microsoft licences in government and municipality buildings, and something on large penalty fees (it would have been astronomical). Two days later another piece was given that the matter had been resolved. The question becomes: was it really? I believe that someone at Microsoft figured out that this was the one moment where, on a national level, a shift to Linux would have been a logical step, something Microsoft feared very, very much. Yet the papers were utterly silent on many levels and true investigation never took place, and after the second part some large emotional piece would have followed.

That is the issue that I have seen and we have all seen these events; we merely wiped them from our minds as other issues mattered more (which is valid). So I have no grate faith (pun intended) in the events of 'exposure' from the media. Here it is not about that part, but about the parts that are to come. Comcast has figured out a few things and 21st Century Fox is essential to that. To see that picture, we need to look at another one, so that it becomes a little more transparent. It also shows where IBM, Google, Apple and some telecom companies are tinkering now.

To see this we need to look at the first image and see what is there; it is all tag based, all data, and all via mobile and wireless communication. Consider these elements; over 90% of car owners will have them: 'Smart Mobility, Smart Parking and Traffic priority'. Now consider the people who are not homeless: 'Smart grids, Utility management, house management like smart fridges, smart TV and data based entertainment (Netflix)', and all those having smart house devices running on what is currently labelled as domotics; it adds up to megabytes of data per household per day. There will be a run on that data, from large supermarkets to Netflix providers. Now consider the mix between Comcast and 21st Century Fox: breaking news, new products and new solutions to issues you do not even realise exist, in matters of eHealth and road (traffic) management, whilst the EU set 5G Joint Declarations in 2015 with Japan, China, Korea and Brazil. The entire Neom setup in Saudi Arabia gives away that they will soon want to join all this, or whoever facilitates for the Middle East and Saudi Arabia will. In all this, with all this technology, America is not mentioned; is that not a little too strange? Consider that the given 5G vision is to provide 'Full commercial 5G infrastructure deployment after 2020' (expected 2020-2023).

With deployment to a population of 740 million people, and all that data, do you really think the US does not want a slice of data that is three times the American population? This is no longer about billions, this will be about trillions; data will become the new corporate and governmental currency and all the larger players want to be on board. So is Disney on the moral high path, or are the requirements just too far from their own business scope? It is perhaps a much older setting that we see when it is about consumer versus supplier. We all want to consume milk, yet most of us are not in a setting where we can be the supplier of milk; having a cow on the 14th floor of an apartment building tends to be not too realistic in the end. We might think that it is early days, yet systems like that require large funds and years to get properly set towards the right approach for deployment and implementation. In this, an American multinational mass media corporation would fit nicely in getting a chunk of that infrastructure resolved. Consider a news medium tagging all the watchers on data that passes them by and, more importantly, the data that they shy away from; it is a founding setting in growing a much larger AI, as every AI is founded on the data it has and, more important, the evolving data as interaction changes. In this, 5G will have close to 20 times the options that 4G has now, and in all this we will (for the most part) merely blindly accept data used, given and ignored. We saw this earlier this year when we learned that "Facebook's daily active user base in the U.S. and Canada fell for the first time ever in the fourth quarter, dropping to 184 million from 185 million in the previous quarter", yet in the quarter that followed the usage was back to 185 million users a day. So the people ended up being 'very' forgiving; it could be stated that they basically did not care. Knowing this setting, where the bump on the largest social media data owner was a mere 0.5405%, how is this path anything but a winning path with an optional foundation of trillions in revenue? There is no way that the US, India, Russia and the Commonwealth nations are not part of this. Perhaps not in some 5G Joint Declarations, but they are there, and the one thing Facebook clearly taught them was to be first, and that is what they are all fighting for. The question is who will set the stage by being ahead of schedule with the infrastructure in place, and as I see it, Comcast is making an initial open move to get into this field right and quick. Did you think that Google was merely opening 6 data centres, each one large enough to service the European population for close to 10 years? And from the Wall Street Journal we got: "Google's parent company Alphabet is eyeing up a partnership with one of the world's largest oil companies, Aramco, to aid in the erection of several data centres across the Middle Eastern kingdom"; if one should be large enough to service 2300% of the Saudi Arabian population for a decade, the word 'several' should have been a clear indication that this is about something a lot larger. Did no one catch up on that small little detail?

In that case, I have a lovely bridge for sale, going cheap at $25 million, with a great view of Balmain; first come, first served, and all responsibilities will be transferred to you, the new owner, at the moment of payment. #ASuckerIsBornEachMinute

Oh, and this is not me making some 'this evil Google' statement, because they are not. Microsoft, IBM and several others are all in that race; the AI is merely the front of something a lot larger. Especially when you realise that data in evolution (read: in real-time motion) is the foundation of its optional cognitive abilities. The data that is updated in real time, that is the missing gem, and 5G is the first setting where that becomes the set reality, where it all becomes feasible.

So why would we care? We might not, but we should care because we are the foundation of all that IP, and soon it will no longer be us. It gives value to the users and consumers, whilst those who are neither are no longer deemed of any value. That is not the distant future, it is the near future, and the founding steps for this becoming an actual reality are less than 60 months away.

In the end we might have merely cared too late, how is that for the obituary of any individual?

 


Filed under Finance, IT, Media, Science

Where we are in gaming

So the E3 is almost done. I saw the EA bit, and I was blown away by Bethesda, where they ended the presentation with 13.2 seconds announcing The Elder Scrolls VI. A mere teaser, but what a teaser; the crowd went insane on the spot (me included). I reckon that it will be a 2019 release and we will hear a lot more after the release of Fallout 76 later this year. When it comes to Fallout 76, it will be a lot bigger than ever before. It allows for single play, friends play and multiplay. That is merely the first part; the second part is that Fallout 76 is announced to be 4 times the size of Fallout 4, so any Bethesda fan expecting to be well rested by Christmas had better start buying stocks and options in Red Bull, as they will need it, and lots of it.

There was a lot more announced, most importantly a new free game called Blades, an Elder Scrolls version of Fallout Shelter, albeit a very different one. Bethesda went one step further in that the game is fully playable in both portrait and landscape mode; the view of the game made me desire an immediate update to my mobile (which is falling apart anyway). In addition, Fallout Shelter became available at that point for both PS4 and Nintendo Switch. So Bethesda is not sitting still, and a lot of it at no cost at all, showing a level of gamer care that we have not seen to this level before. Bethesda blew us away with the upcoming DLCs, updates and new games. After that it was time for Microsoft. I have had issues with Microsoft and they are still growing, yet the presentation given was really good. Phil Spencer knows his shit, and that of many other players in this field. He knows what it is about, and as we saw all kinds of 'world premieres', it relied to some degree on both Bethesda and Ubisoft to give some of the goods, but that was not all. I stated it before, I am not a racing fan, but Forza 7 blew me away, it was astounding to see, so I was not ready for what happened next. If Forza 7 is set as a 90%-91% game, the upcoming Forza Horizon 4 is getting us straight to the 100% mark. They really outdid themselves there. It is set in historical England, all of England, and if you think that Forza 7 had the goods, wait until you see seasons and weather set into the driving, with every place going through the 4 seasons; you will see something totally unique, and there is no doubt that if it holds up on the Xbox One X at 4K and 60fps, you are in for a treat. Even a non-racing fan like me can see that this is something totally new. There were also announcements of game studios and developers being acquired, as well as indie developers showing excellent products. Phil Spencer is making waves; he is not out of the woods, as he has to clean up the mess of two predecessors, so he has his work cut out for him. There was also a less nice part. In many cases they did not give any release date, merely 'pre-order it at the Microsoft store'. I personally believe that this is the Microsoft path, a path that is dangerous, and I have accused them of not being in consideration of gamers. There was more. You see, Microsoft is moving to take the shops out of the equation. They were doing it to some extent (poorly, I might add), yet now consider Game Pass: "Xbox Game Pass launched back in June, and provides access to more than 100 Xbox One and Xbox 360 games for $10.95 per month". Before you think that this is a lot, consider that you get access to 100 games, and with the announced NEW games we got word that it will include the next Halo, Gears of War and Forza on launch day. So that is a massive teaser, yet I am also scared of the intentions of Microsoft. I have seen this before. You see, TechAU gave away the speculated goods with "selling games is no longer an option. With console hard drive storage sizes increasing to 1-2TB, its possible we need to rethink game ownership completely. The big question will be the games available. If they're all games from 6-12 months ago, it may still be seen as a good opportunity to play a bunch of games you meant to buy, but never got around to it"; if only it were true, because they already dropped the ball twice on that one.
You see, I saw a similar play in the late 80's by the Evergreen Group; they had government backing and undercut the competition for years, and after that, when the bulk of the competition was gone, the prices went back up and they were close to the only player remaining. It seems that Microsoft is on a similar path, and when we saw the FastStart part I got a second jolt of worry, with the statement that they used machine learning to see how gamers play. This implies the profiling of all players, so when exactly did you as a gamer agree to that? You see, when this becomes personalised, it is not about the average player, this is about you as the individual player, and I personally believe that the push 'to pre-order at the Microsoft store' is not merely marketing, it is about pushing for online only and taking the shops out of the equation. It makes sense from a business point of view, yet you end up with only the IP they allow you to have, for whatever time you end up having it. I never signed up for that, even if we love the offering they give for now. When the shops can no longer support this model, what happens then? How will you feel in 5 years when your IP is based on a monthly rental? It is a dangerous part, and for now you think it does not matter, but it does. You see the earlier quote 'With console hard drive storage sizes increasing to 1-2TB', yet the Xbox One X is merely 1TB, so there is that already. Then we realise that the 1TB merely gets you 800GB (OS and other space reserved). Now consider that the previous Gears of War was 103.12GB, which implies that with one game installed you are down to less than 70%; now add Halo 5: Guardians (97.53GB) and Forza 7 (100GB). So, with only 3 games, 50% of the total drive space is gone (and those mentioned games were the largest ones).
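A quick sanity check on that storage claim, using the install sizes as quoted above (a sketch; actual sizes vary with patches and language packs):

```python
# Sizes in GB as quoted above; a 1TB console leaves roughly 800GB usable.
usable_gb = 800
installs = {"Gears of War": 103.12, "Halo 5: Guardians": 97.53, "Forza 7": 100.0}

remaining = usable_gb - sum(installs.values())
print(f"after {len(installs)} installs: {remaining:.0f} GB of {usable_gb} GB usable left")
print(f"that is about {remaining / 1000:.0%} of the nominal 1TB drive")
```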

So when I see the mention of space for 12 games, I wonder how correct it is. Now consider the announced games like Fallout 76, the Division 2 and Beyond Good and Evil 2, and wonder what will be left. People will wake up much too soon as they have to reorganise their console drives, way too early in 2018. Consider not just the games, but the patches as well. Now you start seeing the dangers you as a gamer face. The moment that 120 million gamers start working in an online setting (PS4, XB1 and Switch), how long until the telecom bandwidth prices go up? How affordable will gaming remain? For now it looks great, but the bandwidth fountain will be soured; the impact is not short term when it hits, and the impact will be too great to consider for now, whilst the telco companies have not even considered the dangers, only their options towards optional revenue. There is supporting evidence. In Australia, its fun-loving product called the NBN had 27,000 complaints last year alone. If the old rule holds that for every complaint 5 people did not bother to complain, we see a much larger issue. With issues like outages and slow data speeds, one number (source: ABC) gives us that at present the 160% growth in complaints 'equated to 1 per cent of the activated premises'; how is that to sit with whilst downloading 100GB for your Xbox One X, and that is merely Australia. In places like London, congestion will be a normal thing to worry about. So when we see "Julie Waites said her 85-year-old mother Patricia Alexander has been without a working phone at her Redcliffe home, north of Brisbane, since June when the NBN was connected in the area", which we see 4 months after the event, there is a much larger issue; Microsoft did not consider the global field, an error they made a few times before, and that is the setting that gamers face. So when your achievements are gone because for too long there was an internet issue, consider where your hard-earned achievements went off to. I am certain that it is not all Microsoft's fault, but its short-sighted actions in the past are now showing to become the drag on gaming.

The one part that Microsoft does care about is its connections to places like Bethesda and Ubisoft, who in their presentations showed themselves to be much larger players. We get that this is merely beta and engine stuff, but the presentation of the Division 2 rocked. I am not sure how the Crew 2 will do, but it looked awesome, and in addition the EA games looked as sweet as sports games can get on the Xbox One, so they have the goods. Phil Spencer is making waves and he is showing changes, but how trusting will this audience remain after a mere two incidents where gaming was not possible due to reasons not in the hands of Microsoft? Their support division stated last year that the data uploaded from my console (not by me) was all the responsibility of the internet provider; are you kidding me? Yet the games do look good, there is no denying that, yet their infrastructure might be the Achilles heel that they face in the coming year. There was also time for the upcoming AC game called Odyssey. It is very similar to Origin in looks. Graphically it is stunning. The view also shows that AC is changing; it has a much larger political impact in the story line and the changes you can make. It is a lot more RPG based than ever before, which as an RPG lover I very much appreciate, and the choice of a male or a female player is also a change for good; unlike AC Syndicate, the choice will be for the duration of the game, making it a much larger replayable challenge. The demo shows that there definitely are changes, some likely gamer requests, the rest seemingly a change to make the game more appealing regarding the play style you choose, but that part is speculation from my side. I would want to be cautious, yet they truly took the game to the next level with AC Origin, which makes me give them the benefit of the doubt. The setting that Ubisoft brings is much stronger than last year, so it could end up being a stellar year for Ubisoft. When we get to Sony, I become a little cautious. Yet, as we saw in the previous presentation, instead of merely presenting titles, having live music on stage performing the music from the games was a really nice touch. I do not know about you, the gamer, yet I have been more and more connected to the music as the quality of it has been on the rise, so seeing the performances was well appreciated. It might have started as early as ACII and Oblivion; now we see that good music is a much larger requirement in any game. A much darker The Last of Us 2 (if that was even possible) sets the stage for what is to come. Yet, even as we see awesome presentations of what is to come, I have to admit that Microsoft did have a better presentation. Sony is also playing the 'store' setting with the PlayStation Store for bonus options. The games are overwhelming and those are merely the exclusive titles. When we consider all that Ubisoft and Square Enix bring to the table, it shows to be a great year for all the PlayStation owners. Yet the overwhelming advantage that they have over Microsoft is not as large as you would think. The question becomes how heavy the overbearing advantage of the Last of Us 2, Ghost of Tsushima and Spiderman really is; when set opposite Forza Horizon 4, Halo Infinite and Sea of Thieves, I wonder if it remains a large advantage. Sony has more to offer, yet the overwhelming exclusive benefit is not really there.
So when we look at a new Resident Evil, actually a remade version of Resident Evil 2, we remain happy for the 'unhealthy' life-diminishing gaming treats that are offered; both consoles will be offering gaming goods we all desire. There is no doubt that gaming revenue will go through the roof, and it seems that we are in a setting where games are not just on the rise; the predictions are that they will grow the market in nearly every direction, and we still have to hear from Nintendo. You see, that one is important for both Microsoft and Sony. There is little doubt that they will surpass the Xbox One in total sales, yet now it becomes the setting where they might be able to pull this off as early as Thanksgiving, a setting Microsoft is not ready for; the 'most powerful console' will optionally get surpassed by the weakest one, as Microsoft has not kept its AAA game up for close to two years. Three simple changes could have prevented that, yet the view and setting of always online, GamerPass and storage destroyed it. The mere consideration of infrastructure was missed by Americans focused on local (US) infrastructure, forgetting that the optional 92.3% of the desired customer base lives outside of the USA. The simplest of considerations missed; how is that for a hilarious setting? Oh, and getting back to the Sony presentation: if you thought God of War surpassed your expectations, it seems (from the demo) that Spiderman is likely to equal if not surpass that event, so there is one issue that the others will have to deal with. The PlayStation players (Xbox One players too) will just have to wait and be overwhelmed with the number of excellent games coming their way before Christmas, because for both of them the list seems to be the largest list ever. I am posting this now and will perhaps update a few Nintendo settings later, as there are several revelations coming. GeekWire gives us in all this "Microsoft's Xbox One still faces an uphill climb vs. Sony and Nintendo", yet the article (at https://www.geekwire.com/2018/e3-2018-analysis-microsofts-xbox-one-still-faces-uphill-climb-vs-sony-nintendo/) misses out. You see, even if we are to agree with "Microsoft has effectively made its own console irrelevant, because even with the Windows Anywhere initiative, there's no particular reason for a dedicated enthusiast to own an "Xbone" if you already have a PC. There are certainly advantages, such as ease of use, simplicity of play, and couch gaming, but the same money you spend on the Xbox could be going to tune up your computer so you can play the same games in a higher resolution", we see the truth, but a wrong one. You see, 'the same money you spend on the Xbox could be going to tune up your computer' is not correct. We need to consider "you can find a large number of 3840×2160-resolution displays in the $300 to $500 range", as well as "For a better 4K experience, look to the $450 GeForce GTX 1070 Ti, $500 GeForce GTX 1080, and $500 Radeon RX Vega 64, though the Radeon card is still suffering from limited availability and inflated prices. These cards still won't hit a consistent 60 fps at 4K resolution in the most strenuous modern games", so you are down for a lot more than the price of the Xbox One X and still do not get the promised 60fps that the Xbox One X delivers. And that is before you realise that a TV tends to be 4 times the size of a PC display.
The biggest issue that has not been resolved is the mere stupidity of saving 6mm of space; that space allows for a 3TB Seagate BarraCuda, which would have diminished most other issues. Now merely evolve the operating system that requires people to be online all the time and Microsoft would have created an optional winning situation. It would not impact the need (or desire) for GamerPass and it would change the curve of obstruction by well over 70% overnight, all that when you consider that there is a $65 difference for 300% of the storage, something that the 4K community needs. Phil Spencer has one hell of a fight coming his way, and if he can counter the Microsoft stupidity shown up to now, he could potentially turn the upcoming number three position around in 2019, making Microsoft a contender again at some point. Yet if the short-sighted board of Microsoft is not willing to adhere to some views, they will lose a lot more than just a few hundred million in console development; they might lose a large customer population forever, because gamers hold a grudge like no other, and it is not merely the cost of the console, the games they bought might overtake that amount by close to 3:1, and once gone they will never ever return. That is the stage we see now, and even as there is a lot of improvement, where it matters no changes were made. So even as we should all acknowledge that Phil Spencer is a large change for the better, Microsoft needs to do more. They have the benefit that Sony gave a good show, yet not as good as Microsoft's. Perhaps the live presentations are the E3 part we all desire; the demos and previews were all great on both systems. In that regard Ubisoft and Bethesda both brought their home run to the E3 and they are well deserved ones. As both deliver to both consoles there were no losses on either side, only wins for both sides, yet that leaves the small devil in my brain considering the question: if Fallout 76 is 4 times the size of Fallout 4, which (according to Eurogamer) 'required 100GB install sizes as a minimum' for 4K, how much more will Fallout 76 need? It is in that light that we need to look at a 1TB drive, something I saw coming 5 years ago. So now, whoever buys a 1TB system will soon (too soon) stop being happy. That is one of the fights Phil Spencer will face soon enough, an issue that could have been prevented 6 years ago. It is so cool to see all these games coming, whilst we see a storage system supporting merely part of what comes, and that is before we see the network congestion as a few million people try to update their game and get access to the networking facilities. It was an issue that haunted Ubisoft with the initial Division in 2016. When we saw 'I'm still at work, had to stay overtime and I'm really salty because I might not even play today because of all this server downtime', I merely stated that they could have seen that one coming a mile away. Ubisoft upgraded everything and I do not expect to see this in the Division 2, yet consider that it is not merely one game. Consider every gamer getting issues when they want to access Gears 5 and Halo Infinite on launch day. That is the issue we could see coming and, in all honesty, in most cases it will not even be the fault of Microsoft at all. The evidence was seen in Australia merely a week ago when the ABC treated us to "NBN Co chief executive Bill Morrow suggested that "gamers predominantly" were to blame for the congestion across the National Broadband Network.
He later clarified that he wasn’t blaming gamers for congestion, but reiterated that they are “heavy users”“. That is the reality setting, where Counter-Strike: Global Offensive and Destiny 2, two games, account for 49% of the average hourly bandwidth usage; now add Fallout 76, Gears 5, the Division 2, EA Access and Microsoft Game Pass. You still think I am kidding? And that is merely Australia. Now add London congestion, and when we consider that some news sources give us “London, Singapore, Paris and New York taking top spots when we consider internet congestion“, I reckon that Europe has issues to a much larger extent. In addition, the Deutsche Welle gave us last January “A new report has found that only a small fraction of German users get the internet speeds that providers promise“, as well as “the problem is only getting worse“. That is the setting Microsoft is starting to push for and the gamers will not be enjoying the dangers that this will bring. Certain high-level non-thinkers at Microsoft are making this happen and now Phil Spencer will be faced with the mess that needs cleaning up. That is the part that many have been ignoring and it will hit Microsoft a lot harder, especially when it wants to move away from uphill battles, a sign that we cannot ignore; whilst the plan might be valid in 4-6 years, the shortage that the hardware and infrastructure gives at present will not be solved any day soon and that is counting against Microsoft. The impact will hit Nintendo as well, but not nearly as hard. The evidence is out there, yet some analysts seem to have taken it out of the equation. Is that not an interesting view that many ignored?
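And here is the promised back-of-envelope on the storage side, a minimal Python sketch using the Eurogamer minimum quoted above and this article’s own ‘four times the size’ speculation; the usable-space figure is my assumption for illustration, not a console specification.

# Back-of-envelope on install sizes versus a 1TB console drive, a sketch only.
# The 100GB figure is the Eurogamer minimum quoted above; the 4x multiplier is this
# article's own speculation; the usable-space figure is my assumption, not a spec.
fallout4_4k_gb = 100                 # minimum 4K install, per the Eurogamer quote
fallout76_gb = 4 * fallout4_4k_gb    # speculative "four times the size"
usable_1tb_gb = 780                  # assumed space left after OS and reserved storage

remaining = usable_1tb_gb - fallout76_gb
print(f"Speculated Fallout 76 install: {fallout76_gb} GB")
print(f"Space left on an assumed {usable_1tb_gb} GB usable drive: {remaining} GB")
print(f"That is roughly {remaining // 100} more games at 100 GB each")

On those assumptions, one speculative next-generation install eats close to half the drive, which is the whole 1TB argument in a nutshell.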

So we are moving forward in gaming, no one denies that, but overall, some cards (like always online) were played much too early and it will cost one player a hell of a lot more than they bargained for.

 


Filed under Gaming, IT, Media, Science

This bull and a red flag

We all have issues that tend to work like a red flag on a bull. We all have them; there is not one exception to that rule. Whether this is good or bad is not a given, it differs for everyone. In my case it seems to be Grenfell. The level of unacceptability, the sheer levels of incompetence that were clearly visible a mere 10 minutes into reading the facts, the evidence and the presented documentation make this entire situation beyond belief. So when I see ‘Fire brigade faces police inquiry over Grenfell ‘stay put’ order’, my nostrils start fuming steam, no kidding! Now, I get that the detectives have to investigate; it is not with them that I have the issue. I understand what needs to be done, yet my anger towards Det Supt Matt Bonner, who is leading the police investigation, will not subside soon. You see, I have seen an apartment block fire, well, one exactly. Across the street, early morning, I heard screaming, I saw smoke and then the window, frame and all, exploded outwards. We stayed put (except those in the burning apartment and their neighbours) and the fire was stopped soon thereafter. The point is that all the other tenants in the building were not underfoot for the fire brigade. It makes perfect sense: there was no immediate danger, so running outside when you are not in danger makes no sense. It was a nice old-fashioned building from just past WW2. The damage was limited to the apartment and the charcoaling of the stones and window frames of the people one floor up. That was the damage.

So when I see “whether the order could have breached health and safety law“, I am wondering whether Det Supt Matt Bonner is off his bloody rocker! OK, I get it, he has to do this, but certain parties signed off on the combustible cladding, and according to some sources in the inquiry it was also wrongfully installed. I think that focussing on the combustible side is a lot more important than wasting time on the Fire Brigade, who might not have been up to scratch on the information that the combustible cladding installed was meant for buildings up to 12 metres high according to the Reynobond PE brochure; it states it in there clearly. It also states two parts that should have set the fire hazard warning lights off in the heads of EVERY person directly involved in the decision-making process of what to install on the Grenfell tower, so that the buildings around it had a better view (I likely will never get over that part of the equation). These levels of failure were visible within the first hour, and yet the London Fire Brigade is treated to ‘the order could have breached health and safety law’; there is something utterly unacceptable about that. In all this, the council people involved, are any of them in jail, or getting their nuts roasted in a training fire? We will just tell them to stay put, the fire brigade will be there to save THEM after lunch!

I reckon that this has not happened yet!

I understand the job that Det Supt Matt Bonner has, so when he gives us “The LFB would, as any other organisation involved, have an obligation to conduct their activity in a manner that doesn’t place people at risk. It doesn’t mean that at the moment they have or they haven’t, but that’s where the legislation is most likely to arise if that was an eventuality“, I get that he is doing his job and it is not a nice job to have in this particular part of the entire track, but we all have those moments. Yet the setting that this is now pushed into the shackles of health and safety legislation, whilst the construction, unknown to the LFB at that moment, was pretty much an actual Roman candle, is not something they were aware of or signed up for. I cannot find the legislation that sets a proper scope for members of the Fire Brigade (I am not saying it does not exist, merely that I could not find it). Yet when I look at the Fire and Rescue Service Operational guidance [attached], we see a few parts (at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/5914/2124406.pdf). That document gave me the Fire and Rescue Services Act 2004. With that out of the way, we see (not in the act): “Fire and Rescue Authorities must make arrangements for obtaining necessary information for the purposes of: extinguishing fire and protecting lives and property from fires in its area (Section 7); rescuing and protecting people from harm from road traffic accidents in its area (Section 8)“. This is important, because when we go back to the timeline, we see: ‘Emergency services received the first report of the fire at 00:54’; it started on the fourth floor and the first fire brigade teams arrived 6 minutes later (source: the Guardian). The first thing we learn is that firefighters had put out the fire in the flat within minutes. When the crew were leaving the building, they spotted flames rising up the exterior of the building (source: the Independent), so (at https://www.independent.co.uk/news/uk/home-news/grenfell-tower-how-fire-spread-graphic-a7792661.html) we also see that the setting of stay put was sound; the initial fire was stopped, yet the flames had now gone from inside to outside (between the walls and the combustible cladding). At this point we get to ‘others were told by emergency services over the phone to put towels around doors and stay put until help arrived’, whilst those giving that advice were still informed on the one apartment, not the Roman candle scenario. So academically there is clear logic to the setting. The next part is actually important, more important than you realise. The setting is, in my personal opinion, that the fire brigade was in the dark on what they faced and the scope of what they faced at the scene. With “A man on the 17th floor, who left his flat at 1.15am, said the fire had reached his window by the time he got out of the building“, this implies that it took roughly 20 minutes for the fire to get from the 4th to the 17th floor. An utterly preposterous setting in any apartment building under normal conditions; even under less than optimal conditions this would never happen. We know that a room in any apartment can be ablaze in 3-5 minutes and, considering that, the apartment itself is not yet ‘all’ in danger. I personally saw the training video for my firefighting accreditation (it’s a Marine Rescue thing).
We also know that fire moves upwards, so even as the fire increases in speed and intensity, under normal conditions it would have taken 5 minutes for any fire to move from the fourth floor to the fifth floor, yet within 6 minutes of the first report the initial fire was under attack and stopped. So now you need to realise that the initial flat fire was out at roughly 01:01-01:03; when you then realise that it took a mere 12 minutes for the fire to grow from floor 4 to floor 17, that is the unnatural setting, it is pretty much unheard of. We can allow for the possibility that the fire was never fully stopped, but the initial knockdown would have reduced the heat, and flammable material becomes a factor too. The fact remains that this fire was now out of control and in the end there were 200 firefighters and 40 fire engines on the scene. A setting so large, I have never seen a force that large actively deployed on any one building in my life; these are merely a few elements in the setting that we should (respectfully mind you) hit Det Supt Matt Bonner over the head with. It is my personal belief that whoever signed off on the cladding, I do not care for what reason, needs to be arrested and should be kept in jail until the entire investigation is completed. You see, I covered it in my article ‘Under cover questions’ (at https://lawlordtobe.com/2017/06/23/under-cover-questions/), where I also added the Reynobond PE brochure. Yet Arconic, the original source, has now removed that brochure from their site; is that not interesting [attached]? Yet I kept a safe backup of the brochure, so we will have that. This gets me back to the page 5 information in the brochure: “It’s perfect for new and retrofit projects less than 40 feet (three stories) high“. Now it is important to realise that I am not attacking Arconic; the brochure gives clear light and it is probably a very nice and affordable upgrade solution for small office buildings and modern houses: 40 feet, 12 metres, 3 floors. It makes sense for those that do not have the funds and are basically willing to run only the smallest of risks. Grenfell was 800% larger and higher, and in that regard it becomes a much larger risk, and in equal regard that product should never have been selected for Grenfell. So who signed off on that part of the equation, because someone approved it. It is my belief that this person needs to get the 4th degree from Det Supt Matt Bonner, not the members of the London Fire Brigade (yes, he is only doing his job, I know!). That setting is still completely (read: largely) uncovered by the media at large. It is not about all the other parts, all the complications that the people behind the screens need so that they feel they can get away from it; it is the simple, clear part that is shown. Who signed off on the use of Reynobond PE for THIS building? It is, in my personal view, that simple.
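To make the timeline argument concrete, here is a minimal Python sketch using only the figures already given above (first report 00:54, crews on scene 6 minutes later, the 01:15 account from the 17th floor); the three-minute knockdown and the 5-minutes-per-floor rate are this article’s own rough estimates, not fire-engineering standards.

# A minimal timeline sketch using only the figures from the article above; the
# "within minutes" knockdown is assumed to be ~3 minutes and the normal-conditions
# spread rate is the article's own 5-minutes-per-floor estimate, not a standard.
first_report = 0 * 60 + 54            # 00:54, in minutes past midnight
crews_arrive = first_report + 6       # per the Guardian timeline
flat_fire_out = crews_arrive + 3      # "within minutes", assumed ~3 minutes
fire_at_17th = 1 * 60 + 15            # 01:15, the man on the 17th floor

floors_climbed = 17 - 4
elapsed = fire_at_17th - flat_fire_out
normal_expectation = floors_climbed * 5   # the article's normal-conditions rate

print(f"Observed: {floors_climbed} floors in about {elapsed} minutes")
print(f"Normal-conditions expectation: about {normal_expectation} minutes")

Twelve minutes observed against more than an hour expected is the gap that, to me, points straight at the cladding rather than at the stay put order.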

So when we see the one time when those exaggerated headlines from places like the Daily Mail are valid, we see ABC giving us (at http://www.abc.net.au/news/2017-06-20/firefighters-hold-back-tears-at-grenfell-tower-fire-memorial/8633348) the setting ‘Video reveals disbelief of firefighters heading into ‘Towering Inferno’’. So when you watch that video, also consider that these firefighters did not stop, they did not turn back, they all headed straight towards, and some into, a Roman candle. It might be a small miracle that none of the firefighters lost their lives. The video also shows that whilst the 39 fire engines were en route, one filmed the setting where the entire building was already engulfed in flames. So whilst we are hearing the focus on the ‘stay put’, a proven logical, rational and acceptable order for high rise buildings, we need to consider how this could have gone out of control in less than 20 minutes, a setting (as far as I know) never seen before. So as you can see, the setting on the cladding is clearly given with mere common sense. We need to accept that Det Supt Matt Bonner is doing his job, yet from my point of view, in the entire setting of looking at optional breaches of health and safety law, the London Fire Brigade is a lot lower on my list regarding the priority of who did what wrong; there are several parties much higher on that list and perhaps I would never have chosen to question the LFB at all. It might be the wrong call for several reasons and I accept that, yet given the clear setting that videos, photos and eyewitness accounts give us, I would merely call the LFB in to buy them a beer and congratulate them for not getting themselves killed whilst working right next to a 67 metre Roman candle for up to 60 hours. Even as the fire was under control after 24 hours, it took another day and a half to fully stop the fires; that is never ever a normal fire, a fact that should be made open and public to a lot of people in the hope that they get angry enough to ask a few elementary questions and make sure that those who signed off on it answer them in front of a dozen cameras and microphones.

So now we get back to the Fire and Rescue Services Act 2004, where we see in section 7 the part that I mentioned earlier, with one difference. You see, the Fire and Rescue Service Operational guidance is missing one small part. We can agree that it is not an issue for the guidance, but when we see in section 7 part one ‘A fire and rescue authority must make provision for the purpose of extinguishing fires in its area, and protecting life and property in the event of fires in its area’, we also need to see part 2 in all this. It is there where we see the smallest issue. We see: ‘In making provision under subsection (1) a fire and rescue authority must in particular secure the provision of the personnel, services and equipment necessary efficiently to meet all normal requirements’. There is more, but this already covers it with the setting of ‘normal requirements’. I hope we can all agree that there was nothing normal about the Grenfell tower fire. Should we bother to look at part (d), where we see ‘make arrangements for obtaining information needed for the purpose mentioned in subsection (1)’, as well as part (e), where we also see ‘make arrangements for ensuring that reasonable steps are taken to prevent or limit damage to property resulting from action taken for the purpose mentioned in subsection (1)’, we are shown that neither point would have been possible to adhere to, even with 39 fire engines and 250 London firefighters. None of them would have been alerted by anyone that they were dealing with combustible cladding; they would have realised it when they got there, but by then it was far too late to get anyone out alive. An abnormal setting in a place where normality seemingly was thrown out of any window when refurbishment choices were made, a view we get from the Guardian with “But fire-resistant cladding would have raised the cost for the whole building by an estimated £5,000“, a mere £70 per life lost. So when you follow the inquiry (at https://www.grenfelltowerinquiry.org.uk/evidence), I will be most curious to see what Arconic will have to say. You see, even as they (as far as I can tell) had done nothing wrong, the question remains whether the Arconic sales team knew all the facts on the sale of Reynobond PE; a building the size of Grenfell needs a lot of panels and when we consider the brochure, red flags should have appeared in the mind of the salesperson (optionally). When we do look at the opening statement document from Arconic, we get:

  1. The material supplied by the Company for use at Grenfell Tower comprised the following:

(a) Reynobond 55 PE 4mm Smoke Silver Metallic E9107S DG 5000 Washcoat — the Arconic order acknowledgements and associated CEP purchase orders confirm the total area of this product purchased for Grenfell Tower as 6586 m2 (note that this product was supplied in five different lengths and three different widths); and

(b) Reynobond 55 PE 4mm Pure White A9110S DG 5000 Washcoat — the Arconic order acknowledgement and associated CEP purchase order confirm the total area of this product purchased for Grenfell Tower was 180 m2.

  2. In 2015 the translucent ACM PE core was substituted with a carbon black core. This was achieved by adding a small amount of carbon black material to the existing core, which provided greater UV protection for the core at exposed panel edges. The change was not related to fire performance.

So, would carbon be an issue? Now, I am not a firefighter, so I am a little out of my depth here, yet when we look at the thermal conductivity of materials, we see:

MATERIAL                                                         CONDUCTIVITY (W/m·K)   DENSITY (g/cm3)
Aluminium                                                        210                    2.71
Graphite (pyrolytic, some planes)                                300-1500               1.3-1.95
Graphene (theoretical)                                           5020                   n/a
Carbon Nanotube (theoretical)                                    3500                   n/a
Carbon Fiber                                                     21-180                 1.78
High Modulus MP Mesophase Pitch Carbon Fiber (fiber direction)   500                    1.7

For the most part, heat conductivity goes up by a lot when carbon is introduced. I am not accusing Arconic of doing anything wrong, merely speculating that as UV protection went up, so did the heat conductivity (a clear assumption from my side at this point). The fact that this happened in 2015, long before the refurbishment, gives us an additional danger factor. Even as Reynobond PE was never an acceptable solution according to their own brochure, the fact remains that over 6500 square metres of the stuff was ordered; did no one question the maximum 12 metres part?
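To make the brochure mismatch tangible, here is a minimal Python sketch using only the figures already quoted above (the 12 metre / three storey brochure limit, the roughly 67 metre tower and the panel areas from the Arconic opening statement); the check itself is purely illustrative, not any real approval procedure.

# A small sketch of the brochure-limit mismatch, using the figures quoted above.
# The comparison is illustrative only, not a reconstruction of any real sign-off process.
brochure_limit_m = 12            # "less than 40 feet (three stories) high"
grenfell_height_m = 67
panel_area_m2 = 6586 + 180       # the two Reynobond 55 PE orders listed above

times_over_limit = grenfell_height_m / brochure_limit_m
print(f"Tower height is {times_over_limit:.1f} times the brochure limit")
print(f"Total Reynobond PE ordered: {panel_area_m2} m2")
if grenfell_height_m > brochure_limit_m:
    print("Any such check should have raised a red flag before sign-off")

A building more than five times over the stated limit, covered in close to 6,800 square metres of the product, is not a subtle edge case that a checklist could miss.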

So again we get to the part: who approved the installation of well over 6500 square metres of combustible material, turning a high rise building into a 67 metre Roman candle?

I might be the bull and Grenfell the red flag enraging me to the core, I accept that; I merely wonder why more people, apart from the families of the victims, are not equally enraged. Part of that makes no sense to me at all, because the next building might have you, your children, your grandchildren or other family members in it.

How would you feel then?

 

 


Filed under Law, Media, Politics, Science

The Sleeping Watchdog

Patrick Wintour, the Guardian’s diplomatic editor, gave us merely a few hours ago [update: yesterday, 13 minutes before an idiot with a bulldozer went through the fibre optic cable] the news on the OPCW. So when we see “a special two-day session in late June in response to Britain’s call to hand the body new powers to attribute responsibility for chemical weapons attacks“, what does that mean? You see, the setting is not complex, it should be smooth sailing, but is it?

Let’s take a look at the evidence, most of it from the Guardian. I raised issues as early as March 2018 with ‘The Red flags’ (at https://lawlordtobe.com/2018/03/27/the-red-flags/): we see no evidence on Russian handling, we see no evidence on the delivery, merely a rumour that ‘More than 130 people could have been exposed’ (‘could’ being the operative word) and in the end no fatalities, the target survived. Whilst a mere silenced 9mm solution from a person doing a favour for Russian businessman Sergey Yevgenyevich Naryshkin would have done the trick with no fuss at all, and in Russia you cannot even begin to perceive the line of Russians hoping to be owed a favour by Sergey Yevgenyevich Naryshkin. In addition, all these months later we still have not seen any conclusive evidence of ANY kind that it was a Russian state based event. Mere emotional speculations on ‘could’, ‘might be’ as well as ‘expected’. So where do we stand?

A little later, in April, we see in the article ‘Evidence by candlelight’ (at https://lawlordtobe.com/2018/04/04/evidence-by-candlelight/) the mere conclusion ‘Porton Down experts unable to verify precise source of novichok’, so not only could the experts not determine the source (the delivery device), it also gives weight to the lack of evidence that it was a Russian thing. Now, I am not saying that it was NOT Russia, we merely cannot prove that it was. In addition, I was able to find several references to a Russian case involving Ivan Kivelidi and Leonard Rink in 1995, whilst the so-called humongous expert named Vil Mirzayanov stated ““You need a very high-qualified professional scientist,” he continued. “Because it is dangerous stuff. Extremely dangerous. You can kill yourself. First of all you have to have a very good shield, a very particular container. And after that to weaponize it – weaponize it is impossible without high technical equipment. It’s impossible to imagine.”” I do not oppose that, because it sounds all reasonable and my extended brain cells on chemical weapons have not been downloaded yet (I am still on my first coffee). Yet in all this, the OPCW setting in 2013 was: “Regarding new toxic chemicals not listed in the Annex on Chemicals but which may nevertheless pose a risk to the Convention, the SAB makes reference to “Novichoks”. The name “Novichok” is used in a publication of a former Soviet scientist who reported investigating a new class of nerve agents suitable for use as binary chemical weapons. The SAB states that it has insufficient information to comment on the existence or properties of “Novichoks”“. I can accept that the OPCW is not fully up to speed, yet the information from 1995, 18 years earlier, was the setting: ““In 1995, a Russian banking magnate called Ivan Kivelidi and his secretary died from organ failure after being poisoned with a military grade toxin found on an office telephone. A closed trial found that his business partner had obtained the substance via intermediaries from an employee of a state chemical research institute known as GosNIIOKhT, which was involved in the development of Novichoks“, which we got from the Independent (at https://www.independent.co.uk/news/uk/crime/uk-russia-nerve-agent-attack-spy-poisoning-sergei-skripal-salisbury-accusations-evidence-explanation-a8258911.html). So when you realise these settings, we need to realise that the OPCW is flawed on a few levels. It is not the statement “the OPCW has found its methods under attack from Russia and other supporters of the Syrian regime“; the mere fact of what we see regarding Novichoks implies that the OPCW is a little out of their depth, and their own documentation implies this clearly (as seen in the previous blog articles). I attached one of them in the article ‘Something for the Silver Screen?’ (at https://lawlordtobe.com/2018/03/17/something-for-the-silver-screen/), so from a mere three months ago there have been several documents, all out in the open, that give light to a flawed OPCW. So even as we accept ‘chemist says non-state actor couldn’t carry out attack’, the fact that it did not result in fatalities gives us that it actually might be a non-state action; it might not be an action by any ‘friend’ of Sergey Yevgenyevich Naryshkin or Igor Valentinovich Korobov. These people cannot smile, not even in their official photos. No sense of humour at all, and they tend to be the people who have a very non-complimentary view of failure.
So we are confronted not merely with the danger of Novichoks, or with the fact that they are very likely in non-state hands. The fact that there is no defence, not the issue of the non-fatalities, but the fact that the source could not be determined, is the dangerous setting, and even as we hold nothing against Porton Down, the 18 year gap shown by the OPCW implies that the experts relied on by places like Porton Down are not available, which changes the landscape by a lot, whilst many will wonder how that matters. That evidence could be seen as important when we reconsider the chemical attacks in Syria on 21st August 2013; so not only did the US sit on their hands, it is now not entirely impossible that they did not have the skills at their disposal to get anything done. Even as a compound like Sarin is no longer really a mystery, the setting we saw then gives us the other part. The Associated Press giving us at the time “anonymous US intelligence officials as saying that the evidence presented in the report linking Assad to the attack was “not a slam dunk.”” is one part; the fact that with all the satellites looking there, there is no way to identify the actual culprit, is an important other part. You see, we could accept that the Syrian government was behind this, but there is no evidence, no irrefutable fact was ever given. That implies that when it comes to delivery systems, there is a clear gap, not merely for Novichoks, making the entire setting a lot less useful. In this, the website of the OPCW (at https://www.opcw.org/special-sections/syria-and-the-opcw/) is partial evidence. When we see “A total of 14 companies submitted bids to undertake this work and, following technical and commercial evaluation of the bids, the preferred bidders were announced on 14th February 2014. Contracts were signed with two companies – Ekokem Oy Ab from Finland, and Veolia Environmental Services Technical Solutions in the USA” in light of the timeline, it implies that there was no real setting in place and one was implemented after Ghouta; I find that part debatable and not reassuring. In addition, the fact finding mission was not set up until 2014. This is an issue, because one should have been set up on the 22nd August 2013; even as nothing would have been available and the status would have been idle (for very valid reasons), the fact that the mission was not set up until 2014 gives light to even longer delays. In addition, we see a part that carries no blame for the OPCW, the agreement “Decides further that the Secretariat shall: inspect not later than 30 days after the adoption of this decision, all facilities contained in the list referred to in paragraph 1(a) above;“, perfect legal (read: diplomacy driven) talk giving the user of those facilities 30 days to get rid of the evidence. Now, there is no blame on the OPCW in any way, yet were these places not monitored by satellites? Would the visibility of increased traffic and activities not have given light to the possible culprit in this all? And when we look at the paragraph 1(a) part and we see: “the location of all of its chemical weapons, chemical weapons storage facilities, chemical weapons production facilities, including mixing and filling facilities, and chemical weapons research and development facilities, providing specific geographic coordinates;“, is there not the decent chance (if the Syrian government was involved) that ‘all locations’ would be seen as ‘N-1’, with the actually used fabrication location conveniently missing from the list? #JustSaying

It seems to me that if this setting is to be more capable (professional is the wrong word) and effective, a very different setting is required. You see, that setting becomes very astute when we realise that non-state actors are currently on the table; the danger of a lone wolf getting creative is every bit as important to the equation. The OPCW seems to be in an ‘after the fact’ setting, whilst the intelligence community needs an expert that is supportive towards their own experts in a pro-active setting. Not merely the data mining part, but the option to see flagged chemicals that could be part of a binary toxic setting requires a different data scope, and here we see the dangers when we realise that the ‘after the fact’ setting, with an 18 year gap missing the danger, is something that is expensive and, equally, useless would be the wrong word, but ‘effective’ it is not; too much evidence points at that. For that we need to see that their mission statement is to ‘implement the provisions of the Chemical Weapons Convention (CWC) in order to achieve the OPCW’s vision of a world that is free of chemical weapons and of the threat of their use’, yet when we look at the CWC charter we see: ‘The Convention aims to eliminate an entire category of weapons of mass destruction by prohibiting the development, production, acquisition, stockpiling, retention, transfer or use of chemical weapons by States Parties. States Parties, in turn, must take the steps necessary to enforce that prohibition in respect of persons (natural or legal) within their jurisdiction‘, which requires a pro-active setting and that is definitely lacking from the OPCW, raising the issue whether their mandate is one of failure. That requires a very different scope, different budgets and above all a very different set of resources available to the OPCW, or whoever replaces the OPCW, because that part of the discussion is definitely not off the table for now. The Salisbury event and all the available data seem to point in that direction.
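To illustrate (and only illustrate) what a pro-active data scope could look like, here is a minimal Python sketch that scans hypothetical procurement records for combinations that are only a concern together; every name, pair and record in it is a placeholder I invented for the example, not a real watch-list, data source or OPCW procedure.

# Purely illustrative sketch of the "flagged combinations" idea described above.
# All identifiers below are invented placeholders; nothing here reflects real chemistry,
# real data feeds or any actual OPCW or intelligence process.
from itertools import combinations

# hypothetical watch-list of pairs that only matter in combination
flagged_pairs = {frozenset({"PRECURSOR_A", "PRECURSOR_B"}),
                 frozenset({"PRECURSOR_C", "PRECURSOR_D"})}

# hypothetical procurement records: buyer -> substances acquired over some window
procurement = {
    "buyer_001": {"PRECURSOR_A", "SOLVENT_X"},
    "buyer_002": {"PRECURSOR_A", "PRECURSOR_B", "SOLVENT_Y"},
}

for buyer, substances in procurement.items():
    for pair in combinations(sorted(substances), 2):
        if frozenset(pair) in flagged_pairs:
            print(f"flag: {buyer} acquired both {pair[0]} and {pair[1]}")

The point of the sketch is merely the shape of the work: flagging combinations before anything happens, instead of analysing residue 16 to 18 years after the underlying knowledge gap was first visible.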

 


Filed under Media, Politics, Science

The gaming E-War is here

The console operators are seeing the light. Even as it comes with some speculation from the writers (me included), we need to try and take a few things towards proper proportions. It is a sign of certain events and Microsoft is dropping the ball again. The CNet news (at https://www.cnet.com/news/xbox-big-fun-deals-e3-week-starts-june-7/) gives us “Microsoft’s big E3 sale on Xbox consoles, games starts June 7“, where we see “Save 50 percent or more on season passes, expansions and DLC and other add-ons“, which sounds good. Yet in opposition, some claim that as Microsoft has nothing really new to report (more correctly, much too little to report), they want to maximise sales now, hoping to prevent people from moving away from the Xbox. I do not completely agree. Even as the setting of no new games is not completely incorrect, the most expected new games tend not to get out in the first month after the E3 (they rarely do), so Microsoft trying to use the E3 to cash in on revenue is perfectly sound and business minded. Out with the old and in with the new, as some might say. Yet Microsoft has been dropping the ball again and again. As more and more people are experiencing the blatant stupidity in the way Microsoft deals with achievements, and now we see that these scores are too often unstable (I witnessed this myself), we see that there is a flaw in the system and it is growing. In addition, I found a flaw in several games where achievements were never recognised, implying that the flaw is a lot larger and had been going on for more than just a month or so. The one massive hit that the Xbox 360 created is now being nullified, because greed made Microsoft set what I refer to as ‘the harassment policy’ of ‘always online’; this is now backfiring, because it potentially drives people to the PlayStation, which fixed that approach 1-2 years ago (some might prefer the Nintendo Switch). Nintendo needs to fix their one year calendar issue fast before it starts biting them (if they have fixed it, you have my apologies).

Sony is not sitting still either, as Cnet reports (at https://www.cnet.com/news/sony-isnt-waiting-for-e3-2018-will-reveal-3-playstation-games-early/), with the quote “Starting Wednesday, June 6, the company will spoil one announcement each and every day for five days in a row. Sony is being tight-lipped about the details, but those announcements will include [censored]“. Yet getting back to Microsoft, they do need and should get recognition for “Up to 75% off select games including Monster Hunter: World, Sea of Thieves and PlayerUnknown’s Battlegrounds“. I admit that a game like Monster Hunter is an acquired taste, yet 75% off a 95%-rated game like Monster Hunter is just amazing and that game alone is worth buying the Xbox One X for. I only saw the PlayStation edition, yet the impression was as jaw-dropping as seeing the 4K edition of AC Origins, so not seriously considering that game at a 75% discount is just folly.

The issue is mainly what Microsoft is aiming for (and optionally not telling the gamers). They never made any secret of their desire for the cloud. I have nothing against the cloud, yet when I play games in single player mode, there is no real reason for the cloud (there really is not). So when I see that Microsoft bought GitHub for a little less than 10 billion, we should seriously consider that this will affect the Xbox One in the future, there is no way around it. Even as we see the Financial Times and the quote of optional consideration “Microsoft is a developer-first company, and by joining forces with GitHub we strengthen our commitment to developer freedom, openness and innovation,” a claim from CEO Satya Nadella, he can make all the claims he likes; when we consider that this is a setting of constant updates, upgrades and revisions, we see the possible setting where a gamer faces the hardship that the Oracle DBMs faced between versions 5 and 7. A possible nearly daily setting of checking libraries, updates and implementations against installed games. Yes, that is the real deal a gamer wants when he/she gets home! (Reminder: the previous part was highly speculative)

As we get presentations from the marketeers, those who brought us ‘the most powerful console on the market’, they are likely to bring slogans in the future like ‘games that are many times larger than the media can currently hold’, or perhaps ‘games with the option of bringing additions down the track without charge’, or my favourite, ‘games growing on every level, including smarter enemies’. All this requires updates and upgrades, yet the basic flaw of the Xbox needing extra drives, extra hardware and power points, whilst increasing the amount of downloads with every month such a system is running, is not what we signed up for, because at that point getting a gaming PC is probably the better solution. It is a business setting aimed at people who merely wanted to have fun. This is exactly the setting that puts the AU$450 PS4, AU$525 and AU$450 Nintendo Switch at the front of the mind of every gamer soon enough.

The elemental flaw that the system holds is becoming an issue for some, and when (or if) they decide to push to the cloud to that extent, the issues I give will only grow. Now, I will state that in a multiplayer environment a GitHub setting has the potential to be ground breaking, and the slogans I made fun of above could be the truly devastating settings that form an entirely new domain in multiplayer gaming. Yet we are not there yet and will not be for some time to come. Even as Ubisoft is getting better and they did truly push the edge with AC Origins, you only have to think back to The Division, the outages and connection issues. The moment that this hits your console for single player, that is the moment you learn the lesson too late. In a similar view we can point at the lessons we learned with Assassin’s Creed Unity, what I call clearly bad testing and perhaps a marketing push to get the game out too early ‘to satisfy shareholders’, whilst gamers paid AU$99 for a game needing a ‘mere’ patch, which was stated in the media in 2014 as: “The fourth patch for Assassin’s Creed: Unity arrived yesterday as a sizable 6.7 GB download. At least, that’s the case for non-Xbox One players; some players using the Microsoft console are facing 40 GB downloads for the patch“. Think of that nightmare hitting your console in the future, and with the cloud the issues actually become more dangerous if patches are not properly synced and tested. That was the fourth patch, and that was before 4K gaming became an option on consoles, which would have made the Unity download a speculated 80GB, over 10% of the available space of an empty Xbox One. Now, you must consider that such patches would be enormous on the PS4 Pro as well, and whilst Microsoft could have prevented 40% of the issues we faced well over a year ago, consider how you want your gamer life to be. Do you still feel happy at present?
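To show why that patch size matters in practice, here is a minimal Python sketch using the 40GB Unity patch quoted above and this article’s speculated 80GB 4K equivalent; the usable drive capacities and the 25 Mbps line speed are my own assumptions for illustration, not measured figures.

# A quick sketch of patch size versus storage and download time; figures marked
# "assumed" are my own illustrative assumptions, not specifications or measurements.
patch_gb = {"Unity patch 4 (Xbox One, 2014)": 40, "speculated 4K equivalent": 80}
drive_gb = {"500GB console (usable)": 365, "1TB console (usable)": 780}   # assumed usable space
line_mbps = 25                                                            # assumed line speed

for name, size in patch_gb.items():
    hours = size * 8 * 1000 / line_mbps / 3600
    print(f"{name}: {size} GB, about {hours:.1f} hours at {line_mbps} Mbps")
    for drive, capacity in drive_gb.items():
        print(f"  roughly {100 * size / capacity:.0f}% of a {drive}")

On those assumptions a single 80GB patch is an overnight download and a fifth of a smaller drive, and that is one patch for one game, before a few million people try to pull it on the same evening.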

Oh, and Sony is not out of the woods either. Even as some are really happy with the PS4 Pro, it must be clearly stated that there are enough issues with frame rates on several games, all requiring their own patch, which is not a great setting for Sony to face. Even as the new games are more than likely up to scratch and previously released games like Witcher 3 are still getting patches and upgrades, the fact that God of War had issues was not a great start; the game looked amazing on either system. Still, when it comes to fun, it seems that Nintendo has the jump on both Sony and Microsoft. The Splatoon 2 weapons update (lots more weapons) is just one of the settings that will entice the Nintendo fans not to put away their copy of Splatoon 2 any day soon. In addition, Amazon implied that Fallout 76 will be coming to the Nintendo Switch, which is a new setting for both Sony and Microsoft. Those imagining that this is a non-issue because of the graphics need to play Metroid Prime on a GameCube and watch it being twice the value that Halo one and two gave on an Xbox (with their much higher resolution graphics). The mistaken belief that high-res graphics are the solution to everything ignores how innovative gaming on a Nintendo outperforms ‘cool looking images’ every single time. Now that Bethesda is seeing the light, we could be in for a new age of Vault-Tec exploration, but that is merely my speculated view. That being said, the moment we see Metroid Prime 1 and 2, as well as Pikmin and Mario Sunshine on Switch, that will be the day that both my Xbox One and PS4 will be gathering dust for weeks. These games are that much more fun. I just do hope that it will not overlap with the release of some PS4 games I have been waiting for (like Spiderman), because that in equal measure implies that I need to forgo hours of essentially needed sleep. Mother Nature tends to be a bitch when it boils down to naturally needed solutions (I personally do not believe in a Red Bull life to play games).

So as we are in the last 4 days before the E3 begins, we are more and more confronted with speculation and anticipation. Cnet was good enough to focus on released facts, which is awesome at present. Yet we are all awaiting the news. That being said, the leaks this year have been a lot larger and the revealed information has been on overload too. It might be the first sign that the E3 events could be winding down. There had been noise on the grapevine a few weeks ago, yet I was not certain how reliable that information was. The leaks and pre-release information do imply that E3 is no longer the great secret basket to wait for as it was in previous years. We will know soon, so keep on gaming and no matter which console your heart belongs to, make sure you have fun gaming!

 


Filed under Gaming, IT, Media, Science