Tag Archives: NASA

Upping the game

Today started with a nice revelation: Microsoft has taken the sales offensive. Even as we were treated to “Bethesda’s online action role-playing game “Fallout 76” won’t be available on Valve’s Steam platform during beta or when it launches in November“, the story changes when we look at the PC games in the Microsoft Store, where we see: “Pre-order to get access to the Fallout 76 B.E.T.A.“, so it seems that Microsoft is setting the bar really high; in addition, the game is equally available on launch day for those who have the Microsoft Game Pass. The Game Pass is an $11 a month solution (in Australia); you get no option to buy the pass for a year (as far as I could tell), which is a drag, and you had better have the download capacity (not to mention the storage) before you commit to it, but there is no denying that it is a deal that looks almost too good to be true. Microsoft even offers a 14 day free trial, which implies that the games are only available to play as long as you are a member (this is speculation!), not unlike the PS Plus setting. The pass covers Xbox One and backward compatible Xbox 360 games and it is a HUGE list. The inclusion of upcoming top games still to be released this year makes the Game Pass an essential choice. The Pass (at roughly $130 for a year) will include well over $600 of AAA+ top games yet to be released in 2018. So apart from the download hassle, the pass represents hundreds of dollars of savings in this year alone. I personally believe that they messed up some of the visibility and marketing, but that was their choice. The smaller issue is the backward thinking cap of the US: for people outside of the US (Australia, for example), games bought outright are roughly 28% more expensive (and that is after I corrected for the exchange rate). There are also ‘shadows’ here. I do not believe they are deal breakers, but they exist. For example, one source gave me “These eleven Xbox Game Pass games are “leaving soon”“; I cannot tell whether they will also be removed if you have added them to your library (so check this when you decide). The second shadow needs to be mentioned, as the quote was: “Personally, paying for the Xbox Game Pass program and Xbox Live Gold is quite a monthly cost“, which is ABSOLUTELY BOGUS! Xbox Live is a service subscription to play multiplayer, so if a Game Pass title has multiplayer, then yes, you will need Xbox Live as you always would have needed it. For the single-player part it is not needed, just as the setting is today. In addition, ‘quite a monthly cost‘ is silly to say the least; even on a budget, the setting is that Xbox Live at $80 and Game Pass at $130 gives us full and complete access to $12,000 worth of games for $210 a year, and anyone debating whether that is expensive needs to get their head examined. Now, there is no way that you will like all the games; that would be silly. Yet the setting now allows you to try games at no extra cost that you would never have bought in the first place, a setting where you can grow the games dimension that you are in. I believe that to be a really great setting. The part not mentioned is of course the download time and the subscription fees for the internet; even as those prices have been going down, or better stated giving you more download at the same price, it is a cost you need to consider. Yet a setting where you get access to $12,000 in games, which represents more than I have ever bought in a lifetime across the PS3, PS4, Xbox 360 and Xbox One together, is an astounding part you must remember.
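As a quick back-of-envelope check of the arithmetic above, here is a minimal sketch, assuming the quoted Australian prices of roughly $80 a year for Xbox Live Gold and roughly $130 a year for Game Pass, against the roughly $12,000 of catalogue value mentioned (all figures are the ones claimed in the text, not verified prices):

```python
# Back-of-envelope check of the yearly cost claim above.
# Assumptions (from the text, Australian prices): Xbox Live Gold ~$80/year,
# Game Pass ~$130/year, catalogue value ~$12,000.
xbox_live_gold = 80
game_pass_year = 130
catalogue_value = 12_000

yearly_total = xbox_live_gold + game_pass_year
print(f"Yearly outlay: ${yearly_total}")                                      # $210
print(f"Monthly outlay: ${yearly_total / 12:.2f}")                            # ~$17.50 a month
print(f"Outlay per dollar of catalogue: ${yearly_total / catalogue_value:.4f}")  # ~$0.0175
```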
If only Microsoft had thought that hard drive issue through in 2012, things would be even better for them. I still see that as the one Achilles heel in all this, yet with the rumoured new console (Project Scarlett) announced for 2020, we do know that Game Pass is a long term setting of gaming for Microsoft, and whatever shape that console takes, it will be optimised for the billions that Game Pass will bring in. In all this we might ramble too early on the storage issue, but it is an issue Microsoft knowingly and willingly ignored, and in all this ‘the most powerful console in the world‘ is impacted by it. In addition, I have had the longest issue with Microsoft marketing (for various reasons), so when Microsoft states in Windows Central: “Xbox Scarlett hardware will ‘set the benchmark’“, I tend to get nervous. You see, they have no idea (well, some idea) of how gaming evolves, and in the end we will not know what will be available by 2022, so at that point any console will be merely on par, 14 months after it is bought. I moved to console gaming when upgrading a PC in 2002 went overboard. Processor and graphics card prices showed that you would need $2500-$3000 to be up to date for high end gaming, and that got you roughly 24 months at best. So gaming at an additional $200 a month, as well as updating drivers, patching and whatever else was needed, made me move more and more towards consoles, and the Xbox 360 delivered perfectly for almost 8 years (at $700), so the cost of living was set by the games bought, not by the additional cost of upgrading the hardware to play games. An awesome setting. Yes, there was the one-off for the hard drive (from 20 to 120 gigabytes, at $119 at that time), but it was money well spent. In the end I bought 2 Xbox 360s; the second one was essential as I got another red ring of death 75 hours before the release of Fallout New Vegas, so I went: “Eff That!” and got the one with the 250 GB drive, and it still works. So apart from one high blood pressure event, the Xbox 360 was a golden choice for any gamer. I also had the PS3, which had the option to upgrade the drive, as the PS4 has, so the entire hard drive issue has been out there for 12 years; ignoring that part (as well as the always-online bullying) angers me, because there was never any need for any of it.

Why does it matter?

It is a level of orchestration, pushing people into a direction before they are ready (and perhaps they never will be). In this, Cambridge Analytica is a larger hurdle than anyone imagined, and gamers are sketchy under the most stable of conditions. Hackers, phishers, cheaters and trolls are always around the corner, and it is best seen when you investigate ‘League of Legends’. I never played the game, but the number of messages making clear that the victims of bullying and trolls are worse off than the perpetrators is why there should be an online ‘off’ switch. It is essential because the resources needed are allegedly not used correctly (debatable whether that would have been possible), and the systems do not have the settings to protect players. The option to just play offline for a while is perhaps the only pressure valve that works (not on all games though), so when we look at MailGuard and what it reports in regard to Office 365 (just one day old): “The cunning thing about this phishing scam is that once the victim has entered their username and password, the fake login page redirects them to a genuine Microsoft website, so they think that nothing is amiss. Meanwhile, the criminals have collected their login credentials and are able to steal their online identity for all kinds of nefarious purposes, like fraud, invoice falsification and malware spamming“, Microsoft needs to realise that they have a larger issue and they cannot fix it (basically no one can). Well, it is possible: some of the kids involved have been identified, and by shooting them in the back of the head and leaving a message with the parents to start taking notice of what their kids are doing you get change, although some might find that a bit extreme (an issue that is probably in the eye of the beholder).

Why the extreme example?

The issue is not merely being online; the issue is that too much is online, and even if we wanted to apply Common Cyber Sense all the time, there will be a lapse, and when it comes, it will be at the wrong moment in the wrong place. At present the actual success rate of finding and convicting cyber criminals is less than 2%, and it is even lower when we realise that not everything gets reported. It is in that atmosphere that game streaming is about to be set to a much larger extent. A setting that is based on mere authentication and not on non-repudiation (uncertain how achievable that is at present). Show me a company that guarantees you 100% safety and I will introduce you to someone who is lying to you. Even though the gaming industry is a $100 billion plus market, the issue was forever that gaming was low impact (for the most part): people more often than not had a physical copy, and there were more and more hurdles that one had to overcome, so for cyber criminals it was not an interesting market. Yet with the upcoming changes to the gaming environment that changes: all is online, all is set on central servers, and that is when blackmailware and ransomware will become a much more lucrative business for those targeting gamers. Even if you think it does not happen, what happens when your online account gets scrambled, your passwords are changed from the outside, and for a mere 0.01 bitcoin you can get it back? Systems like that are already used; some will consider that paying $88 is preferable to waiting and losing scores, statistics and access to files with the logs of hundreds of hours of playing a game. When you see the time some have invested in games like Diablo 3, Skyrim, Fallout 4 and now the upcoming Fallout 76, you get the setting where ransom might be successful. And the setting of ‘always online’ makes the threat to console gamers a lot more realistic. You merely have to google the issues around League of Legends and World of Warcraft to see the impact, and it is much larger than some think it is. You may think it is simple, an adult thing to live with, yet when Microsoft has to explain that danger 250,000 times to the non-technological mother and father of a 16 year old who is playing and suddenly loses all access, perhaps being permabanned in the process as well, at that point the game changes quickly.

Having a decent non-repudiation solution in place might limit the damage to a larger extent, but that system does not exist for gamers; there is mere authentication, and even when upgrading, the issue is not the 100 who do, it is the 15,000,000 who haven’t. This is part of the setting that Microsoft faces, and it is facing it on a daily basis with Microsoft 365, where the users are (for the most part) adults, so when we get to the console it becomes a different setting. This is why the console evolution is a little more treacherous. When the gamer has the option to remain offline (when needed) he/she has options; when forced online, those options fall away. Sony got hacked a few times (at least twice), with millions of accounts and their details in the open; the damage was larger than some expected, and I reckon that most damage was avoided because the overwhelming majority of gamers had physical copies of their games. So offline gaming was never impacted; merely the multiplayer gamers lost a few days of access.
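On the authentication versus non-repudiation point above, a minimal sketch of the difference: a shared secret proves that someone knew the key, while non-repudiation requires a signature only the account holder could have produced. The HMAC part uses only the Python standard library; the signature part assumes the third-party `cryptography` package, and the message and key names are purely illustrative, not any existing gaming API:

```python
# Authentication vs non-repudiation, in miniature.
import hashlib
import hmac

# Third-party: pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

message = b"restore access to my save games"

# 1) Authentication with a shared secret: the server can verify the tag, but since
#    the server also knows the key it could have produced the tag itself, so this
#    proves knowledge of the key, not which party actually sent the message.
shared_key = b"example-shared-secret"
tag = hmac.new(shared_key, message, hashlib.sha256).digest()
ok = hmac.compare_digest(tag, hmac.new(shared_key, message, hashlib.sha256).digest())
print("authenticated:", ok)

# 2) Non-repudiation with an asymmetric signature: only the holder of the private
#    key can sign, so a valid signature cannot later be plausibly denied.
private_key = Ed25519PrivateKey.generate()   # stays with the player
public_key = private_key.public_key()        # shared with the service
signature = private_key.sign(message)
public_key.verify(signature, message)        # raises InvalidSignature if forged or altered
print("signature verified")
```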

Now, with Game Pass that would not be an issue, and the optional overall damage of $210 (two subscriptions) is easily tended to; in the worst case scenario you pay for it twice, and a few weeks later it is either refunded, or you are all paid up for an extra year.

Now, let’s change the setting to what Business Insider gave us one month ago. With ‘A desperate hacker tried selling US military files for $150 — only to find no one wanted them‘ (at https://www.businessinsider.com.au/hacker-us-military-drone-files-for-sale-2018-7), this seems hilarious until you consider the following facts. The first one is “The hacker, who is believed to reside in a poverty-stricken country in South America, said his internet connection was slow and that because his bandwidth was limited, he did not download all the files prior to finding a willing buyer“, so it is in a low yield place. The second one is “The hacker also tapped into live footage of surveillance cameras at the US-Mexico border and NASA bases, and an MQ-1 Predator flying over the Gulf of Mexico“; we still have a sense of humour, live camera watching! Yay! Now we add “the vulnerable computers were taken offline, which inadvertently cut off the hacker’s access to the files“; OK, it happens, sometimes a computer has a missed security patch. Now we add ‘a maintenance manual for the MQ-9A Reaper drone, a list of airmen assigned to a Reaper drone unit, manuals on how to suppress improvised explosive devices‘; it seems harmless, right? Yet when you consider that this was a professional setting where the person had access to “documents belonging to a US Air Force service member stationed at the Creech Air Force Base in Nevada, and documents belonging to another service member believed to be in the US Army“, we see a setting where military security was circumvented, from a close to powerless place, into military hardware. So when we are confronted with “enough knowledge to realise the potential of a very simple vulnerability and use it consistently“, we see the first part; the second part was given with “The Netgear router vulnerability, which dates back to 2016, allowed hackers to access private files remotely if a user’s password is outdated. Despite several firmware updates and countless news articles on the subject, thousands of routers remain vulnerable“. This is a setting involving adults (one would hope) who cannot get their heads right, and you are submitting teenagers and gamers (in a non-professional setting) to those same exploits. Microsoft can market all it wants, and to some extent they can fix some parts, but the ‘always online‘ will still be out there, and that is where the damage gets to the people.

The prosecution fail rate makes it cool and interesting to go after gamers, and the many hours of having to download games will at some point present an opening for hackers; that market is growing and it will hit gamers, with close to 0% chance of avoiding it.

The question becomes: how ready will Microsoft be? How many resources will be needed for their customer care and customer service when it hits? The Xbox 360 gave them the red ring of death issue (which, when it happened to me, was fixed admirably; it merely took 3-4 weeks), which is acceptable, as a new console was shipped to me. When the problem is in cyberspace, the game changes, as a million accounts could be affected. Some hackers will be creative and resort to a low corruption setting (like the dBase virus), some will merely download and wipe; the fact is that even if it is resolved, it will take time to resolve, and that is where gamers lose patience really, really fast. My decision to buy another console to fix it is one example (I had the funds when it happened), yet what happens when you are in the middle of a Diablo 3 season, which is time constrained, and someone ransoms your access? In the current setting the damage is partially avoidable; the new Scarlett setting leaves even that partial part up for debate. In addition, as the number of people resorting to that path increases, messing with that part becomes a lot more interesting to cyber criminals.

In this we need to look at the other side too. The Australian Criminal Intelligence Commission (ACIC) gives us “cybercrime is costing the Australian economy up to $1 billion annually in direct costs alone“; when we look globally, we see Experian with the quote: “Ransomware attacks, data breaches, theft of intellectual property, sales of counterfeit goods and other illicit activities are generating at least $1.5 trillion in annual revenue“. So globally, when gamers are added to that list of victims, how high will that priority be? Do you think that they get prime time consideration, or will the party line become ‘the best and easiest thing to do is to just start again‘? I was told that by Microsoft when my Xbox One profile got somehow damaged in the first year. Now try that setting with access, invested cash and time, and tenfold the amount of open targets. From my personal point of view, when an Office 365 incident is set against wiped Xbox Scarlett accounts, how many resources will Microsoft have? I am certain that the business customers get first dibs on whatever they need. Now, this last part does not count against Microsoft; it is merely the lesser of two high cost evils, it is reality.

Even as Microsoft is showing that it is upping the game on gaming and consoles, it is also upping the optional damage and hardship to gamers. I say optional because, first, we have no idea what that red box will be doing, we have no idea what the settings are for near future gaming (in 16 months), and we do not know how certain changes will actually impact the gaming sphere, but Sony has shown us that the dangers are real.

In the end, we see that Microsoft is upping the game when it comes to gaming, there is no denying it, yet how the future will pan out and whether Microsoft has truly upped the game for gamers is still to be determined. That is not a negative thing, because any expectation for the future is merely speculation, yet the dangers to their gamers will increase by a lot, and that part remains the question mark in all this. Some of it could have been prevented, but Microsoft is clearly steering into a setting where adherence to ‘always online‘ is the setting they demand, one way or the other. Even if the prison has golden bars, it remains a prison, and that part needs to be clear. The fact that gamers do not get a choice in the matter is what matters, and not only from the cyber threat side. Congestion is a growing concern on a global scale. Even as Bill Morrow, Chief Executive of NBN Co, was idiotic enough to initially blame gamers for the congestion, the truth is that against 4K Netflix and YouTube, gamers are not even a blip on that radar; yet congestion is a present and growing issue, so there is a problem there too. The system is already under pressure, and with globally 200 million gamers, when a large slice of that pie moves to streaming and virtual-only copies of games, congestion will rear its ugly head and those gamers become more than a mere blip. Consider that Bethesda shipped 12 million units to retailers within the first 24 hours of Fallout 4, and consider that a large chunk of those people will immediately download the game on launch day of Fallout 76. So optionally up to 12 million people will all be downloading a game that is also stated to be 4K, meaning we are looking at around a 100 GB download; that is merely one game title, and it will arrive at a time when there is plenty else to download (a rough sketch of that load follows below). Even now, as we accept that most copies are physical, the truth is that gaming in that way will add to the congestion in a really big way. Most providers are not ready and it will impact the gamers; Netflix and Stan users (the list goes on for a long time) are merely part of all this traffic. I named Bethesda, and they are merely one of many players in all this: Microsoft, Ubisoft, Bethesda and Electronic Arts are all pushing (or getting pushed) towards the virtual-release-only side of things down the track.
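The rough sketch of that launch-day load, assuming (purely for illustration) 12 million downloads of roughly 100 GB each, all completed within the first 24 hours:

```python
# Back-of-envelope for launch-day download congestion.
# Assumptions (illustrative, not measured): 12 million downloads of ~100 GB each,
# all completed within the first 24 hours of release.
downloads = 12_000_000
game_size_gb = 100
hours = 24

total_gb = downloads * game_size_gb                 # total data moved
total_pb = total_gb / 1_000_000                     # in petabytes
aggregate_gbps = total_gb * 8 / (hours * 3600)      # average aggregate throughput in gigabits/s

print(f"Total data: {total_gb:,} GB (~{total_pb:,.0f} PB)")
print(f"Average aggregate throughput: {aggregate_gbps:,.0f} Gbit/s sustained for {hours} hours")
```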

Why does this matter now?

One of the big events, QuakeCon 2018, starts tomorrow, and that will also be the place where more specific information will be given by the actual makers on more than one title. It will be important for how games are moving forward. It is not merely Fallout 76 (one of the most anticipated titles) that is in the upper limits of gaming on PC, Xbox One and PlayStation; the event could also give the direction of where they are going with The Elder Scrolls VI. Those are merely two Bethesda games that literally have millions of followers, so there is an essential need to take notice of Bethesda for several reasons. This reverts back to Microsoft, because Bethesda games have a huge following on all platforms. It also means that in that setting (set against the rumour that Fallout 76 is online multiplayer only, yet you can play the game alone) any congestion will topple game joy completely. We know that there is enough experience with Elder Scrolls Online, so it is not as if Bethesda is going in blind on any of this, but the gaming dimension is changing at the same time, so that change is having an impact in more than one way. That is the push that Microsoft is going for, which is all fine, yet at that point we will be faced with more outside interference factors, and congestion is a real factor, one that players will be confronted with to a much larger degree in the near future.

If Microsoft gets that all right, then it will be picking up momentum in a scary way, and at that point the question will be: can Sony match this? I personally love that part; a setting where Sony and Microsoft push each other to new heights is great, because in all this the gamer ALWAYS wins! Over time this push is a realistic one, yet in some places we will optionally see a time where the providers cannot match what the consumers need, and that is a new setting for many gamers. In the past we merely accepted what was available; in the new setting you get to play based on what you pay for, and that is something we have not been confronted with before. Anyone thinking that this will not happen: think again! It might be the selling point for people to switch providers, but there will be a clear setting of borders, borders that set what you can do, and that is where we see the overall cost go up, yet to what extent is a clear unknown for now.

 


Filed under Uncategorized

Ghost in the Deus Ex Machina

James Bridle is treating the readers of the Guardian to a spotlight event. It is a fantastic article that you must read (at https://www.theguardian.com/books/2018/jun/15/rise-of-the-machines-has-technology-evolved-beyond-our-control-?). Even as it starts with “Technology is starting to behave in intelligent and unpredictable ways that even its creators don’t understand. As machines increasingly shape global events, how can we regain control?”, I am not certain that it is correct; it is merely a very valid point of view. With this setting being pushed even further by places like Microsoft Azure, Google Cloud and AWS, we are moving into new territories and the experts required have not been schooled yet. It is (as I personally see it) the consequence of next generation programming on the framework of cloud systems that have thousands of additional unused or un-monitored parameters (read: some of them mere properties), and the scope of these systems is growing. Each developer is making their own app-box and they are working together, yet in many cases hundreds of properties are ignored, giving us weird results. There is actually (from the description James Bridle gives) an early 90s example, which is not the same, but it illustrates the event.

A program had window settings and sometimes there would be a ghost window. There was no explanation and no one could figure out why it happened, because it did not always happen, yet it could be replicated. In the end, the programmer had been lazy and had created a global variable with the identical name as a visibility property, and due to a glitch that setting got copied. When the system did a reset on the window, all but a few very specific properties were reset. You see, those elements should have been either ‘true’ or ‘false’, and that was not the case; those elements had the initial value of ‘null’, yet the reset would not allow for that, so once given a reset they would not return to the ‘null’ setting but remained holding the value they last had. It was fixed at some point, but the logic remains: a value could not return to ‘null’ unless specifically programmed to. Over time these systems got to be more intelligent and that issue has not returned; such is the evolution of systems. Now it becomes a larger issue: now we have systems that are better, larger and in some cases isolated. Yet, is that always the case? What happens when an error level surpasses two systems? Is that even possible? Now, most people will state that I do not know what I am talking about. Yet they forget that any system is merely as stupid as the maker allows it to be, so in 2010 Sha Li and Xiaoming Li from the Dept. of Electrical and Computer Engineering at the University of Delaware gave us ‘Soft error propagation in floating-point programs‘, which gives us exactly that. You see, the abstract gives us “Recent studies have tried to address soft errors with error detection and correction techniques such as error correcting codes and redundant execution. However, these techniques come at a cost of additional storage or lower performance. In this paper, we present a different approach to address soft errors. We start from building a quantitative understanding of the error propagation in software and propose a systematic evaluation of the impact of bit flip caused by soft errors on floating-point operations“. We can translate this into ‘an option to deal with shoddy programming‘, which is not entirely wrong, but the essential truth is that hardware makers, OS designers and application makers all have their own error system; each of them has a much larger system than any requires, and some overlap and some do not. The issue is optionally, speculatively, seen in ‘these techniques come at a cost of additional storage or lower performance‘. Now consider the greed driven makers that do not want to sacrifice storage and will not hand over performance; not one way, not the other way, but a system that tolerates either way. Yet this still has a level one setting (Cisco joke) that hardware is ruler, so the settings will remain, and it merely takes one third party developer using some specific uncontrolled error hit, with automated assumption driven slicing and dicing to avoid storage as well as performance, yet once given to the hardware, it will not forget. So now we have some speculative ‘ghost in the machine’, a mere collection of error settings and properties waiting to be interacted with; a minimal sketch of that kind of stale-property bug follows below. Don’t think that this is not in existence; the paper throws light on this in part with: “some soft errors can be tolerated if the error in results is smaller than the intrinsic inaccuracy of floating-point representations or within a predefined range. We focus on analysing error propagation for floating-point arithmetic operations. Our approach is motivated by interval analysis. We model the rounding effect of floating-point numbers, which enable us to simulate and predict the error propagation for single floating-point arithmetic operations for specific soft errors. In other words, we model and simulate the relation between the bit flip rate, which is determined by soft errors in hardware, and the error of floating-point arithmetic operations“. That I can illustrate with my earliest errors in programming (decades ago). With Borland C++ I got my first taste of programming, and working on assumptions I made my first calculation, which in the end gave: 8/4 = 2.0000000000000003; at that point (1991) I had no clue about floating-point issues. I did not realise that this was merely the machine and me not giving it the right setting. So even now that we have all learned that part, we forget that all these new systems have their own quirks, and they have hidden settings that we basically do not comprehend as the systems are too new. This all interacts with an article in the Verge from January (at https://www.theverge.com/2018/1/17/16901126/google-cloud-ai-services-automl); the title ‘Google’s new cloud service lets you train your own AI tools, no coding knowledge required‘ is a bit of a giveaway. Even when we see: “Currently, only a handful of businesses in the world have access to the talent and budgets needed to fully appreciate the advancements of ML and AI. There’s a very limited number of people that can create advanced machine learning models”, it is not merely that part; behind it are the makers of the systems and the apps that allow you to interface, and that is where we see the hidden parts that will not be uncovered for perhaps years or decades. That is not a flaw from Google, or an error in their thinking. It is the mere realisation of ‘a long road ahead if we want to bring AI to everyone‘; in light of the better programmers, the clever people and the mere wildcards who turn 180 degrees in a one way street, it cannot be predicted, and there will always be one that does so, because they figured out a shortcut. Consider a sidestep.
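Before that sidestep, here is the minimal sketch promised above of the stale-property ‘ghost’ bug. The class and property names are invented for illustration only, not taken from any real toolkit; the point is simply a tri-state value that a lazy reset never restores to its ‘null’ state:

```python
class Window:
    """Toy widget illustrating the stale-property reset described above."""

    DEFAULTS = {"title": "", "width": 640, "visible": None}   # visible is tri-state: None/True/False

    def __init__(self):
        for name, value in self.DEFAULTS.items():
            setattr(self, name, value)

    def reset(self):
        # The lazy reset refuses to write back None ('null'), so any property whose
        # intended default is None simply keeps whatever value it last had.
        for name, value in self.DEFAULTS.items():
            if value is not None:
                setattr(self, name, value)

w = Window()
w.visible = True           # a glitch (e.g. a shadowing global) flips the flag
w.title = "scratch"
w.reset()
print(w.title, w.visible)  # title is restored to '', but visible stays True: a 'ghost' window
```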

A small sidestep

When we consider risk-based thinking and development, we tend to think in opposition, because it is not merely the issue of risk, or the given of opportunity; we start with the flaw that we all see differently what constitutes risk. Even as the makers all think the same, the users do not always behave that way. For this I need to go back to the late 80s, when I discovered that certain books in the Port of Rotterdam were cooked. No one had figured it out, but I recognised one part through my Merchant Naval education, the one rule no one looked at in those days; programmers just were not given that element. In a port there is one rule that computers could not comprehend in those days: the concept of ‘idle time’ cannot ever be a linear one. Once I saw that, I knew where to look. So when we get back to risk management issues, we see ‘An opportunity is a possible action that can be taken, we need to decide. So this opportunity requires we decide on taking action and that risk is something that actions enable to become an actual event to occur but is ultimately outside of your direct control‘. Now consider that risk changes with the tide at a seaport, but we forget that in opposition to a king tide there is at times also a neap tide, and a ‘supermoon’ is an event that makes the low tide even lower. So now we see the risk of getting beached for up to 6 hours, because the element was forgotten. The fact that it can happen once every 18 months makes the risk low and it does not impact everyone everywhere, but that setting shows that the dangers (read: risks) of events are intensified once a clever person takes a shortcut. So when NASA gives us “The farthest point in this ellipse is called the apogee. Its closest point is the perigee. During every 27-day orbit around Earth, the Moon reaches both its apogee and perigee. Full moons can occur at any point along the Moon’s elliptical path, but when a full moon occurs at or near the perigee, it looks slightly larger and brighter than a typical full moon. That’s what the term “supermoon” refers to“, we see that the programmer needed a space monkey (or tables), and when we consider the shortcut, he merely needed them once every 18 months; in the life cycle of a program that means he merely had a risk 2-3 times during the lifespan of the application. So tell me, how many programmers would have taken the shortcut (a toy version of it follows below)? Now this is the setting we see in optional machine learning. With that part accepted and the pragmatic ‘let’s keep it simple for now‘, which we all could have accepted, the issue comes when we combine error flags with shortcuts.
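The toy version of that shortcut, with invented numbers (this is not real tide mathematics; the constants and the function are purely illustrative of a rare correction being skipped):

```python
# A clearance check that uses the average low tide and skips the rare perigee
# ('supermoon') correction, because it only matters a few times in the
# application's lifetime. All figures are made up for illustration.
AVERAGE_LOW_TIDE_M = 0.9        # assumed average low-water level
PERIGEE_EXTRA_DROP_M = 0.3      # assumed extra drop during a perigean spring tide

def can_depart(draft_m: float, supermoon: bool = False, take_shortcut: bool = True) -> bool:
    """Return True if the vessel keeps water under the keel at low tide."""
    low_tide = AVERAGE_LOW_TIDE_M
    if supermoon and not take_shortcut:
        low_tide -= PERIGEE_EXTRA_DROP_M     # the correction the shortcut leaves out
    return draft_m < low_tide

# Most of the time the shortcut is invisible...
print(can_depart(draft_m=0.8))                                        # True, and usually correct
# ...until the rare event arrives and the vessel sits on the mud for six hours.
print(can_depart(draft_m=0.8, supermoon=True))                        # True (shortcut), but unsafe
print(can_depart(draft_m=0.8, supermoon=True, take_shortcut=False))   # False, the honest answer
```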

So we get to the Guardian with two parts. The first: “Something deeply weird is occurring within these massively accelerated, opaque markets. On 6 May 2010, the Dow Jones opened lower than the previous day, falling slowly over the next few hours in response to the debt crisis in Greece. But at 2.42pm, the index started to fall rapidly. In less than five minutes, more than 600 points were wiped off the market. At its lowest point, the index was nearly 1,000 points below the previous day’s average“, the second being “In the chaos of those 25 minutes, 2bn shares, worth $56bn, changed hands. Even more worryingly, many orders were executed at what the Securities Exchange Commission called “irrational prices”: as low as a penny, or as high as $100,000. The event became known as the “flash crash”, and it is still being investigated and argued over years later“. In 8 years the algorithms and the systems have advanced and the original settings no longer exist. Yet the entire setting of error flagging and the use of elements and properties is still on the board; even as they evolved and the systems became stronger, new systems interacted with much faster and stronger hardware, changing the calculating events. So when we see “While traders might have played a longer game, the machines, faced with uncertainty, got out as quickly as possible“, there were uncaught elements in a system that was truly clever (read: had more data to work with), and as we are introduced to “Among the various HFT programs, many had hard-coded sell points: prices at which they were programmed to sell their stocks immediately. As prices started to fall, groups of programs were triggered to sell at the same time. As each waypoint was passed, the subsequent price fall triggered another set of algorithms to automatically sell their stocks, producing a feedback effect“, we get the mere realisation that the machine wins every time in a man versus machine contest, but only on the calculations. The initial part I mentioned regarding really low tides was ignored: a person realises that at some point the tide goes back up, no matter what, but the machine never learned that part, because the ‘supermoon cycle’ was avoided due to pragmatism, and we see that in the Guardian article with: ‘Flash crashes are now a recognised feature of augmented markets, but are still poorly understood‘. The reason remains speculative, but what if it is not the software? What if there is merely one set of definitions missing, because the human factor auto-corrects for that through insight and common sense? I can relate to that by setting the ‘insight’ that a supermoon happens perhaps once every 18 months and the common sense that it returns to normal within a day. Now, are we missing out on the opportunity of using a neap tide? It is merely an opportunity if another person fails to act on such a neap tide. Yet in finance it is not merely a neap tide, it is an optional artificial wave that can change the waves when one system triggers another, and in nanoseconds we have no way of predicting it, merely, over time, the option to recognise it at best (speculatively speaking).
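A minimal toy simulation of the feedback effect described in that quote, with invented prices, thresholds and price impact (purely illustrative, not a model of any real market or strategy):

```python
# Groups of programs with hard-coded sell points dump stock as the price falls,
# and each forced sale pushes the price through the next group's threshold.
price = 100.0
sell_points = [99.0, 97.0, 95.0, 92.0, 88.0]     # one hard-coded trigger per group
impact_per_trigger = 2.5                          # assumed price drop caused by each forced sale
triggered = set()

price -= 1.5                                      # the initial, 'ordinary' dip
step = 0
while True:
    fired = [p for p in sell_points if price <= p and p not in triggered]
    if not fired:
        break
    for p in fired:
        triggered.add(p)
        price -= impact_per_trigger               # the sale itself moves the market
        step += 1
        print(f"step {step}: trigger at {p:.1f} fired, price now {price:.2f}")

print(f"final price {price:.2f} after {len(triggered)} of {len(sell_points)} triggers fired")
```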

We see a variation of this in the Go-game part of the article. When we see “AlphaGo played a move that stunned Sedol, placing one of its stones on the far side of the board. “That’s a very strange move,” said one commentator“, you see it opened us up to something else. So when we see “AlphaGo’s engineers developed its software by feeding a neural network millions of moves by expert Go players, and then getting it to play itself millions of times more, developing strategies that outstripped those of human players. But its own representation of those strategies is illegible: we can see the moves it made, but not how it decided to make them“, that is where I personally see the flaw. You see, it did not decide; it merely played every variation possible, including the ones a person would never consider. Because it played millions of games, which at 2 games a day represents roughly 1,370 years of play, the computer ‘learned’ that the human had never countered ‘a weird move’ before; some moves can be corrected for, but that one offers opportunity, whilst at the same time exposing its opponent to additional risks. Now it is merely a simple calculation and the human loses. And as every human player lacks the ability to play for a millennium, the hardware wins, always, after that. The computer never learned desire, or human time constraints; as long as it has energy it never stops.
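A quick sanity check of that figure, assuming one million self-play games (the article only says ‘millions’, so the exact count is an assumption) at the stated two games a day:

```python
# How long would a human need to play one million games at two games a day?
games = 1_000_000          # assumption: the article only says 'millions'
games_per_day = 2
years = games / (games_per_day * 365)
print(f"{years:,.0f} years")   # ~1,370 years
```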

The article is amazing and showed me a few things I only partially knew, and one I never knew. It is an eye opener in many ways, because we are at the dawn of advanced machine learning, and as soon as quantum computing is an actual reality we will get systems with the setting that we see in the Upsilon meson (ϒ). Leon Lederman discovered it in 1977, so now we have a particle that is not merely off or on; it can be null, off, on or both. An essential setting for something that will be close to true AI, a new way for computers to truly surpass their makers and an optional tool to unlock the universe, or perhaps merely a clever way to integrate hardware and software on the same layer?

What I got from the article is the realisation that the entire IT industry is moving faster and faster and most people have no chance to stay up to date with it. Even when we look at publications from merely 2 years ago, those systems have already been surpassed by players like Google, reducing storage to a mere cent per gigabyte, and that is not all: media and entertainment are offered great leaps too. When we consider the partnership between Google and Teradici, we see another path. When we see “By moving graphics workloads away from traditional workstations, many companies are beginning to realize that the cloud provides the security and flexibility that they’re looking for“, we might not see the scope of all this. So the article (at https://connect.teradici.com/blog/evolution-in-the-media-entertainment-industry-is-underway) gives us “Cloud Access Software allows Media and Entertainment companies to securely visualize and interact with media workloads from anywhere“, which might look like the ‘big load’ but it actually is not. This approach gives light to something not seen before. When we consider makers of software like Q Research Software and Tableau Software (business intelligence and analytics), we see an optional shift: under these conditions there is now a setting where a clever analyst with merely a netbook and a decent connection can set up the framework for producing dashboards and result presentations that would allow that analyst to produce the results and presentations for the bulk of all Fortune 500 companies in a mere day, making 62% of that workforce obsolete. In addition we see: “As demonstrated at the event, the benefits of moving to the cloud for Media & Entertainment companies are endless (enhanced security, superior remote user experience, etc.). And with today’s ever-changing landscape, it’s imperative to keep up. Google and Teradici are offering solutions that will not only help companies keep up with the evolution, but to excel and reap the benefits that cloud computing has to offer“. I take it one step further: as the presentation to stakeholders and shareholders is about telling ‘a story’, the ability to do so and adjust the story on the go allows for a lot more. The question is no longer the setting up of such systems; it is no longer reduced to correctly vetting the data used, and the moment that falls away we will get a machine driven presentation of settings the machine need no longer comprehend, and as long as the story is accepted and swallowed, we will not question the data. A mere presented grey scale with the extremes filtered out. In the end we all signed up for this, and the status quo of big business remains stable and unchanging no matter what the economy does in the short run.

Cognitive thinking is handed to the AI through the use of data, merely because we can no longer catch up, and in that we lose the reasoning and comprehension of data at the high levels we should have.

I wonder as a technocrat how many victims we will create in this way.

 


Filed under Finance, IT, Media, Science

Fear mongers cannot learn, will the reader?

The technology section of the Guardian had an interesting article (at http://www.theguardian.com/technology/2016/feb/13/artificial-intelligence-ai-unemployment-jobs-moshe-vardi), ‘Would you bet against sex robots? AI ‘could leave half of world unemployed’‘. Is that so? Is the title a reference to 50% being in prostitution, or is there more to it?

The article starts straight off the bat; without delay it gives the quote: “Machines could put more than half the world’s population out of a job in the next 30 years, according to a computer scientist who said on Saturday that artificial intelligence’s threat to the economy should not be understated“.

I remember a similar discussion from 35 years ago. It was 1981; I was working on a defence mainframe and I got the inside scoop on how computers would replace people, how those machines would put hardworking people out of a job and a future. In the first 5 years that followed I saw the opposite; yes, some work became easier, but that also meant that more work could be done. The decade that followed gave us an entirely new region of technology, a region that would open doors that had never been there in the first place.

This technology is not any different; it will open up different doors.

Now, the people in ‘fear’ of it all are not the most half-baked individuals. They include physicist Stephen Hawking and the tech billionaires Bill Gates and Elon Musk; in addition there is Professor Vardi from Rice University, with his statement “AI could drive global unemployment to 50%, wiping out middle-class jobs and exacerbating inequality“, which I massively disagree with. The words of Elon Musk calling it “our biggest existential threat” and Professor Vardi’s statement that “humanity will face an existential challenge“ are closer to the reality. Yet here too I believe changes will dominate. Consider a few years back, back to the time when I was younger than young (like 900 BC, roughly): in an age of Greek wars and utter ‘nationalism’, the Olympic truce was created. “Ekecheiria” was established in Ancient Greece in the 9th century BC through the signing of a treaty by three kings: Iphitos of Elis, Cleosthenes of Pisa and Lycurgus of Sparta (source: olympic.org). There was a lull, but in 1896 it started again; an event whose origin was to create an option not to be at war and to compete instead. Of all the existential angst we have, robots should not be on the list any time soon.

My reasoning?

As we saw the start of recruitment for Mars, a serious recruitment to start colonising Mars, we must admit that there are issues on Mars, several of which could be diminished with the use of intelligent robots. Or consider the idea that NASA is looking at how to get resources from asteroids; how about an android solution there? The BBC gives us the speculation on ocean living (at http://www.bbc.com/future/story/20131101-living-on-the-ocean), again an element where we do not thrive, but a robot could pave the way. In my own view, with the massive energy issues, how long until someone has the idea to place paddle wheels above a hydrothermal vent in the ocean to capture it as an energy source? Not the kind of work a person can do, but a machine could, and an AI driven one could excel there. Just three places where we could end up with more and not less. Yet Vardi does give an interesting side: if robots replace people to some extent, that value of physicality might be lost. Now ask the bricklayer if he could do something else; would he? There is indeed the danger that physical labour becomes less and less appealing, yet that does not mean it will be gone. It would take at least half a century for things to be completed, whilst in the meantime new evolutions start, new challenges start.

More important, much more important, is the one fact people tend to avoid out of fear. But you, the reader, if you are over 45, consider that in the near future you will be dead! So will 3 out of 7 of your friends. Yes, the population is growing, yet the age groups are shifting; this implies that robots could be a solution for some of the work areas that do not require academic thinking. All of these are opportunities, not threats!

So as we see a new iteration of fear, is this version more valid than the previous one? With that I mean the implementation of the PC. Perhaps having another, less fear mongering, set of eyes would help. The second part people forget is that fear mongering is also a drain on productivity here. Even as we speak, Japan has a lead in this market, as does America. So how about we start getting ahead of the rest; what is wrong with the Commonwealth picking up a robotic skill or two? Because one truth remains: once the other players get too much of a lead, the consequence will be that the followers are not considered for the creational jobs, and that is where the real moolah is. The IT explosion taught us that, and that field grew a multitude of billionaires; the next technological iteration will do no less.

I am not alone in my way of thinking. The writer Nicholas Carr gives us: “human creativity and intuition in the face of complex problems is essentially irreplaceable, and an advantage over computers and their overly accurate reputation“, which is where the new future will head. Not merely to create robots, but the creativity to make them excel in extreme places we cannot comprehend until our boundaries are clearly mapped. So how is this news such an eye opener? Well, when we get back to the beginning we saw “artificial intelligence’s threat to the economy“; as stated, much like the personal computer, it will not be a threat but a solution, an opening into a new arm of the technology sector. Even more importantly, this is not an IT-only field. It will require quality engineers, depending on the application and the scene. This means that we get new challenges; different ones, mind you, but not lesser ones.

In that regard, depending on the implementation, it will require analysts, engineers, programmers and a few others on the list of adepts.

All these options, and we did not even need to get close to the technological design of the new age cybernetic machines for the purpose of erotic exploration (level 1 at http://www.vanityfair.com/culture/2015/04/sexbots-realdoll-sex-toys), which is nowhere near an AI experience; time will tell how real that field becomes. Consider the age of STDs we see nowadays. Mycoplasma genitalium might be the new ‘trend’, as it can be cured with a mere one-week course of antibiotics, so how long until it evolves into something that cannot be cured? Yet we do not even have to go that far; consider all the areas where man (or woman) cannot function, where the risk is too high and the rewards become too low. Here comes the clockwork system (aka the AI robot) and we are back on track.

So I see the robot as a positive wave. For careers, for jobs, for business evolution and for evolving technology. We only need to see the light of creation and we will end up with a lot more options than we bargained for.

 

 


Filed under Finance, IT, Law, Politics