
Fight the Future

Mark Bergen gives us a Bloomberg article. The Sydney Morning Herald took it on (at https://www.smh.com.au/business/companies/inside-huawei-s-secret-hq-china-is-shaping-the-future-20181213-p50m0o.html). Of course the arrest of Meng Wanzhou, chief financial officer of Huawei Technologies, is the introduction here. We then get the staging of: “inside Huawei’s Shenzhen headquarters, a secretive group of engineers toil away heedless to such risks. They are working on what’s next – a raft of artificial intelligence, cloud-computing and chip technology crucial to China’s national priorities and Huawei’s future”, with a much larger emphasis on “China’s government has pushed to create an industry that is less dependent on cutting-edge US semiconductors and software“. The matters are not wrong, yet they are debatable. When I see ‘China’s national priorities‘ and ‘Huawei’s future‘ we must ask ourselves: are they the same? They might be on the same course and trajectory, but they are not the same. In the end Huawei needs to show commercial power and growth, and adhering to China’s national needs is not completely in line with that, merely largely so.

Then we see something that is a lot more debatable, when we get: “That means the business would lap $US100 billion in 2025, the year China’s government has set to reach independence in technological production”, and by my reckoning China could optionally reach that in 2021-2022; these three years are important, more important than you realise. Neom in Saudi Arabia, optionally three projects in London, two in Paris, two in Amsterdam and optionally projects in Singapore, Dubai and Bangkok. Tokyo would be perfect, yet they are fiercely competitive and the Japanese feel nationalistic about Japanese goods and, at times more importantly, are driven towards non-Chinese goods. In the end, Huawei would need to give in too much per inch of market share, not worth it I reckon; yet the options that Huawei has available might also include growing the tourist fields, where they can grow market share through data service options, especially if they can get Google to become part of this (in some places). In the end, the stage is still valid to see Huawei become the biggest 5G player in the field.

Then we get the first part of the main event. With: “It started working on customised chips to handle complex algorithms on hardware before the cloud companies did. Research firm Alliance Bernstein estimates that HiSilicon is on pace for $US7.6 billion in sales this year, more than doubling its size since 2015. “Huawei was way ahead of the curve,” said Richard, the analyst.” we see something that I have tried to make clear to the audience for some time.

June 2018: ‘Telstra, NATO and the USA‘ (at https://lawlordtobe.com/2018/06/20/telstra-nato-and-the-usa/) with: “A failing on more than one level and by the time we are all up to speed, the others (read: Huawei) passed us by because they remained on the ball towards the required goal.“

September 2018: ‘One thousand solutions‘ (at https://lawlordtobe.com/2018/09/26/one-thousand-solutions/) with: “we got shown 6 months ago: “Huawei filed 2,398 patent applications with the European Patent Office in 2017 out of a total of 166,000 for the year“, basically 1.44% of ALL filed European patents were from that one company.“

Merely two of several articles that show the momentum that Huawei has been creating by stepping away from the iterative mobile business model and leaping technologically ahead one model after the other. If you look at the history of the last few years, Huawei went from the P7, via the Mate 10 and Nova 3i, to the Mate 20 Pro. These 4 models in a lifecycle timeline have been instrumental for them, showing the others that there is fierce competition. The P7 was a mere equal to the Samsung Galaxy S4 in its day, yet 43% cheaper for the consumer, and now they are at the Mate 20 Pro, which is 20% cheaper than the Samsung Galaxy Note9 and regarded as better in a few ways. In 4 cycles Huawei moved from optionally a choice to best in the field and still cheaper than most. That is the effect of leaping forward, and they are in a place where they can do the same in the 5G field.

We are confronted with the drive through the statement: “Huawei is throwing everything into its cloud package. It recently debuted a set of AI software tools and in October released a new specialised chip, called the Ascend. “No other chip set has this kind of capability of processing,” Qiu said.” This viewed advantage is still a loaded part, because there is the fact that China is driven towards growing the AI field, where they, for now, have a temporary disadvantage. We might see this as a hindrance, yet that field is only visible in the governmental high-end usage that there is; consumers like you and me will not notice it, apart from those who claim it and create some elaborate ‘presentation’ to make the water look muddy. When your life is about Twitter, LinkedIn and Facebook, you will never notice it. In the high-end usage, where AI is an issue, they are given the cloud advantage that others cannot offer to the degree that is available to non-governmental players (well, that is what it looks like and that is technologically under consideration, yet it does look really nice).

When we look towards the future of Huawei we clearly see the advantages of the Middle East, especially Saudi Arabia, the UAE and optionally Qatar if they play their cards right. Latin America is an option, especially if they start in Argentina, where they could optionally add Uruguay overnight; branching out towards Chile and Paraguay would be next, leaving the growth towards Brazil. Yet in that same strategy, adding Venezuela and Colombia first would enable several paths. The business issue remains, yet being the first has an additional appeal, and if it pisses off the Americans, Venezuela gets on board fast often enough. The issue is more than technological. The US still has to prove to the audience that there is a 5G place for them all, and the infrastructure does not really allow for it at present, merely in the metropolitan areas where the money is, driving inequality in the USA even further.

If visibility is the drive, then Huawei is very much on the right track and they are speeding that digital super highway along nicely. Yet in opposition to all this is the final paragraph in the SMH. When we see: ““As long as they stick to the game plan, they still have a lot of room to grow,” he said. “Unless the US manages to get their allies to stop buying them.”” This is a truth and also a reassurance. You see, the claim ‘Unless the US manages to get their allies to stop buying them‘ gets us to an American standard. It was given to us by the X-Files in the movie with the same name, or perhaps better stated, Chris Carter gave it to us all. At the end he gives us: “He is but one man. One man alone cannot fight the future“. It equally applies to governments; they might try to fight the future, yet in the end any nation is built from the foundation of people, stupid or not, bright or less so, and the larger group can do arithmetic. When we are confronted with a Huawei at $450, or an Apple iPhone at $2350, how many of you are desperately rich enough to waste $1900 more on the same functionality? Even when we add games to the larger three (Facebook, LinkedIn & Twitter), most phones will merely have an optional edge, and at $1900? Would you pay for the small 10% difference that 1-3 games optionally offer? And let’s not forget that you will have to add that difference again in 2 years when you think that you need a new phone. The mere contemplation of optimised playing of free games at $77 a month makes total sense, doesn’t it? So there we see the growth plan of Huawei, offering the top of the mountain at the base price, and those in denial, making these unsubstantiated ‘security risk’ claims, will at some point need to face the issue that Verizon is the most expensive provider in the US. So when I see $110 per month for 24 GB of shared data, whilst I am getting 200GB for $50, I really have to make an effort not to laugh out loud.
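The plan comparison above is pure arithmetic, so it can be sketched in a few lines; the prices are the figures quoted in the text, and the plan names are merely illustrative:

```python
# Comparing mobile data plans by effective cost per gigabyte,
# using the two plans quoted in the text: $110 for 24 GB of
# shared data versus $50 for 200 GB.
def cost_per_gb(monthly_price: float, gigabytes: float) -> float:
    """Return the effective price paid per gigabyte of data."""
    return monthly_price / gigabytes

us_plan = cost_per_gb(110, 24)      # roughly $4.58 per GB
other_plan = cost_per_gb(50, 200)   # $0.25 per GB
print(f"US plan: ${us_plan:.2f}/GB, other plan: ${other_plan:.2f}/GB, "
      f"ratio: {us_plan / other_plan:.1f}x")
```

On those figures the shared-data plan costs over eighteen times as much per gigabyte, which is the "arithmetic" the larger group is said to be able to do.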
That is the 5G world the US faces, and whilst there was an option for competitive players in the US, the Huawei block is making sure that some players will rake in the large cash mountain for much longer. Others there were making fun of my predictions, and now that I am proven to be correct, they are suddenly incommunicado and extremely silent.

As such, when I predicted that the US is now entering a setting where they end up trailing a field that they once led, we will see a lot of growth of Chinese interests. In all this, do you really think that it will stop at a mere 5G walkie-talkie? No, with 5G automation and deeper learning we will see a larger field of dashboarding, information and facilitation to the people, and Huawei will optionally rule that field soon enough, with a few non-Americans nipping at their heels for dominance, because that is the nature of the beast as well. Progress is a game for the hungry and some players (specifically the US) have forgotten what it was like to be hungry. Australian Telstra made similar mistakes and moved their share price from $6.49 to $3.08 in the space of 3 years, a 52% loss of value, and when (not if) Huawei pushes the borders all over the place, those people with a Verizon Protective State of Mind will end up seeing Verizon go into a similar setting, because that is also the consequence of adhering to what I would consider to be a form of nationalistic nepotism. The UK has had its ducks in a row for the longest of times (and that island has less ground to cover, which is a distinct advantage), so there BT has options for now, and over time they might adhere to some of their policies as is required; the US is not in that good a position, and Huawei merely needs to flash a medium purse of cash to show the people in the US that a place like Buenos Aires can offer the masses more, and faster, than those on better incomes in the US get, because the pricing model allows for such a shift.

In this the problem is not a short-term one. Even as US giants are supposed to have the advantage, we also see that the workforce is not properly attended to; the US (and the UK) have a massive, not a large, but a massive disadvantage when it comes to STEM students, a disadvantage that China does not have. The AI field is not something that is solved over the next 3 years, so as those with educations in Science, Technology, Engineering and Mathematics are dwindling to some degree in Commonwealth nations and America, China can move full steam ahead as the next generation is pushed into high-end ambition and careers. As such, the entire AI shortfall against America can be overcome much more easily by places like China and India at present. It is not merely the stage of more graduated students; it is about groups of graduated students agreeing on paths towards breakthrough solutions. No matter how much of a savant one student is, a group is always more likely to see the threat and weakness of a certain path, and that is where the best solution is found faster.

Will we ‘Fight the Future’?

The issue is not the American polarised view; it is the correctly filtered view that Alex Younger gave us initially. It is not incorrect to have a nationalistic protective view, and Alex gave the correct stage on having a national product to use, which is different from the Canadian and Australian path proclaimed. We agree that it is a nationally required state to have something this critical solved in a national way (when possible, that is); in this, the path to have a Huawei 5G stage and then re-engineer what is required is not wrong, yet it optionally carries a certain risk, and when that risk is small enough, it is a solution. The UK is largely absolved as it had BT with the foundations of the paths required, just as Australia has Telstra, yet some countries (like Australia) became too complacent. BT was less complacent and they have knowledge, yet is it advanced enough? We agree that they can get up to speed faster, yet will it be fast enough? I actually do not know; I have no data proving the path in one direction or the other. What is clear is that a race with equal horses provides the best growth against one another, the competitiveness and technological breakthroughs that we have seen for the longest time. That path has largely been made redundant in the US and Australia (I cannot say for certain how that is in Canada).

Even as Huawei is gaining speed, being ahead of it all is still a race by one player; the drive to stay ahead is only visible on the global field, and it is an uncertain path even if they have all the elements in their favour. What is clear is that this advantage will remain for the next 5 years, and unless certain nations make way for budgets growing the STEM pool by well over 200%, their long-term disadvantage remains in place.

The versusians

In this stage we need to look at the pro- and con-Huawei fields. In the pro field, as Huawei sets the stage for global user growth, which they are seemingly doing, they have the upper hand and they will grow from servicing a third of the internet users to close to 50%; that path is set with some certainty and as such their advantage grows. In opposition to that, players like the US and Australia need to step away from the politically empty-headed failure of enabling the one-champion stage of Verizon and Telstra; diversity would give the competitive drive. Now that it is merely Telstra versus Vodafone/TPG, it means that there will be a technological compromise stage where neither of the two surges ahead, giving players like Huawei a much larger advantage to fuel growth.

How wrong am I likely to be?

So far I have been close to the mark months in advance, compared to the big newspapers only giving partial facts long after I saw it coming, so I feel that I remain on the right track here. The question is not merely who has the 5G stage first; it will be who facilitates 5G usage more completely and earlier than the others, because that is where the big number of switchers will be found, and players like TPG and Vodafone have seen the impact of switchers more than once, so they know that they must be better and more complete than the other brand. Huawei knows it too; they saw that part and are still seeing the impact that goes all the way back to the P7, and that is where Apple also sees more losses. We were informed a mere 9 hours ago: “Piper Jaffray cuts its Apple (NASDAQ:AAPL) price target from $250 to $222 saying that recent supplier guidance cuts suggest “global unit uptake has not met expectations.”” Another hit of a loss to face, optionally a mere 11.2%, yet in light of the recent losses they faced, we see what I personally feel was the impact of the ridiculous stage of handing the audience a phone of $2369, optionally 30% more expensive than the next choice, even if the number two is not that much less in its ability. The stage where marketeers decide on what the people need, when they all need something affordable. It personally feels like the iMac Pro move, a $20K solution that less than 0.3% of desktop users would ever need, and most cannot even afford. That is driving the value of Apple down, and Huawei knows that this egocentric stage is one that Apple et al will lose, making Huawei the optional winner in many more places after the first 5G hurdles are faced by all.

Do you still think that Apple is doing great? A company that went from a trillion to 700 billion in less than 10 weeks, which is an opportunity for the iOS doubters to now consider Huawei and Samsung. Even as Huawei will statistically never get them all, they will get a chunk, and the first move is that these users moved away from iOS; as Android users they are more easily captured by user-hungry players like Huawei through its marketing. That is the field that has changed in the first degree, and as people feel comfortable with Huawei, they will then consider getting more Huawei parts (like routers for the internet at home), and that continues as people start moving into the 5G field. You see, we can agree that it is mere marketing (for now), yet Huawei already has its 5G Customer-premises Equipment (as per March 2018). This implies, with: “compatible with 4G and 5G networks, and has proven measured download speeds of up to 2Gbps – 20 times that of 100 Mbps fiber“, that consumers can buy their router now, remain on 4G, and when their local telecom is finally ready, 5G will kick in when the subscription is correct. It is, as far as I can tell, the first time that government telecom procedures are vastly behind the availability to the consumer (an alleged speculation from my side).

Do you think that gamers and Netflix people will not select this option if it is made available? That is what is ahead of the coming options, and that is the future that some are fighting. It is like watching a government on a mule trying to do battle with a windmill; the stage seems that ridiculous, and as we move along we will soon see the stage being ‘represented’ by some, stating dangers that cannot be proven (or where the proof is ignored).

The moment other devices are set towards the 5G stage, that is when more and more people will demand answers from industrial politicians making certain claims, and that is when we see the roller-coaster of clowns and jesters get the full spotlight. This is already happening in Canada (at https://www.citynews1130.com/2018/12/13/huawei-and-5g-experts-clash-on-the-risk-to-canadas-national-security/), where City News (Ottawa) gives us: “I can’t see many circumstances, other than very extreme ones, in which the Chinese government would actually risk Huawei’s standing globally as a company in order to conduct some kind of surveillance campaign“, something I claimed weeks ago, so nice of the Canadian press to catch up here. In addition, when we are given: ““This can be used for a lot of things, for manipulation of businesses to harvesting of intellectual property,” Tobok said. “On a national security level, they can know who is where at any given time. They can use that as leverage to jump into other operations of the government.”, those people knowingly, willingly and intentionally ignore the fact that apps can do that, and some are doing it already. The iPhone in 2011 did this already. We were given: “Privacy fears raised as researchers reveal file on iPhone that stores location coordinates and timestamps of owner’s movements“, so when exactly was the iPhone banned as a national security hazard? Or does that not apply to any Commonwealth nation when it is America doing it? Or perhaps more recently (January 2018), when Wired gave us: “the San Francisco-based Strava announced a huge update to its global heat map of user activity that displays 1 billion activities—including running and cycling routes—undertaken by exercise enthusiasts wearing Fitbits or other wearable fitness trackers. Some Strava users appear to work for certain militaries or various intelligence agencies, given that knowledgeable security experts quickly connected the dots between user activity and the known bases or locations of US military or intelligence operations.” So when Lt. Walksalot was mapping out that secret black site whilst his Fitbit was logging the base location on every morning jog, was the Fitbit banned? Already proven incursions on national security, mind you, yet Huawei, with no shown transgressions, is the bad one. Yes, that all made perfect sense. I will give Wesley Wark, a security and intelligence specialist who teaches at the University of Ottawa, a pass when he gives us: “Still, Canada can’t afford to be shut out of the Five Eyes or play a diminished role in the alliance, and if Britain decides to forbid Huawei from taking part in its 5G networks, Canada could not be the lone member to embrace the company“. OK, that is about governmental policy; not unlike Alex Younger, there is a claim to be made in that case, not for the risk that they are or might be, but the setting that no government should have a foreign risk in place. This is all fine and good, but so far the most transgressions were American ones, and that part is kept between the sheets (like catering to IBM for decades), or the matter is left largely trivialised.

It is pointless to fight the future; you can merely try to sway the direction it optionally faces, and the sad part is that this sway has forever been with those needing to remain in power, or to remain in the false serenity that the status quo brings (or better stated, never brings). True innovation is prevented from taking grasp and giving directional drive and much better speeds, and that too is something to consider, merely because innovation drives IP, the true currency of the future, and when we deny ourselves that currency we merely devalue ourselves as a whole. In this we should agree that denying innovation has never ever resulted in a positive direction; history cannot give us one example where this worked out for the best of all.



Filed under Finance, IT, Media, Military, Politics, Science

Paul Simon song application

I grew up in the 70’s; actually I started growing up a fair while before that, but the 70’s were sweet. It was about music and creativity, so without even knowing it the years flew by; they were quality years. Things were in a good place for nearly everyone and I looked around at all the wonders that were there to behold. In that time we all knew Simon and Garfunkel, and soon thereafter we knew the songs of Paul Simon. The album cover showed him, still a young sprout at that time, dressed in jeans with shirt and hat, an alternative Indiana Jones, who would actually not show for another 6 years, so Paul Simon became a trendsetter too.

In this we take a look at some of the tracks and the impact that their 2018 remastered editions hold.

  1. Still Failing After All These Years

Yes, it is everyone’s favourite piñata of technology. It’s about IBM, which reportedly got ‘the 5 percent revenue growth in its latest quarter came from the 10 percent decline in the value of the US dollar‘, which sounds nice, but is IBM not that growing behemoth tailoring Watson left, right, and south of the border? Well, it seems that this is merely a side play to what the insiders call: “we are all familiar with IBM’s strategy to shift sales from traditional low-margin businesses to what it calls “strategic imperatives”, such as cloud services, AI, security, blockchain and quantum computing. However, this is not a separate division, and IBM does not break out the numbers. It claimed that SI revenues were up by 15 percent, or by 10 percent at constant currency. That isn’t impressive in a booming market” (source: ZDNet). I personally think that the further you are away from ‘isn’t impressive’ the better you look. You see, the part not shown here is the one that Engadget gave us, seen with the title ‘IBM’s Watson reportedly created unsafe cancer treatment plans‘, with the additional quote “the AI is still far from perfect: according to internal documents reviewed by health-oriented news publication Stat, some medical experts working with IBM on its Watson for Oncology system found “multiple examples of unsafe and incorrect treatment recommendations”. In one particular case, a 65-year-old man was prescribed a drug that could lead to “severe or fatal haemorrhage” even though he was already suffering from severe bleeding“. Now, we can understand that a system like that will falter at times. Yet the setting could have been better presented if the people behind Watson had taken on the knowledge of IT experts who have known since the early 80’s that the application of the GIGO law must always be checked for. The GIGO law, or as it is stated, the ‘Garbage In, Garbage Out law‘, has been available to the sceptical mind for well over three decades.
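The GIGO principle mentioned above is easy to make concrete: a system that never validates its inputs against known contraindications will happily emit dangerous output. A minimal sketch, where the drug name and the single rule are hypothetical stand-ins mirroring the haemorrhage case quoted above:

```python
# A toy recommendation check illustrating the GIGO principle:
# if contraindicated input is never validated, the output is garbage.
def recommend(drug: str, patient: dict) -> str:
    # Hypothetical rule mirroring the case in the text: a drug that can
    # cause severe haemorrhage must never be prescribed to a patient
    # who is already bleeding severely.
    if drug == "anticoagulant-X" and patient.get("severe_bleeding"):
        raise ValueError("contraindicated: patient has severe bleeding")
    return f"prescribe {drug}"

patient = {"age": 65, "severe_bleeding": True}
try:
    recommend("anticoagulant-X", patient)
except ValueError as err:
    print("blocked:", err)
```

The point is not the rule itself but its presence: without such checks on what goes in, whatever comes out inherits the garbage.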

This is not me in some anti-AI mindset. I think that AI can do great things, yet to look at cancer treatment recommendations when the medical world still has to figure out plenty about cancer in the first place also implies that there will be plenty of untested situations there (and many more unknown elements); so IBM bit off a lot more than they could chew. Now if they hire Rob Beckett as a spokesperson, then there is at least the chance that the biting part is taken care of; digesting the amounts of data will be up to IBM, some things they will just have to learn for themselves.


  2. My Little Town

Topic skipped, as it has religious elements that would set political correctness off balance.

  3. I Do It for Your Love

It might have been a topic, yet with well over 40% getting divorced, I would be required to give an unfaithful setting towards the forecasting of trends, which is where Watson comes into play again, and that system will make the wrong anticipation, just like the effect chocolate shoes are likely to have on one of the parties in any marital contract. If that were not an issue, we would see a long-term setting of statistical outliers where any AI and the population at large will reject the setting of the song.

  4. 50 Ways to Irradiate Your Lover

There is a topic we can sing about. We have all seen the setting where the lovers left behind had to resort to revenge porn to get their jollies up. In all this we see that tinker, tailor, soldier and spy are all involved; the soldier being sued is a major from Fort Bragg. I knew the people there, in many cases not really the most intelligent bunch to say the least, but that does not excuse it; ignorance is no defence, as any law student might know. So even as Adam Matthew Clark is seemingly involved with an army gynaecologist named Kimberly Rae Barrett, basically he replaced his porn needs with a woman who knows how to squeeze the tomatoes and knows where they are. In the setting it is still part of that well-known 40%, and in this we see that the laws have been updated. Tumblr has updated its settings with the mention that hate speech, the glorifying of violence, and revenge porn are explicitly banned and will be cast out. No one states that this is a bad idea, yet the setting is that 9/11 this year will be the first day that all that is no longer allowed, so how will that go over?

All great songs and the fact that this album jumped into my mind made perfect sense. In a time when we were all set upon the optional wonders that the world had to bring, we are now set into payback, PayPal, revenge and misstated intentional miscommunications.

It is a setting that tends to be devastating to the creative mind. The creative mind is not merely a concept; it is also a book by Margaret Boden. A part of it matters in all this, because we see that the creative mind is more than just a search within; it is also the place where we can surpass ourselves.

Drawing on examples ranging from chaos theory to Coleridge’s theory of imagination, and using the idea that creativity involves the exploration of conceptual spaces in people’s minds, we see a description of these spaces and ways of producing new ones. In that setting it is a perpetual engine, never stopping, feeding itself iteration after iteration until something completely new is found, and that too gets digested by the mind, if its curiosity flags require it to do so. So when we consider that creativity requires a much different handle, we can state the obvious and call Watson to some extent a failure. That is, until the medical setting is given the question on constipation, when Watson MD stops for 60 seconds and states ‘It is not out yet!‘; that will be the first victory for IBM, because when the system can set dimensionality past the clinical application of text, only then will it look in directions the creative mind would have considered to find the equation of nature. At that point it will become the path to a victory, and that is where their spokesperson (Rob Beckett) really goes to town, when his teeth produce the dam to the water inlet of the New Bong Hydroelectric Power Complex in Pakistan. When the IBM software gets to contemplate water shortage and drought, that will be the victory that IBM needs; it seems to consider the wrong flags in the wrong places, and what to do when there is no water is a first step in properly solving the issues.
That was seen when IBM users were confronted with ‘SHUTDOWN -F MAY REBOOT INSTEAD OF HALT‘. So when you restart a power plant when there is no juice to start, it seems that this is not a biggie, it merely melts a few parts. Now consider that the setting is not merely a water plant, but ‘USERS AFFECTED: All IBM Maximo for Nuclear Power users‘, and we are confronted with “NUC7510-SQL ERROR WHEN FILTERING IN ROUNDS TAB (DUTY STATION (NUC)) ON THE NEW READING DUE DATE FIELD“. Now also consider that this is directly linked to: “Maximo for Nuclear Power provides enterprises with best practices for managing all types of nuclear equipment, tracking regulatory requirements, and enhancing operational and work management practices“. Is it still merely an academic exercise for you? You see, the basic error is that too many developers rely on black and white truths: they consider the true and the false setting of a flag, and nine out of ten they forget about the null setting of that same flag, meaning that essential steps were not properly set, a basic error that everyone (no exception) gets to be confronted with. Now also realise that Watson is merely a developed system that is large enough to forget settings because a few thousand flags were wrongfully set (unintentionally, mind you). So when the setting is a stage that is not a cancer treatment, but a nuclear power facility that is AI driven (the wet sexual fantasy of too many IBM board members), then we get a real problem, because it is not the 1,000 test scenarios, it is the one we did not consider, arriving through nature’s spasms, that gets into the wires, and at that point we all go nuts and not merely because of the fallout. So when we are confronted with the settings of mere truths, and we add last year’s news “AREVA NP has joined forces with IBM’s Watson IoT advanced analytics platform. This partnership helps utilities implement big data solutions for the nuclear industry. Utilities can use this integrated data intelligence to predict the when, where and why of component operations and performance, as well as the consequences of component issues“, then consider: with a false treatment one person bites the dust, but what do you think happens when they get it wrong in an operational nuclear power plant? It might have merely three sections, but those sections have a little over 706,329 parts (a really rough estimation) and not all are monitored. Even as I designed a way to melt down an Iranian nuclear power plant from within, without having to go into any control room, I can also tell you that Watson will not be ready for that eventuality. So at that point, when it can be done to any power plant, how dangerous is the setting when those with knowledge see that Watson made critical errors, as given with ‘In one particular case, a 65-year-old man was prescribed a drug that could lead to “severe or fatal haemorrhage” even though he was already suffering from severe bleeding‘, a basic danger not covered by the system? What else might have gone wrong that the doctors did not anticipate? That can happen under any condition with no flaw on the physician’s part in any way. I think that IBM is punching the envelope (not pushing it) to seem more astronomical in their approach. The most basic of marketing flaws, in an age where marketing would never be held accountable. So when you see Chernobyl (CA) USA, and IBM marketing states ‘Not my problem‘, how will you feel (besides irradiated, that is)?
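The true/false/null point is worth making concrete. In most languages a naive boolean test silently treats the "never set" state as one of the two valid states, which is exactly the trap described; a minimal sketch in Python, where the valve/sensor scenario is entirely hypothetical:

```python
# A tri-state flag can be True, False, or None ("never set").
# A naive boolean test collapses None into False, hiding the fact
# that the sensor was never read at all.
def naive_check(valve_open):
    # None falls through to "closed" here, which is dangerously wrong.
    return "open" if valve_open else "closed"

def safe_check(valve_open):
    # Handle the null state explicitly before testing true/false.
    if valve_open is None:
        return "UNKNOWN: sensor never read, do not assume closed"
    return "open" if valve_open else "closed"

print(naive_check(None))   # reports "closed" for a reading that never happened
print(safe_check(None))    # flags the missing reading instead
```

Nine out of ten developers, as the text puts it, write the first version; the flag that was never set looks exactly like a flag that was set to false.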

Yet there is an upside in all this, because the ‘Comic Book Authorities’ tell us that glowing in the dark improves road safety for pedestrians at night.

Sometimes an old song leads to a new song that shows and teaches us that creativity is more than finding new paths; it is the knowledge that adjusting and evolving old paths is equally rewarding in many ways.

Leave a comment

Filed under IT, Media, Military, Politics, Science

Ghost in the Deus Ex Machina

James Bridle is treating the readers of the Guardian to a spotlight event. It is a fantastic article that you must read (at https://www.theguardian.com/books/2018/jun/15/rise-of-the-machines-has-technology-evolved-beyond-our-control-?). Even as it starts with “Technology is starting to behave in intelligent and unpredictable ways that even its creators don’t understand. As machines increasingly shape global events, how can we regain control?” I am not certain that it is correct; it is merely a very valid point of view. This setting is being pushed even further by places like Microsoft Azure, Google Cloud and AWS; we are moving into new territories and the experts required have not been schooled yet. It is (as I personally see it) the consequence of next generation programming on the framework of cloud systems that have thousands of additional unused or un-monitored parameters (read: some of them mere properties), and the scope of these systems is growing. Each developer is making their own app-box and they are working together, yet in many cases hundreds of properties are ignored, giving us weird results. There is actually (from the description James Bridle gives) an early 90’s example, which is not the same, but it illustrates the event.

A program had window settings, and sometimes there would be a ghost window. There was no explanation and no one could figure out why it happened, because it did not always happen, but it could be replicated. In the end, the programmer had been lazy and created a global variable with the identical name as a visibility property, and due to a glitch that setting got copied. When the system did a reset on the window, all but a few specific properties were reset. You see, those elements were not ‘true’; they should have been either ‘true’ or ‘false’, and that was not the case: those elements had the initial value of ‘null’, yet the reset would not allow for that, so once reset they would not return to the ‘null’ setting but keep the value they last had. It was fixed at some point, but the logic remains: a value could not return to ‘null’ unless specifically programmed. Over time these systems got more intelligent and that issue has not returned; such is the evolution of systems. Now it becomes a larger issue: we have systems that are better, larger and in some cases isolated. Yet is that always the issue? What happens when an error level surpasses two systems? Is that even possible? Now, most people will state that I do not know what I am talking about. Yet they forget that any system is merely as stupid as its maker allows it to be, so in 2010 Sha Li and Xiaoming Li from the Dept. of Electrical and Computer Engineering at the University of Delaware gave us ‘Soft error propagation in floating-point programs‘, which gives us exactly that. You see, the abstract gives us “Recent studies have tried to address soft errors with error detection and correction techniques such as error correcting codes and redundant execution. However, these techniques come at a cost of additional storage or lower performance. In this paper, we present a different approach to address soft errors. We start from building a quantitative understanding of the error propagation in software and propose a systematic evaluation of the impact of bit flip caused by soft errors on floating-point operations“. We can translate this into ‘an option to deal with shoddy programming‘, which is not entirely wrong, but the essential truth is that hardware makers, OS designers and application makers all have their own error systems; each of them has a much larger system than any requires, and some overlap and some do not. The issue is optionally, speculatively, seen in ‘these techniques come at a cost of additional storage or lower performance‘. Now consider the greed-driven makers that do not want to sacrifice storage and will not hand over performance, not one way, not the other way, but want a system that tolerates either way. Yet this still has a level one setting (Cisco joke) that hardware is ruler, so the settings will remain, and it merely takes one third-party developer to use some specific uncontrolled error hit with automated, assumption-driven slicing and dicing to avoid storage as well as performance; yet once given to the hardware, it will not forget, so now we have a speculative ‘ghost in the machine’, a mere collection of error settings and properties waiting to be interacted with. Don’t think that this does not exist; the paper sheds light on it in part with: “some soft errors can be tolerated if the error in results is smaller than the intrinsic inaccuracy of floating-point representations or within a predefined range. We focus on analysing error propagation for floating-point arithmetic operations. Our approach is motivated by interval analysis. We model the rounding effect of floating-point numbers, which enable us to simulate and predict the error propagation for single floating-point arithmetic operations for specific soft errors. In other words, we model and simulate the relation between the bit flip rate, which is determined by soft errors in hardware, and the error of floating-point arithmetic operations“. That I can illustrate with my earliest errors in programming (decades ago). With Borland C++ I got my first taste of programming and I was in assumption mode when I made my first calculation, which gave in the end: 8/4=2.0000000000000003. At that point (1991) I had no clue about floating-point issues; I did not realise that this was merely the machine and me not giving it the right setting. So now we have all learned that part, yet we forget that all these new systems have their own quirks and hidden settings that we basically do not comprehend, as the systems are too new. This all interacts with an article in the Verge from January (at https://www.theverge.com/2018/1/17/16901126/google-cloud-ai-services-automl); the title ‘Google’s new cloud service lets you train your own AI tools, no coding knowledge required‘ is a bit of a giveaway. Even when we see: “Currently, only a handful of businesses in the world have access to the talent and budgets needed to fully appreciate the advancements of ML and AI. There’s a very limited number of people that can create advanced machine learning models”, it is not merely that part; behind it are the makers of the systems and the apps that allow you to interface, and that is where we see the hidden parts that will not be uncovered for perhaps years or decades. That is not a flaw from Google, or an error in their thinking. The mere realisation of ‘a long road ahead if we want to bring AI to everyone‘, in light of the better programmers, the clever people and the mere wildcards who turn 180 degrees in a one-way street, cannot be predicted, and there will always be one who does so, because they figured out a shortcut. Consider a sidestep.
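To make the paper’s bit-flip idea tangible, here is a small illustration (my own sketch, not the authors’ code) that flips a single bit in the IEEE 754 representation of a number and watches the error propagate through one arithmetic operation:

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Flip one bit in the 64-bit IEEE 754 encoding of x (bit 0 = least significant)."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return y

a, b = 8.0, 4.0
exact = a / b                 # 2.0, no rounding involved here
# A low-order mantissa flip: the error is far below any predefined tolerance.
small = flip_bit(a, 3) / b
# A flip that lands in the exponent field: the result is wildly wrong.
large = flip_bit(a, 55) / b
print(exact, small - exact, large - exact)
```

A low-order flip stays within the ‘tolerable’ range the authors describe; a flip in the exponent bits turns 2.0 into something wildly different, which is exactly the propagation they set out to model.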

A small sidestep

When we consider risk-based thinking and development, we tend to think in opposition, because it is not the issue of risk, or the given of opportunity. We start with the flaw that we see differently on what constitutes risk. Even as the makers all think the same, the users do not always behave that way. For this I need to go back to the late 80’s, when I discovered that certain books in the Port of Rotterdam were cooked. No one had figured it out, but I recognised one part through my Merchant Naval education, the one rule no one looked at in those days; programmers just were not given that element. In a port there is one rule that computers could not comprehend in those days: the concept of ‘idle time’ cannot ever be a linear one. Once I saw that, I knew where to look. So when we get back to risk management issues, we see ‘An opportunity is a possible action that can be taken, we need to decide. So this opportunity requires we decide on taking action and that risk is something that actions enable to become an actual event to occur but is ultimately outside of your direct control‘. Now consider that risk changes with the tide at a seaport, but we forget that in opposition to a king tide, there is at times also a neap tide, and a ‘supermoon’ is an event that makes the low tide even lower. So now we see the risk of getting beached for up to 6 hours, because the element was forgotten. The fact that it can happen once every 18 months makes the risk low and it does not impact everyone everywhere, but that setting shows that the dangers (read: risks) of events are intensified when a clever person takes a shortcut. So when NASA gives us “The farthest point in this ellipse is called the apogee. Its closest point is the perigee. During every 27-day orbit around Earth, the Moon reaches both its apogee and perigee. Full moons can occur at any point along the Moon’s elliptical path, but when a full moon occurs at or near the perigee, it looks slightly larger and brighter than a typical full moon. That’s what the term “supermoon” refers to“, we see that the programmer needed a space monkey (or tables), and with the shortcut he merely needed them once every 18 months; in the life cycle of a program that means he had the risk 2-3 times during the lifespan of the application. So tell me, how many programmers would have taken the shortcut? Now, this is the setting we see in optional machine learning. With that part accepted comes the pragmatic ‘Let’s keep it simple for now‘, which we all could have accepted in this. But the issue comes when we combine error flags with shortcuts.
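The shortcut itself fits in a few lines. This is a hypothetical illustration of my own (the depths and margins are invented), not any real port system:

```python
# Hypothetical berthing check: the programmer's shortcut ignores the rare
# perigean ("supermoon") low tide that occurs roughly once every 18 months.
BASE_LOW_TIDE_M = 1.2      # assumed normal low-water margin under the keel
SUPERMOON_DROP_M = 0.4     # assumed extra drop during a perigean low tide

def clearance_shortcut(draft_m: float) -> bool:
    # Shortcut: the rare event is left out, so the check passes almost always.
    return draft_m < BASE_LOW_TIDE_M

def clearance_full(draft_m: float, supermoon: bool) -> bool:
    low = BASE_LOW_TIDE_M - (SUPERMOON_DROP_M if supermoon else 0.0)
    return draft_m < low

ship_draft = 1.0
print(clearance_shortcut(ship_draft))               # looks safe all year round
print(clearance_full(ship_draft, supermoon=True))   # beached on the rare low tide
```

The shortcut version is right nearly every day of the program’s life, which is precisely why it gets written and precisely why the one forgotten element beaches the ship.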

So we get to the Guardian with two parts. The first: “Something deeply weird is occurring within these massively accelerated, opaque markets. On 6 May 2010, the Dow Jones opened lower than the previous day, falling slowly over the next few hours in response to the debt crisis in Greece. But at 2.42pm, the index started to fall rapidly. In less than five minutes, more than 600 points were wiped off the market. At its lowest point, the index was nearly 1,000 points below the previous day’s average“; the second being “In the chaos of those 25 minutes, 2bn shares, worth $56bn, changed hands. Even more worryingly, many orders were executed at what the Securities Exchange Commission called “irrational prices”: as low as a penny, or as high as $100,000. The event became known as the “flash crash”, and it is still being investigated and argued over years later“. In 8 years the algorithms and the systems have advanced and the original settings no longer exist. Yet the entire setting of error flagging and the use of elements and properties are still on the board; even as they evolved and the systems became stronger, new systems interacted with much faster and stronger hardware, changing the calculating events. So when we see “While traders might have played a longer game, the machines, faced with uncertainty, got out as quickly as possible“, there were uncaught elements in a system that was truly clever (read: had more data to work with), and as we are introduced to “Among the various HFT programs, many had hard-coded sell points: prices at which they were programmed to sell their stocks immediately. As prices started to fall, groups of programs were triggered to sell at the same time. As each waypoint was passed, the subsequent price fall triggered another set of algorithms to automatically sell their stocks, producing a feedback effect“, we get the realisation that the machine wins every time in a man versus machine setting, but only where the calculations are concerned. The initial part I mentioned regarding really low tides was ignored: a person realises that at some point the tide goes back up, no matter what, but the machine never learned that part, because the ‘supermoon cycle’ was avoided due to pragmatism, and we see that in the Guardian article with: ‘Flash crashes are now a recognised feature of augmented markets, but are still poorly understood‘. That reason remains speculative, but what if it is not the software? What if there is merely one set of definitions missing, because the human factor auto-corrects for that through insight and common sense? I can relate to that by setting the ‘insight’ that a supermoon happens perhaps once every 18 months and the common sense that it returns to normal within a day. Now, are we missing out on the opportunity of using a neap tide as an opportunity? It is merely an opportunity if another person fails to act on such a neap tide. Yet in finance it is not merely a neap tide, it is an optional artificial wave that can change the waters when one system triggers another, and in nanoseconds we have no way of predicting it, merely over time the option to recognise it at best (speculatively speaking).
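That feedback effect is simple enough to sketch. The numbers below are invented and this is a toy model of my own, not any actual HFT program:

```python
# Toy cascade: five programs, each with a hard-coded sell point.
# One modest initial dip triggers the first sale, whose price impact
# breaches the next waypoint, and so on down the chain.
sell_points = sorted([98.0, 96.5, 95.0, 93.0, 90.0], reverse=True)
price = 97.5            # a modest dip (the "Greece" move) breaches the first point
price_impact = 2.0      # assumed drop caused by each program dumping its stock

triggered = []
for point in sell_points:
    if price <= point:            # this program's hard-coded threshold is hit...
        triggered.append(point)
        price -= price_impact     # ...and its selling pushes the price lower still
print(len(triggered), price)
```

One modest dip breaches the first hard-coded sell point, and every subsequent sale pushes the price through the next waypoint; all five programs fire without any of them ‘deciding’ anything, which is the feedback effect the Guardian describes.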

We see a variation of this in the Go-game part of the article. When we see “AlphaGo played a move that stunned Sedol, placing one of its stones on the far side of the board. “That’s a very strange move,” said one commentator“, you see it opened us up to something else. So when we see “AlphaGo’s engineers developed its software by feeding a neural network millions of moves by expert Go players, and then getting it to play itself millions of times more, developing strategies that outstripped those of human players. But its own representation of those strategies is illegible: we can see the moves it made, but not how it decided to make them“, that is where I personally see the flaw. You see, it did not decide; it merely played every variation possible, the ones a person will never consider, because it played millions of games, which at 2 games a day represents some 1,370 years of human play. The computer ‘learned’ that the human never countered ‘a weird move’ before; some can be corrected for, but that one offers opportunity, whilst at the same time exposing its opponent to additional risks. Now it is merely a simple calculation and the human loses. And as every human player lacks the ability to play for a millennium, the hardware wins, always, after that. The computer never learned desire or human time constraints; as long as it has energy it never stops.
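The arithmetic behind that millennium claim is easy to verify:

```python
games = 1_000_000      # "millions of games", taken at the low end
games_per_day = 2      # a generous, sustained pace for a human player
years = games / (games_per_day * 365)
print(round(years))    # roughly 1,370 years of non-stop human play
```

Even at the lowest reading of ‘millions’, no human career comes within an order of magnitude of the machine’s experience.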

The article is amazing and showed me a few things I only partially knew, and one I never knew. It is an eye opener in many ways, because we are at the dawn of advanced machine learning, and as soon as quantum computing is an actual reality we will get systems with a setting like the one we see in the Upsilon meson (ϒ), which Leon Lederman discovered in 1977. So now we have a particle that is not merely off or on; it can be null, off, on or both. An essential setting for something that will be close to true AI, a new way for computers to truly surpass their makers and an optional tool to unlock the universe, or perhaps merely a clever way to integrate hardware and software on the same layer?

What I got from the article is the realisation that the entire IT industry is moving faster and faster and most people have no chance of staying up to date with it, even when we look at publications from two years ago. Those systems have already been surpassed by players like Google, reducing storage to a mere cent per gigabyte, and that is not all: the media and entertainment industries are offered great leaps too. When we consider the partnership between Google and Teradici, we see another path. When we see “By moving graphics workloads away from traditional workstations, many companies are beginning to realize that the cloud provides the security and flexibility that they’re looking for“, we might not see the scope of all this. So the article (at https://connect.teradici.com/blog/evolution-in-the-media-entertainment-industry-is-underway) gives us “Cloud Access Software allows Media and Entertainment companies to securely visualize and interact with media workloads from anywhere“, which might seem the ‘big load’, but it actually is not. This approach gives light to something not seen before. When we consider makers of software like Q Research Software and Tableau Software (business intelligence and analytics), we see an optional shift: under these conditions there is now a setting where a clever analyst with merely a netbook and a decent connection can set up the framework for producing dashboards and result presentations, allowing that analyst to produce the results and presentations for the bulk of all Fortune 500 companies in a mere day, making 62% of that workforce obsolete. In addition we see: “As demonstrated at the event, the benefits of moving to the cloud for Media & Entertainment companies are endless (enhanced security, superior remote user experience, etc.). And with today’s ever-changing landscape, it’s imperative to keep up. Google and Teradici are offering solutions that will not only help companies keep up with the evolution, but to excel and reap the benefits that cloud computing has to offer“. I take it one step further: as the presentation to stakeholders and shareholders is about telling ‘a story’, the ability to do so and adjust the story on the go allows for a lot more. The question is no longer the setting up of such systems, nor is it reduced to correctly vetting the data used; the moment that falls away we will get a machine-driven presentation of settings the machine need no longer comprehend, and as long as the story is accepted and swallowed, we will not question the data. A mere presented grey scale with the extremes filtered out. In the end we all signed up for this, and the status quo of big business remains stable and unchanging no matter what the economy does in the short run.

We get cognitive thinking from the AI through the use of data, merely because we can no longer catch up, and in that we lose the reasoning and comprehension of data at the high levels we should have.

I wonder as a technocrat how many victims we will create in this way.

 


Filed under Finance, IT, Media, Science

Waking up 5 years late

I have had something like this, I swear it’s true. It was after I came back from the Middle East; I was more of a ‘party person’ in those days and I would party all weekend non-stop. It would start on Friday evening and I would get home Sunday afternoon. So one weekend, after I had gone through the nightclub, day club, bars and Shoarma pit stops, I went home. I went to bed and was woken up by the telephone. It was my boss, asking me whether I would be coming to work that day. I noticed it was 09:30; I had overslept. I apologised and rushed to the office. I told him I was sorry that I had overslept and I did not expect too much noise, as it was the first time that I had overslept. So the follow-up question became “and where were you yesterday?” My puzzled look told him something was wrong. It was Tuesday! I had actually slept from Sunday afternoon until Tuesday morning. It would be the weirdest week in a lifetime. I had lost an entire day and I had no idea how. I still think back to that moment every now and then; the sensation of the perception of a week being different, I never got over it, now 31 years ago, and it still gets to me every now and then.

A similar sensation is optionally hitting Christine Lagarde, I reckon, although if she is still hitting the party scene, my initial response will be “You go girl!”

You see, with “Market power wielded by US tech giants concerns IMF chief” (at https://www.theguardian.com/business/2018/apr/19/market-power-wielded-by-us-tech-giants-concerns-imf-chief-christine-lagarde) we see the issues on a very different level. So even as we all accept “Christine Lagarde, has expressed concern about the market power wielded by the US technology giants and called for more competition to protect economies and individuals”, we see not the message, but the exclusion. So as we consider “Pressure has been building in the US for antitrust laws to be used to break up some of the biggest companies, with Google, Facebook and Amazon all targeted by critics“, I see a very different landscape. You see, with Microsoft, IBM and Apple missing from that group, it is my personal consideration that this is about something else. Microsoft, IBM and Apple have one thing in common: they are patent powerhouses, and no one messes with those. This is about power consolidation, and the fact that Christine Lagarde is speaking out in such a way is an absolutely hypocritical stance for the IMF to take.

You see, to get that you need to be aware of two elements. The first is the American economy. Now, in my personal (highly opposed) vision, the US has been bankrupt for some time, just as it was during the entire Moody’s debacle in 2008. People might have seen it in ‘the Big Short‘, a movie that showed part of it, and whilst the Guardian reported ““Moody’s failed to adhere to its own credit-rating standards and fell short on its pledge of transparency in the run-up to the ‘great recession’,” principal deputy associate attorney general Bill Baer said in the statement“, it is merely one version of betrayal to the people of the US, giving protection to special people in excess of billions whilst they merely had to pay a $864m penalty. I am certain that those billionaires split that penalty amongst them. So, as I stated, the US should be seen as bankrupt. It is not the only part in this. The Sydney Morning Herald (at https://www.smh.com.au/business/the-economy/how-trump-s-hair-raising-level-of-debt-could-bring-us-all-crashing-down-20180420-p4zank.html) gives us “Twin reports by the International Monetary Fund sketch a chain reaction of dangerous consequences for world finance. The policy – if you can call it that – puts the US on an untenable debt trajectory. It smacks of Latin American caudillo populism, a Peronist contagion that threatens to destroy the moral foundations of the Great Republic. The IMF’s Fiscal Monitor estimates that the US budget deficit will spike to 5.3 per cent of GDP this year and 5.9 per cent in 2019. This is happening at a stage of the economic cycle when swelling tax revenues should be reducing net borrowing to zero“. I am actually decently certain that this will happen. Now we need to look back to my earlier statement.

You see, if the US borrowing power is nullified, the US is left without any options, unless (you saw that coming, didn’t you?) the underwriting power of debt becomes patent power. Patents have been set to IP support. I attended a few of those events (being a Master of Intellectual Property Law) and even as my heart is in trademarks, I do have a fine appreciation of patents. In this, the econometrics of the world will see national values and the value of any GDP supported by the economic value of patents.

In this, in 2016 we got “Innovation and creative endeavors are indispensable elements that drive economic growth and sustain the competitive edge of the U.S. economy. The last century recorded unprecedented improvements in the health, economic well-being, and overall quality of life for the entire U.S. population. As the world leader in innovation, U.S. companies have relied on intellectual property (IP) as one of the leading tools with which such advances were promoted and realized. Patents, trademarks, and copyrights are the principal means for establishing ownership rights to the creations, inventions, and brands that can be used to generate tangible economic benefits to their owner“, and as such the cookie has crumbled into where the value is set (see attached). One of the key findings is “IP-intensive industries continue to be a major, integral and growing part of the U.S. economy“, and as such we see the tech giants that I mentioned as missing and not being mentioned by Christine Lagarde. It is merely one setting and there are optionally a lot more, but in light of certain elements I believe that patents are a driving force and those three have a bundle; Apple has so many that it can use those patents to buy several European nations. With IBM and its (what I personally believe to be) overvalued Watson, we have seen the entire mess moving forward, presenting itself and pushing ‘boundaries’ as we are set into a stage of ‘look what’s coming’! It is all about research, MIT and Think 2018. It is almost like Think 2018 is about the point of concept, the moment of awareness and the professional use of AI. In that, IBM in its own blog accidentally gave away the goods as I see it with: “As we get closer to Think, we’re looking forward to unveiling more sessions, speakers and demos“. I think they are close, they are getting to certain levels, but they are not there yet. In my personal view they need to keep the momentum going, even if they need to throw in three more highly exposed events, free plane tickets and all kinds of swag to flim-flam the audience. I think that they are prepping for events that will not be complete in an alpha stage until 2020. Yet that momentum is growing, and it needs to remain growing. Two quotes give us that essential ‘need’.

  1. The US Army signed a 33-month, $135 million contract with IBM for cloud services including Watson IoT, predictive analytics and AI for better visibility into equipment readiness.
  2. In 2017, IBM inventors received more than 1,900 patents for new cloud technologies to help solve critical business challenges.

The second is the money shot. An early estimate is outside the realm of most; you see, IP Watchdog gave us: “IBM Inventors received a record 9043 US patents in 2017, patenting in such areas as AI, Cloud, Blockchain, Cybersecurity and Quantum Computing technology“, and the low estimate is a value of $11.8 trillion. That is what IBM is sitting on. That is the power of just ONE tech giant, so how come Christine Lagarde missed out on mentioning IBM? I’ll let you decide, or perhaps it was Larry Elliott from the Guardian who missed out? I doubt it, because Larry Elliott is many things, stupid ain’t one. I might not agree with him, or at times with his point of view, but he is the clever one and his views are valid ones.

So in all this we see that there is a push, but is it the one the IMF is giving, or is there another play? The fact that banks have a much larger influence on what happens is not mentioned, yet that is not the play and I accept that; it is not what is at stake. There is a push on many levels and even as we agree that some tech giants have a larger piece of the cake (Facebook, Google and Amazon), a lot could have been prevented by proper corporate taxation, but most of the EU and the American Donald Duck (or was that Trump?) are all about not walking that road. The fact that Christine Lagarde has failed (one amongst many) to introduce proper tax accountability for tech giants is a much larger issue, and it is not all on her plate in all honesty. So there are a few issues with all this, and the supporting views are not given with “Lagarde expressed concern at the growing threat of a trade war between the US and China, saying that protectionism posed a threat to the upswing in the global economy and to an international system that had served countries well“; it is seen in several fields. One field was given by The Hill, in an opinion piece. The information is accurate; it is merely important to see that it reflects the views of the writer (just like any blog).

So with “Last December, the United States and 76 other WTO members agreed at the Buenos Aires WTO Ministerial to start exploring WTO negotiations on trade-related aspects of e-commerce. Those WTO members are now beginning their work by identifying the objectives of such an agreement. The U.S. paper is an important contribution because it comprehensively addresses the digital trade barriers faced by many companies“, which now underlines “A recent United States paper submitted to the World Trade Organization (WTO) is a notable step toward establishing rules to remove digital trade barriers. The paper is significant for identifying the objectives of an international agreement on digital trade“. This now directly gives rise to “the American Bar Association Section of Intellectual Property Law also requested that the new NAFTA require increased protections in trade secrets, trademarks, copyrights, and patents“, which we get from ‘Ambassador Lighthizer Urged to Include Intellectual Property Protections in New NAFTA‘ (at https://www.jdsupra.com/legalnews/ambassador-lighthizer-urged-to-include-52674/) less than 10 hours ago. So when we link that to the quote “The proposals included: that Canada and Mexico establish criminal penalties for trade secrets violations similar to those in the U.S. Economic Espionage Act, an agreement that Mexico eliminate its requirement that trademarks be visible, a prohibition on the lowering of minimum standards of patent protection“. So when we now look back towards the statement of Christine Lagarde and her exclusion of IBM, Microsoft and Apple, how is she not directly being a protectionist of some tech giants?

I think that the IMF is also testing the waters for what happens when the US economy takes a dip, because at the current debt levels that impact is a hell of a lot more intense, and games like the Moody’s one have been played and cannot be played again. Getting caught on that level means that the US would have to be removed from several world economic executive decisions, not a place anyone on Wall Street is willing to accept; at that point Pandora’s Box gets opened and no one will be able to close it again. So after waking up 5 years late we see that the plays have been, again and again, about keeping the status quo, and as such digital rights are the one card left to play, which gives the three tech giants an amount of power they have never had before. So as everyone’s favourite slapping donkey (Facebook) is mentioned next to a few others, it is those not mentioned that will be having the cake and quality venison that we all desire. In this we are in a dangerous place, even more so the small developers who come up with the interesting IPs they envisioned. As their value becomes overstated from day one, they will be pushed to sell their IP way too early; more importantly, that point comes before their value comes to fruition, and as such those tech giants (Apple, IBM, and Microsoft) will get an even more overbearing value. Let’s be clear, they are not alone; the larger players like Samsung, Canon, Qualcomm, LG Electronics, Sony and Fujitsu are also on that list. The list of top players has around 300 members, including 6 universities (all American). So that part of the entire economy is massively in American hands and we see no clear second place, not for a long time. Even as the singled-out tech giants are on that list, it is the value that they have that sets them a little more apart. Perhaps having a go at three of them, whilst one is already under heavy emotional scrutiny, is a small price to pay.

How nice for them to wake up, I merely lost one day once, they have been playing the sleeping game for years and we will get that invoice at the expense of the futures we were not allowed to have, if you wonder how weird that statement is, then take a look at the current retirees, the devaluation they face, the amount they are still about to lose and wonder what you will be left with when you consider that the social jar will be empty long before you retire. The one part we hoped to have at the very least is the one we will never have because governments decided that budgeting was just too hard a task, so they preferred to squander it all away. The gap of those who have and those who have not will become a lot wider over the next 5 years, so those who retire before 2028 will see hardships they never bargained for. So how exactly are you served with addressing “‘too much concentration in hands of the few’ does not help economy“, they aren’t and you weren’t. It is merely the setting for what comes next, because in all this it was never about that. It is the first fear of America that counts. With ‘US ponders how it can stem China’s technology march‘ (at http://www.afr.com/news/world/us-ponders-how-it-can-stem-chinas-technology-march-20180418-h0yyaw), we start seeing that shift, so as we see “The New York Times reported on April 7 that “at the heart” of the trade dispute is a contest over which country plays “a leading role in high-tech industries”. The Wall Street Journal reported on April 12 that the US was preparing rules to block Chinese technology investment in the US, while continuing to negotiate over trade penalties“, we see the shifted theatre of trade war. It will be about the national economic value with the weight of patents smack in the middle. In that regard, the more you depreciate other parts, the more important the value of patents becomes. 
It is not a simple or easy picture, but we will see loads of econometrists giving their views on all this within the next 2-3 weeks.

Have a great weekend and please do not bother to wake up, it seems that Christine Lagarde didn’t bother waking up for years.

 


Filed under Finance, IT, Law, Media, Politics, Science

The sting of history

There was an interesting article on the BBC (at http://www.bbc.com/news/business-43656378) a few days ago. I missed it initially, as I tend not to dig too deep into the BBC past the breaking news points at times. Yet there it was, staring at me, and I thought it was rather funny. You see, ‘Google should not be in business of war, say employees‘, which is fair enough. Apart from the issue of them not being too great at waging war and roughing it out, it makes perfect sense to stay away from war. Yet is that possible? You see, the quote is funny when you see ‘No military projects‘, whilst we are all aware that the internet itself is an invention of DARPA, who came up with it as a solution that addressed “A network of such [computers], connected to one another by wide-band communication lines [which provided] the functions of present-day libraries together with anticipated advances in information storage and retrieval and [other] symbiotic functions“, which led to ARPANET and became the Internet. So now that the cat is out of the bag, we can continue. The objection they give is fair enough. When you are an engineer who is destined to create a world where everyone communicates with one another, the last thing you want to see is “Project Maven involves using artificial intelligence to improve the precision of military drone strikes“. I am not sure if Google could achieve it, but the goal is clear and so is the objection.
The BBC article shows merely one side; when we go to the source itself (at https://www.defense.gov/News/Article/Article/1254719/project-maven-to-deploy-computer-algorithms-to-war-zone-by-years-end/), I saw the words of Marine Corps Colonel Drew Cukor: “Cukor described an algorithm as about 75 lines of Python code “placed inside a larger software-hardware container.” He said the immediate focus is 38 classes of objects that represent the kinds of things the department needs to detect, especially in the fight against the Islamic State of Iraq and Syria“. You see, I think he has been talking to the wrong people. Perhaps you remember the SETI@Home screensaver project. “In May 1999 the University of California launched SETI@Home. SETI stands for the ‘Search for Extraterrestrial Intelligence’. Originally it was thought that it could at best recruit only a thousand or so participants; more than a million people actually signed up on the day and in the process overwhelmed the meagre desktop PC that was set aside for this project“. I remember it because I was one of them. It is in that trend that “SETI@Home was built around the idea that people with personal computers who often leave them to do something else and then just let the screensaver run are actually wasting good computing resources. This was a good thing, as these ‘idle’ moments can actually be used to process the large amount of data that SETI collects from the galaxy” (source: Manila Times), and they were right. The design was brilliant and simple and it worked better than even the SETI people thought it would, but here we now see the application, where any Android (OK, iOS too) device created after 2016 is pretty much a supercomputer at rest. You see, Drew Cukor is trying to look where he needs to look; it is a ‘flaw’ he has, as does the bulk of the military. You see, when you look for a target that is 1 in 10,000, he needs to hit the 0.01% mark.
This is his choice and that is what he needs to do; I am merely stating that by figuring out where NOT to look, I am upping his chances. If I can set the premise of eliminating 7,500 false potentials in a few seconds, his job went from a 0.01% chance to 0.04%, making his work four times easier and optionally faster. Perhaps the change could eliminate 8,500 or even 9,000 flags. Now we are talking the chances and the time frame we need. You see, it is the memo of Bob Work that does remain an issue. I disagree with “As numerous studies have made clear, the department of defense must integrate artificial intelligence and machine learning more effectively across operations to maintain advantages over increasingly capable adversaries and competitors,“. The clear distinction is that those people tend not to rely on a smartphone; they rely on a simple Nokia 2100 burner phone, and as such there will be a complete absence of data. Or will there be? As I see it, to tackle that you need to be able to engage in what might be regarded as a ‘Snippet War‘, a war based on (a lot of) ‘small pieces of data or brief extracts‘. It is in one part cell tower connection patterns, in one part tracking IMEI (International Mobile Equipment Identity) codes, and in part SIM switching. It is a jumble of patterns and normally getting anything done would be insane. Now what happens when we connect 100 supercomputers to one cell tower and mine all available tags? What happens when we can disseminate these packages and let all those supercomputers do the job? Merely 100 smartphones, or even 1,000 smartphones, per cell tower. At that point the war changes, because now we have an optional setting where on-the-spot data is offered in real time. Some might call it ‘the wet dream’ of Marine Corps Col. Drew Cukor, and he was not even aware that he was allowed to adult-dream to that degree on the job, was he?
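The arithmetic behind ‘figuring out where NOT to look’ can be sketched in a few lines. The function below is purely illustrative, using the 1-in-10,000 figures from the paragraph above:

```python
# Illustrative arithmetic only: eliminating false candidates raises the
# hit probability on whatever remains. Numbers mirror the example of one
# real target hiding among 10,000 candidates.

def hit_chance(candidates, eliminated):
    """Probability that a random remaining candidate is the target."""
    remaining = candidates - eliminated
    return 1 / remaining

base = hit_chance(10_000, 0)        # 0.0001, i.e. 0.01%
pruned = hit_chance(10_000, 7_500)  # 0.0004, i.e. 0.04%
print(pruned / base)                # → 4.0
```

Eliminating 9,000 flags the same way would make the search ten times easier, which is the scale the paragraph hints at.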

Even as these people are throwing AI around like it is Steven Spielberg’s chance to make a Kubrick movie, in the end it is a new scale and new level of machine learning, a combination of clustered flags and decentralised processing on a level that is not linked to any synchronicity. Part of this solution is not in the future, it was in the past. For that we need to read the original papers by Paul Baran from the early 60’s. I think we pushed forward too fast (a likely involuntary reaction). His concept of packet switching was not taken far enough, because the issues of then are nowhere near the issues of now. Consider raw data as a package where the transmission itself sets the foundation of the data path that is to be created. So basically the package becomes the data entry point of raw data and the mobile phone processes this data on the fly, resetting the data parameters on the fly, giving instant rise to what is unlikely to be a threat (and optionally what is), a setting where 90% could be parsed by the time it gets to the mining point. The interesting side is that the container for processing this could be set in the memory of most mobile phones without installing anything, as it is merely processing parsed data; not a nice solution, but essentially an optional one to get a few hundred thousand mobiles to do in mere minutes what takes a day for most data centres, which merely receive the first-level processed data. Now it gets a lot more interesting: as thousands are near a cell tower, that data keeps on being processed on the fly by supercomputers at rest all over the place.
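The on-the-fly, first-level parsing described above can be sketched as follows. The snippet fields and the filter rule are my own assumptions for illustration, not an actual military or telecom pipeline:

```python
# A hedged sketch: each handset pre-processes raw "snippets" (tower
# connections, IMEI sightings, SIM switches) and forwards only what
# survives a first-level filter, so most data never reaches the mining
# point. Field names and the rule are illustrative assumptions.

def first_level_filter(snippet):
    """Drop obviously uninteresting snippets at the edge."""
    # assumption: frequent SIM switching is the pattern worth flagging
    return snippet.get("sim_switches", 0) >= 2

def process_on_device(snippets):
    """The handset's share of the work: parse, filter, tag, forward."""
    forwarded = []
    for s in snippets:
        if first_level_filter(s):
            s["flag"] = "review"  # mark for the mining point
            forwarded.append(s)
    return forwarded

raw = [{"imei": "A", "sim_switches": 0},
       {"imei": "B", "sim_switches": 3},
       {"imei": "C", "sim_switches": 1}]
print(process_on_device(raw))  # only IMEI "B" travels on
```

In this sketch two of the three records are discarded at the edge, which is the ‘90% parsed before the mining point’ idea in miniature.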

So, we are not, as Drew states, ‘in an AI arms race‘; we are merely in a race to be clever in how we process data, and we need to be clever in how to get these things done a lot faster. The fact that the foundation of that solution is 50 years old and still counts as an optional way of getting things done merely shows the brilliance of those who came before us. You see, that is where the military forgot the lessons of limitations. As we shun the old games like the CBM 64 and applaud the now of Ubisoft, we forget that Ubisoft shows itself to be graphically brilliant, having the resources of 4K cameras, whilst those on the CBM 64 (like Sid Meier) were actually brilliant for getting a workable interface that looked decent, as they had resources that were 0.000076293% of the resources that Ubisoft gets to work with now. I am not here to attack Ubisoft, they are working with the resources available; I am addressing the utter brilliance of people like Sid Meier, David Braben, Richard Garriott, Peter Molyneux and a few others for being able to do what they did with the little they had. It is that simplicity, and the added SETI@Home, where we see the solutions that separate the children from the clever machine learning programmers. It is not about “an algorithm of about 75 lines of Python code “placed inside a larger software-hardware container.”“, it is about where to set the slicer and how to do it whilst no one is able to say it is happening, whilst remaining reliable in what it reports. It is not about a room or a shopping mall with 150 servers walking around the place; it is about the desktop no one notices that is able to keep tabs on those servers merely to keep the shops safe. That is the part that matters. The need for brilliance is shown again in limitations when we realise why SETI@Home was designed.
It stands in direct opposition to the quote “The colonel described the technology available commercially, the state-of-the-art in computer vision, as “frankly … stunning,” thanks to work in the area by researchers and engineers at Stanford University, the University of California-Berkeley, Carnegie Mellon University and Massachusetts Institute of Technology, and a $36 billion investment last year across commercial industry“; the people at SETI had to get clever fast because they did not get access to $36 billion. How many of these players would have remained around if it was 0.36 billion, or even 0.036 billion? Not too many, I reckon; the entire ‘technology available commercially‘ would instantly fall away the moment the optional payoff remains null, void and unavailable. A $36 billion investment implies that those ‘philanthropists’ are expecting a $360 billion payout at some point. Call me a sceptic, but that is how I expect those people to roll.
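That cleverness-under-constraint can be sketched with the SETI@Home model itself: slice a large dataset into small, independent work units and let idle machines chew through them. Everything below is an illustrative stand-in, not SETI’s actual client or protocol:

```python
# Sketch of the SETI@Home idea: a coordinator slices raw samples into
# independent work units; idle clients each process one unit and report
# back. Here the "clients" run sequentially in one process.

def make_work_units(samples, unit_size):
    """Slice the raw sample stream into independent work units."""
    return [samples[i:i + unit_size] for i in range(0, len(samples), unit_size)]

def process_unit(unit, threshold=0.9):
    """Stand-in for the client-side analysis: flag interesting samples."""
    return [s for s in unit if s > threshold]

def coordinate(samples, unit_size=4):
    """Farm the units out and merge the flagged results."""
    flagged = []
    for unit in make_work_units(samples, unit_size):
        flagged.extend(process_unit(unit))
    return flagged

print(coordinate([0.1, 0.95, 0.3, 0.99, 0.2, 0.91, 0.5, 0.05]))
# → [0.95, 0.99, 0.91]
```

The design choice is the point: because every unit is independent, the scheme costs almost nothing to run at the centre; the expensive part of the computation rides on hardware that would otherwise sit idle.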

The final ‘mistake’ that Marine Corps Col. Drew Cukor makes is one that he cannot be blamed for. He forgot that computers should again be taught to rough it out, just like the old computers did. The mistake I am referring to is not an actual mistake; it is more accurately the view, the missed perception he unintentionally has. The quote I am referring to is “Before deploying algorithms to combat zones, Cukor said, “you’ve got to have your data ready and you’ve got to prepare and you need the computational infrastructure for training.”“. He is not stating anything incorrect or illogical, he is merely wrong. You see, we need to revisit the old days, the days of the mainframe. I got treated in the early 80’s to an ‘event’. You see, a ‘box’ was delivered. It was the size of an A3 flatbed scanner, it had the weight of a small office safe (rather weighty that fucker was) and it looked like a print board on a metal box with a starter engine on top. It was pricey, like a middle-class car. It was a 100Mb Winchester Drive. Yes, 100Mb, the mere size of 4 iPhone X photographs. In those days data was super expensive, so the users and designers had to be really clever about data. That time is needed again, not because we have no storage, we have loads of it. We have to get clever again because there is too much data and we have to filter through too much of it; we need to get better fast, because 5G is less than 2 years away and we will drown by that time in all that raw untested data. We need to reset our views and comprehend how the old ways of data worked, and prevent exabytes of junk per hour slowing us down; we need to redefine how tags can be used to set different markers, different levels of records. The old ways of hierarchical data were too cumbersome, but they were fast. The same is seen with B-tree data (a really antiquated database approach), instantly discarding 50% of the data in every iteration.
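That ‘discard 50% in every iteration’ property can be shown with plain binary search over sorted keys; a B-tree generalises the same halving to disk pages, but the sketch below keeps it in memory:

```python
# Binary search: every comparison discards half of the remaining keys,
# which is the halving property attributed to B-tree lookups above.

def binary_search(keys, target):
    """Return (index, comparisons) for target in sorted keys."""
    lo, hi = 0, len(keys) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if keys[mid] == target:
            return mid, steps
        if keys[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

keys = list(range(1_000_000))
idx, steps = binary_search(keys, 777_777)
print(idx, steps)  # the key is found in at most ~20 comparisons
```

A million keys take at most 20 halvings, which is why the ‘antiquated’ approach was fast; the hierarchical layouts were cumbersome to maintain, not slow to search.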
In this, machine learning could be the key, and the next person that comes up with that data solution would surpass the wealth of Mark Zuckerberg pretty much overnight. Data systems need to stop being ‘static’; a data system needs to be fluid and dynamic, one that evolves as data is added. Not because that is cleverer, but because the amounts of data we need to get through are growing near-exponentially per hour. It is there that we see that Google has a very good reason to be involved, not because of the song ‘Here come the drones‘, but because this level of data evolution is pushed upon nearly all, and getting in the thick of things is how one remains top dog; Google is very much about being top dog in that race. As it is servicing the ‘needs’ of billions, its own data centres will require loads of evolution; the old ways are getting closer and closer to becoming obsolete, and Google needs to be ahead before that happens. Of course, when that happens IBM will send a clear memo that they have been on top of it for years, whilst trying to figure out how best to present the delays they are currently facing.
 


Filed under IT, Media, Military, Science

Retaining stupidity

This is the very first thought I had when I saw “Artificial intelligence commission needed to predict impact says CBI“. Within half a second my mind went into time travel mode, back to the late 70’s when all the unions were up in arms about computers. The computers would end labour, all those jobs lost. This is not a new subject, as the magazine Elsevier showed us in 2015 with “Angst voor nieuwe technologie is zo oud als de industriële revolutie zelf. Diverse commentatoren refereerden de afgelopen tijd aan de luddieten, genoemd naar een Engelse wever die eind achttiende eeuw machines zou hebben gesaboteerd omdat die banen vernietigden“ (“Fear of new technology is as old as the industrial revolution itself. Several commentators recently referred to the Luddites, named after an English weaver who allegedly sabotaged machines at the end of the 18th century because they destroyed jobs“). There is a partial truth here. You see, it is not about the loss of jobs; it is the mere fact that some of these business groups will soon truly show themselves to be obsolete. In this they rely on a firm whose largest achievement is (as I personally see it) to remain silent on overstated profits whilst not having to go to court, or to jail for that matter (read: PricewaterhouseCoopers). So by engaging this party they have already lost their case, as I personally see it. So when we see “Accountancy firm PwC warned in March that more than 10 million workers may be at risk of being replaced by automation“, with the offset we needed in the past (read: Tesco) the damage might merely be a few hundred people. So I do not deny that some jobs will go, yet consider the automation wave that computers brought from the 80’s onwards: that same industry would give jobs and infrastructure to thousands, livening up an industry we could not conceive at that time. The same happened in the 18th century when the looms and weavers grew, the blossoming of a textile industry on a global scale.
So when you see “The business lobby group said almost half of firms were planning to devote resources to AI, while one in five had already invested in the technology in the past year“, you are looking at what I would call a flim-flam statement. You see, perhaps the more accurate statement might be: “The business lobby group stated that 50% of the firms are moving away from the facilitation that the business group provides for“. So these firms are pushing in another direction; why give credence to their flawed way of thinking? You see, this is the consequence of greed-driven executives who rely on the status quo; they ran out of time and they need extra time to get their upgraded pensions in play. Why should we allow them to continue at all?

I am willing to give the TUC a small consideration because of their heritage. Yet when we see in the Financial Times (September 11th) “Frances O’Grady, the general secretary of the Trades Union Congress, said the government was hurtling towards a “kamikaze Brexit” and should keep open the option of remaining in the single market” (at https://www.ft.com/content/c5f7afb8-9641-11e7-b83c-9588e51488a0), there is overwhelming evidence presented from all sides (both positive and negative, mind you) that the single market only benefits the large corporations; the small companies are merely disadvantaged by it. As such we must wonder where the loyalty of the TUC lies. If the TUC is there for the large corporations, or to serve them first, we see another piece of evidence that shows the TUC to be redundant, and as they merely vie for the large corporations as their main priority, the fear of those companies becomes the fear of the TUC, and as such they are becoming equally obsolete. The Trades Union Congress (TUC), a national trade union centre, should show clear cause with all the data, not merely the aggregated data results of a data scientist at PwC. So when I see “the CBI is urging Theresa May to launch the commission from early 2018. It said companies and trade unions should be involved and the commission should help to set out ways to increase productivity and economic growth as well looking into the impact of AI.“, who is going to pay for all that? I submit that the trade unions pay their own way and ask their members for the needed funds. What are the chances of that? The poisonous part is seen in ‘set out ways to increase productivity and economic growth‘. You see, AI will do that to some extent on several paths, yet it is not up to the government to figure that out or to set debilitating fences there. It is up to the business sector to figure out where that profit is. That is why they are in business!
You see, as I see it, the drive to remain in some level of status quo was nice until it ended. These companies have driven away the people who wanted to innovate, and those people are now in start-ups, or in companies that embraced innovation. The older, larger players are now without skills to a larger extent, without drive through misdirected use of funds, and lacking ambition, so they are going to get hit in all three ways when the driver comes. 5G will be a first, and when AI does happen (it is still years away from being anything truly practical), these two paths will drive new methods of automation and data gathering. But the larger players wanted to milk their 4G base as much as possible, setting up side channels with smaller players like Orange, DODO, TPG, Tesco and giffgaff. Now that they are learning that 5G will be a larger wave than some academics presented (likely at the expense of some placement), we see the panic wave that follows. Now we see the need for commissions to slow things down so that the milkers can catch up. In my view there are clear reasons that such paths should not be allowed to exist.

That is my supported view, it has been supported by other articles and I have written about these events for close to two years now. Now that the party is over, we see players trying to change the game so that they can continue just a little longer. We allowed for these matters in 2004 and 2008, it is time for the governments to give a clear signal that change will come and stopping it should not be allowed, not until they alter the tax laws, the laws on accountability and the powers of prosecution to have a better grasp at these players, a change that must happen before we allow any level of catering to their needs.

By the way, consider ‘PwC placed under investigation following BT accountancy scandal‘ (at http://www.independent.co.uk/news/business/news/pwc-investigation-bt-accountancy-scandal-italian-operations-pricewaterhousecoopers-a7813726.html), as well as the Fortune.com issue (at http://fortune.com/2017/02/28/pricewaterhousecoopers-pwc-scandals-oscars/), where we see the five larger issues at PwC, which include the previously mentioned Tesco, but now with Tyco, Taylor Bean & Whitaker, Bank of Tokyo-Mitsubishi and MF Global added. So, as I have been on the prosecuting tank, ready to roll it over the board of directors of PwC regarding Tesco, I have little faith in whatever they want to report on now; unless it comes with all the data for the public at large to scrutinise, they should not get close to any commission, and even less be part of the reporting. Now, one could call it irresponsible to use 5 bad apples to judge someone who ships containers of fruit, and that would be a valid response and defence. Yet overall, the players asking for the commission seem to put their own needs first in all this. There would have been a consideration if Google or the Alphabet group were given a part in all this, yet that mention is missing and therefore the setting is void. Now, there are more players in the AI field, but it seems that the Google headway is the strongest, the largest and at present the fastest. And with a sense of humour I will add that you merely have to ‘Bing‘ the search ‘AI Commission‘ to see that Microsoft is in no danger of getting anywhere near an AI this upcoming decade. Perhaps the mention of ‘Australian Securities and Investments Commission – Official Site‘ in position 2 and ‘Fair Work Commission | Australia’s national workplace …‘ in position 5 suffices to realise that their AI could be sunk in 13 keystrokes. The power of assumption will kill anything, including one’s sense of humour and that same person’s appetite.

Yet is there more?

Yes, there most certainly is. You see, with “Investment in technology could help bolster Britain’s sputtering record on labour productivity, which is among the worst in the G7 and is failing to improve in line with expectations since the financial crisis” we see part of the fear being spread. The ‘milkers’, as I prefer to call some of them, are realising that having space and capital for growth was essential to remain in the game. Some of the milkers are ending up being too visible, and plenty of consumers are moving to a place where they can get a better deal. That was seen in Australia in June, as ABC News gave the bad news that Telstra had to shed 1,400 jobs. We see all kinds of excuses, yet the reality was that for well over 5 years they were too expensive, not by a margin, but by being up to 300% more expensive than a decent alternative. I have had personal experience in a Telstra shop: because I was not a potential business account, the salesperson had no time for me. Do you think that a company like that can remain in existence? Over the last 3 years the shares dropped from $6.61 to $3.52; that is pain that a company feels, yet they remain ignorant and blind to the consequences. That view is enhanced even further by the statements given in the Sydney Morning Herald. With “Our approach [to 5G] is to get in earlier and try to have it modified so it’s more suitable to Australia when it arrives, rather than us have to try to modify it when it gets here,” Mr Wright told BusinessDay.“, there is every chance that Australian 5G will be undercut by some level of standard that is not as given in the 5G handbook. As I personally see it, Telstra’s approach is to set a standard that is no standard at all: a ‘get in first so that we can tell others what the standard is‘, or better stated, a standard that you are not adhering to; 3.5G for your mobile, anyone?

This Australian view translates to the UK as well. With “Despite the potential for technology to increase productivity, firms are cautious about investing owing to uncertainty over Brexit. Growth in business investment was flat in the three months to June, the latest official figures show“, we see that these business types are not willing to invest; they merely want the single market side to go on, and in light of the delays needed, they want a commission, so that they can force government investment and delays. So they can get the best of both worlds. The (as I personally see it) exploitative model is continued in every venue we see emerge, and as I see it, it will be much better for us if those business models and business players go; they should go now, before they become a detrimental force on UK industries. 5G will be a new beacon of industry and progress; it will open up additional venues for many telecom players, and as such we are all better off getting on board now and thinking of that one idea we had that could work for us all. It equally holds the solutions the NHS desperately needs, and the fact that 3 larger players still haven’t seen that light is a larger worry than anything else. It merely shows them to be obsolete, dinosaurs in a modern age. As one person told me, the reason the T-Rex is such an angry creature is because its arms are too short to take a selfie. That does make sense, especially when you consider what some of these players think when they think 5G: they merely look at speed, whilst 5G opens up so much more than merely a quick download of a movie. In all this, AI could break the moulds and give us something that even I cannot envision, which is actually a really good thing. You see, the new waves will come from people that are different from me; they are the dreamers, like the game designers in the early 80’s. They will show vision and give us something we never considered before.
That is true progress, and the people who bring us weighted predictions and tell us to fear 20% of all jobs lost need to do what they were meant to do: die out and become extinct, just like the dinosaurs before them, and soon thereafter I will become extinct too. That is the nature of future evolution. Just like my grandfather, who could not comprehend the electronic calculator. I am clever enough to comprehend quantum computing, yet I hope I cannot comprehend what comes after, because if I can remain on board at that point, we have all become technologically stagnant and merely move backwards; that too is a personal view I have.

 


Filed under Finance, IT, Media, Politics, Science

A legislative system shock

Today the Guardian brings us the news regarding the new legislation on personal data. The interesting part starts with the image of Google and not Microsoft, which is a first item in all this. I will get back to this. The info we get with ‘New legislation will give people right to force online traders and social media to delete personal data and will comply with EU data protection‘ is actually something of a joke, but I will get back to that too. You see, it is the caption with the image that should have been at the top of all this. With “New legislation will be even tougher than the ‘right to be forgotten’ allowing people to ask search engines to take down links to news items about their lives“, we get to ask the question: who is the protection actually for?

The newspaper gives us this: “However, the measures appear to have been toughened since then, as the legislation will give people the right to have all their personal data deleted by companies, not just social media content relating to the time before they turned 18“, yet the reality is that this merely enables new facilitation for data providers to keep a third-party backup of the data. As I personally see it, the people in all this will merely be chasing a phantom wave.

We see the self-assured Matt Hancock standing there in the image, and in all this I see no reason to claim that these laws will be the most robust set of data laws at all. They might be more pronounced, yet in all this I question how facilitation is dealt with. With “Elizabeth Denham, the information commissioner, said data handlers would be made more accountable for the data “with the priority on personal privacy rights” under the new laws“, you see, the viewer will always respond in the aftermath, meaning that the data has already been created.

We can laugh at the statement “The definition of “personal data” will also be expanded to include IP addresses, internet cookies and DNA, while there will also be new criminal offences to stop companies intentionally or recklessly allowing people to be identified from anonymous personal data“; it is laughable because it merely opens up venues for data farms in the US and Asia, whilst diminishing the value of UK and European data farms. The mention of ‘include IP addresses‘ is funny, as the bulk of the people on the internet are all on dynamic IP addresses; it is a protection for large corporations that are on static addresses. The mention of ‘stop companies intentionally or recklessly allowing people to be identified from anonymous personal data‘ is an issue, as intent must be shown and proven; recklessness is something that needs to be proven as well, and not on the balance of probabilities, but beyond all reasonable doubt, so good luck with that idea!

As I read “The main aim of the legislation will be to ensure that data can continue to flow freely between the UK and EU countries after Brexit, when Britain will be classed as a third-party country. Under the EU’s data protection framework, personal data can only be transferred to a third country where an adequate level of protection is guaranteed“, I wonder: is this another twist in anti-Brexit? You see, none of this shows a clear ‘adequate level of protection‘, which tends to stem from technology, not from legislation; the fact that all this legislation is about ‘after the event‘ gives rise to all this. So as I see it, the gem is at the end, when we see “the EU committee of the House of Lords has warned that there will need to be transitional arrangements covering personal information to secure uninterrupted flows of data“; it makes me wonder what those actual ‘transitional arrangements‘ are, and how it is that the new legislation covers policy on this.

You see, to dig a little deeper we need to look at Nielsen. There was an article last year (at http://www.nielsen.com/au/en/insights/news/2016/uncommon-sense-the-big-data-warehouse.html), here we see: “just as it reached maturity, the enterprise data warehouse died, laid low by a combination of big data and the cloud“, you might not realise this, but it is actually a little more important than most realise. It is partially seen in the statement “Enterprise decision-making is increasingly reliant on data from outside the enterprise: both from traditional partners and “born in the cloud” companies, such as Twitter and Facebook, as well as brokers of cloud-hosted utility datasets, such as weather and econometrics. Meanwhile, businesses are migrating their own internal systems and data to cloud services“.

You see, the actual danger in all that personal data is not the ‘privacy’ part, it is the utilities in our daily lives that are under attack. Insurance, health protection, they are all set to premiums and econometrics. These data farms are all about finding the right margins, and the more they know, the less you get to work with, and they (read: their data) will happily move to wherever the cloud takes them. In all this, the strong legislation merely transports data. You see, the cloud has transformed data in one other way, the part Cisco could not cover. The cloud has the ability to move and work with ‘data in motion’, a concept that legislation has no way of coping with. The power (read: 8-figure value of a data utility) lies in being able to do that, and the parties needing that data, personalised, are willing to pay through the nose for it; it is the holy grail of any secure cloud environment. I was actually relieved that it was not merely me looking at that part; another blog (at https://digitalguardian.com/blog/data-protection-data-in-transit-vs-data-at-rest) gives us the story from Nate Lord. He gives us a few definitions that are really nice to read, yet the part that he did not touch on to the degree I hoped for is that the new grail, the analysis of data in transit (read: in motion), is a cutting-edge application; it is what the Pentagon wants, it is what the industry wants and it is what the facilitators want. It is a different approach to real-time analysis, and with analysis in transit those people get an edge, an edge we all want.

Let’s give you another clear example that shows the value (and the futility of legislation). Traders profit by being first; that is the start of real wealth. So whoever has the fastest connection gets the cream of the trade, which is why trade houses pay millions upon millions to get the best of the best. The difference between 5ms and 3ms results in billions of profit; everyone in that industry knows that. So every firm has a Bloomberg terminal (at $27,000 per terminal). Now consider the option that they could get you that data a millisecond faster and the automated scripts could therefore beat the wave of sales, giving them a much better price; how much are they willing to pay suddenly? This is a different level of arms race; it is weaponised data. The issue is not merely the speed; it is the cutting edge of being able to do it at all.
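To make the latency argument concrete, here is a minimal sketch with entirely invented numbers (a $100.00 quote drifting $0.01 per millisecond in flight, and an assumed 50 million shares a day); none of these figures come from Bloomberg or any trade house, they merely show how a 2ms head start compounds:

```python
# Hypothetical sketch with invented numbers: two firms receive the same
# quote, but their network latencies differ. While an order is in flight
# the quoted price drifts toward the 'true' price, so the slower firm
# fills at a worse price.

def fill_price(latency_ms, quote=100.00, drift_per_ms=0.01):
    """Price actually obtained after the order spends latency_ms in flight."""
    return quote + latency_ms * drift_per_ms

fast = fill_price(3)                         # firm with a 3ms connection
slow = fill_price(5)                         # firm with a 5ms connection
edge_per_share = slow - fast                 # ~$0.02 advantage per share
edge_per_day = edge_per_share * 50_000_000   # at an assumed 50M shares/day

print(f"fast fill: {fast:.2f}, slow fill: {slow:.2f}")
print(f"edge: {edge_per_share:.2f}/share -> ${edge_per_day:,.0f}/day")
```

Even with these toy parameters the 2ms gap is worth seven figures a day, which is why nobody in that industry blinks at the price of the hardware.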

So how does this relate?

I am taking you back to the quote “it would amount to a “right to be forgotten” by companies, which will no longer be able to get limitless use of people’s data simply through default “tick boxes” online” as well as “the legislation will give people the right to have all their personal data deleted by companies“. The issue here is not to be forgotten, or to be deleted. It is about the data not being stored, and data in motion is not stored, which now shows the futility of the legislation to some extent. You might think that this is BS; consider the quote by IBM (at https://www.ibm.com/developerworks/community/blogs/5things/entry/5_things_to_know_about_big_data_in_motion?lang=en). It comes from 2013; IBM was already looking at these matters close to 5 years ago, as were all the large players like Google and Microsoft. With: “data in motion is the process of analysing data on the fly without storing it. Some big data sources feed data unceasingly in real time. Systems to analyse this data include IBM Streams “, here we get part of it. Now consider: “IBM Streams is installed on nearly every continent in the world. Here are just a few of the locations of IBM Streams, and more are being added each year“. In 2010 there were 90 streams on 6 continents, and IBM Streams is not the only solution. As you read that IBM article, you also read that Real-time Analytic Processing (RTAP) is a real thing; it already was then, and the legislation that we now read about does not take care of this form of data processing. What the legislation does, in my view, is not give you any protection; it merely limits the players in the field. It only lets the really big boys play with your details. So when you see the reference to the Bloomberg terminal, do you actually think that you are not part of the data, or ever forgotten? EVERY large newspaper and news outlet would be willing to pay well over $127,000 a year to get that data on their monitors.
Let’s call them Reuter Analytic Systems (read: my speculated name for it), which gets them a true representation of all personalised analytical and reportable data in motion. So when they type the name they need, they will get every detail. In this light, the events ITPRO gave us 3 weeks ago (at http://www.itpro.co.uk/strategy/29082/ecj-may-extend-right-to-be-forgotten-ruling-outside-the-eu) sound nice, and the quote “Now, as reported by the Guardian, the ECJ will be asked to be more specific with its initial ruling and state whether sites have to delete links only in the country that requests it, or whether it’s in the EU or globally” sounds like it is the real deal, yet this is about data at rest; the links are all at rest, so the data itself will remain, and as soon as HTML6 comes we might see the beginning of the change. There have been requests on that with “This is the single-page app web design pattern. Everyone’s into it because the responsiveness is so much better than loading a full page – 10-50ms with a clean API load vs. 300-1500ms for a full HTML page load. My goal would be a high-speed responsive web experience without having to load JavaScript“, as well as “having the browser internally load the data into a new data structure, and the browser then replaces DOM elements with whatever data that was loaded as needed“. It is not mere speed; it would allow for dynamic data (data in motion) to be shown. So when I read ‘UK citizens to get more rights over personal data under new laws‘, I just laughed. The article is 15 hours old and I instantly considered the issues I showed you today. I will have to wait until the legislation is released, yet I am willing to bet a quality bottle of XO Cognac that data in motion is not part of it; better stated, it will be about stored data. All this whilst the new data norm is still shifting, and with 5G mobile technologies, stored data might actually phase out to be a much smaller dimension of data.
The larger players knew this and have been preparing for it for several years now. It is also an initial new need for the AI that Google so desperately wants, because such a system could ascertain and give weight to all data in motion, something IBM is currently not able to do to the extent needed.
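To show what ‘analysing data on the fly without storing it’ actually looks like, here is a minimal, generic sketch of RTAP-style processing (this is my own illustration, not IBM Streams code): a statistic is updated as each record flies past, and the records themselves are never retained, so a later ‘right to be deleted’ request has nothing left to act on.

```python
# Generic sketch of 'data in motion' analysis: the running statistic is
# folded in record by record; no record is ever stored, so nothing can
# be deleted after the fact. Not IBM Streams code, merely the principle.

def streaming_mean():
    """Generator that maintains a running mean without storing any record."""
    count, mean = 0, 0.0
    while True:
        value = yield mean               # emit current mean, await next record
        count += 1
        mean += (value - mean) / count   # incremental update, O(1) memory

stream = streaming_mean()
next(stream)                             # prime the generator

for reading in (4.0, 8.0, 6.0):          # records arrive, are folded in, and are gone
    current = stream.send(reading)

print(f"running mean: {current}")
```

The point of the design is the O(1) memory: the analysis keeps only the aggregate, which is exactly why legislation written around stored personal data never touches it.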

The system is about to get shocked into a largely new format, that has always been the case with evolution. It is just that actual data evolution is a rare thing. It merely shows to me how much legislation is behind on all this, perhaps I will be proven wrong after the summer recess. It would be a really interesting surprise if that were the case, but I doubt that will happen. You can see (read about that) for yourself after the recess.

I will follow up on this, whether I was right or wrong!

I’ll let you speculate which of the two I am, as history has proven me right on technology matters every single time (a small final statement to boost my own ego).

 

