Tag Archives: AI

Lying through Hypes

I was thinking about a Huawei claim that I saw (in the image). The headline 'AI's growing influence on the economy' sounds nice, yet AI does not exist at present, not True AI, or perhaps better stated Real AI. At the very least two elements of AI are missing, so whatever it is, it is not AI. Is that an indication of just how bad the economy is? Well, that is up for debate, but what is more pressing is that the industry is proclaiming AI and cashing in on something that is not AI at all.

Yet when we look at the media, we are almost literally pelted to death with AI statements. So what is going on? Am I wrong?

No! 

Or at least that is my take on the matter. I believe that we are getting close to near AI, but what the hype and marketing proclaim to be AI is not AI. You see, if there were real AI we would not see articles like 'This AI is a perpetual loser at Othello, and players love it', where we are handed "The free game, aptly called "The weakest AI Othello," was released four months ago and has faced off against more than 400,000 humans, racking up a paltry 4,000 wins and staggering 1.29 million losses as of late November". This is weird, because when we look at SAS (a data firm) we see: "Artificial intelligence (AI) makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks", which is an actual part of an actual AI. So the fact that the earlier mentioned 400,000 players racked up 1.29 million wins whilst the system merely won 4,000 times shows that it is not learning, and as such it cannot be an AI. A slightly altered SAS statement would be "Most AI examples rely heavily on deep learning and natural language processing. Using these technologies, computers can be trained to accomplish specific tasks by processing large amounts of data and recognizing patterns in the data". The SAS page (at https://www.sas.com/en_au/insights/analytics/what-is-artificial-intelligence.html) also gives us the image where they state that today AI is seen as 'Deep Learning', which is not the same.
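To make that SAS distinction concrete, here is a minimal sketch (in Python, purely illustrative and emphatically not the Othello bot's actual code) of the difference between a system that merely executes its programming and one that adjusts to new inputs; only the second one does the 'learn from experience' part that the SAS definition demands.

```python
import random

# Hypothetical sketch: a fixed-policy player never changes its behaviour,
# while a "learning" player keeps a simple win/loss tally per move and
# starts preferring moves that worked before.

class FixedPolicyPlayer:
    def choose(self, legal_moves):
        # identical behaviour after 1 game or 1,290,000 games
        return random.choice(legal_moves)

class LearningPlayer:
    def __init__(self):
        self.score = {}                      # move -> running win/loss tally

    def choose(self, legal_moves):
        # mostly pick the move with the best recorded outcome, explore occasionally
        if random.random() < 0.1:
            return random.choice(legal_moves)
        return max(legal_moves, key=lambda m: self.score.get(m, 0))

    def feedback(self, move, won):
        # this update step is the "adjust to new inputs" part
        self.score[move] = self.score.get(move, 0) + (1 if won else -1)
```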

It is fraught with a dangerous situation: the so-called AI depends on human programming and cannot really learn, merely adapt to programming. SAS itself actually acknowledges this with the statement "Quick, watch this video to understand the relationship between AI and machine learning. You'll see how these two technologies work, with examples"; they are optionally two sides of a coin, but not the same coin, if that makes sense. So in that view the statement of Huawei makes no sense at all: how can an option influence an economy when it does not exist? Well, we could hide behind the lack of growth because it does not exist. Yet that is also the stage that planes find themselves in as they are not equipped with advanced fusion drives; it comes down to the same problem (one element is most likely on Jupiter and the other one is not in our solar system). We can seek advanced fusion as much as we want, but the elements required are not in our grasp, and just like that, AI is shy a few elements, so whatever we call AI is merely something that is not really AI. It is cheap marketing for a generation that did not look beyond the term.

The Verge (at https://www.theverge.com/2019/1/28/18197520/ai-artificial-intelligence-machine-learning-computational-science) had a nice summary. I particularly liked (slightly altered) "the Oral-B's Genius X toothbrush that touted supposed "AI" abilities. But dig past the top line of the press release, and all this means is that it gives pretty simple feedback about whether you're brushing your teeth for the right amount of time and in the right places. There are some clever sensors involved to work out where in your mouth the brush is, but calling it artificial intelligence is gibberish, nothing more". We can see this as the misuse of the term AI, and we are handed thousands of terms every day that misuse AI, most of it via short messages on social media. A few lines later we see the Verge giving us "It's better, then, to talk about "machine learning" rather than AI", and it is followed by perhaps one of the most brilliant statements: "Machine learning systems can't explain their thinking". It is perhaps the clearest night versus day issue that any AI system would face, and all these AI systems that are supposedly growing any economy aren't, and the world (more likely the greed driven entities) cannot grow in any direction in this. They are all hindered by what marketing states it needs to be, whilst marketing is clueless on what they face, or perhaps they are hoping that the people remain clueless on what they present.

So as the Verge ends with "In the here and now, artificial intelligence — machine learning — is still something new that often goes unexplained or under-examined", we see the nucleus of the matter: we are not asking questions and we are all accepting what the media and its connected marketing outlets are giving us, even when we make the noticeable jump that there is no AI, merely machine learning and deeper learning, whilst we entertain the Verge examples "How clever is a book?" and "What expertise is encoded in a frying pan?"

We need to think things through (the current proclaimed AI systems certainly won't). We are back in the 90's where concept sellers are trying to fill their pockets, all whilst we perfectly well know (through applied common sense) that what they are selling is a concept, and no concept will fuel an economy; that is a truth that came and stood up when a certain Barnum had his circus and hid behind well chosen marketing. So whenever you get some implementation of AI on LinkedIn or Facebook you are being lied to (basically you are being marketed) or pushed into whatever direction such articles attempt to push you in.

That is merely my view on the matter and you are very welcome to get your own view on the matter as well, I merely hope that you will look at the right academic papers to show you what is real and what is the figment of someone’s imagination. 

 


Filed under IT, Media, Science

Tethered to the bottom of the ocean

Perhaps you remember a 1997 movie about a ship that decided to take a fast trip to America, the RMS Titanic. We all have our moments and what you might not know is that there is a deleted scene that only a few limited editions had. The captain (played by Bernard Hill) was asked a question by one of the passengers: 'Is land far away?' The response was: 'No, it is only 3900 yards to the nearest land………straight down'. OK, that did not really happen, but it does sound funny. You see, the image of a place can be anything we need it to be, dimensionality is everything and that is where we see the larger problem.

This is actually directly linked to the article I wrote on September 18th; the article 'The Lie of AI' gets another chapter, one that I actually saw coming, the factors at least, but not to the degree the Guardian exposes. In the article (at https://lawlordtobe.com/2019/09/18/the-lie-of-ai/) I gave you: "more importantly it will be performing for the wrong reasons on wrong data making the learning process faulty and flawed to a larger degree". Now we see (at https://www.theguardian.com/society/2019/sep/19/thousands-of-reports-inaccurately-recorded-by-police), a mere 8 hours ago, 'Thousands of rape reports inaccurately recorded by police', so we are not talking about a few wrong reports, because that will always happen; no, we are talking about THOUSANDS of reports that lack almost every level of accuracy. When we consider the hornets' nest the Guardian gives us with: "Thousands of reports of rape allegations have been inaccurately recorded by the police over the past three years and in some cases never appeared in official figures", Sajid Javid is now facing more than a tough crowd; there is now the implied level of stupid regarding technology pushes whilst the foundations of what is required cannot be met, and yes, I know that he is the Chancellor of the Exchequer. It is not that simple, and the simplicity is not seen in the quote: "More than one in 10 audited rape reports were found to be incorrect"; the underlying data is therefore more than unreliable, it basically has become useless. This is a larger IT problem. It is not merely that the police cannot do their job; anything linked to this was wrongfully examined, optionally innocent people were investigated (which is not the worst part), the worst part is that the police force has a resource issue and there is now the consideration that the lack of resources has also been going in the wrong direction. The failing becomes a larger issue when we see: "The data also found that a number of forces failed to improve in subsequent inspections, with some getting worse"; the failing pushed on from operational to systemic. Now consider IT, the laughingly hilarious step of AI, even the upgrades to existing systems that cannot be met in any way because the data is flawed on several levels. It is a larger issue that of the national police forces in this regard only Cumbria, Sussex and Staffordshire passed the bar, a mere 3 out of 36 forces did their job (above a certain level), and it gets worse when you consider that this is merely the investigation into the sexual assault section; the matter could actually be a lot worse. Consider the Guardian article in July 'Police trials of facial recognition backed by home secretary' (at https://www.theguardian.com/uk-news/2019/jul/12/police-trials-facial-recognition-home-secretary-sajid-javid-technology-human-rights), as well as 'UK police use of facial recognition technology a failure, says report' from May 2018 (at https://www.theguardian.com/uk-news/2018/may/15/uk-police-use-of-facial-recognition-technology-failure). You might not have made the link, but I certainly did.
When you take the quote: "Police attempts to use cameras linked to databases to recognise people from their face are failing, with the wrong person picked out nine times out of 10, a report claims", now consider that a victim reported the assault on her, a report is made, at some point the evidence is regarded and looked over, the information is linked to CCTV data and now we are off to the races. Whilst 3 out of 36 forces did it right, there is now a stage where 91% are looking at the wrong information, inaccurate information, and add to that the danger of merely 10% getting properly identified. Even if the right person was picked out, there is still a well over 75% chance that the investigation is going in the wrong direction and optionally an innocent person gets investigated and screened; in the meantime the criminal is safe to do what he wanted all along.
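As a rough illustration of how these error rates compound (the two figures are the ones quoted above; the independence assumption and the arithmetic are mine, not the Guardian's):

```python
# Illustrative only: the real probabilities are unknown and these two error
# rates are not necessarily independent.
p_face_match_correct = 0.10   # "wrong person picked out nine times out of 10"
p_record_accurate    = 0.90   # "more than one in 10 audited rape reports ... incorrect"

p_both_correct = p_face_match_correct * p_record_accurate
print(f"Chance the match AND the underlying record are both right: {p_both_correct:.0%}")
# -> roughly 9%, so in this toy model the investigation starts from sound
#    information less than one time in ten.
```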

Now we get the good stuff: in 2018, home secretary Sajid Javid gave his approval, and now that he is the Chancellor of the Exchequer, he approves the invoice and also sets the stage of handing out £30 million to a system that cannot function, in a system that is based on cogs that were not accurate and are transposing the wrong data. Even then we see "the BBC reported that Javid supported the trials at the launch of computer technology aimed at helping police fight online child abuse". A system this inaccurate, not merely because of its flawed technology, is set in a stage where the offered data is not accurate either; this simply implies that until the systemic failure is fixed the new system can never function, and it will take well over a year to fix the systemic failure. So tell me, what do you normally do to a person who is knowingly and willingly handing over £30 million to a plan that has no chance of success?

We need to stop politicians from wasting this level of resources and funds merely to look good in the eyes of big business. I also feel that it is appropriate that Sajid Javid be held personally accountable for spending funds that would never be deployed correctly.

The reasoning here is seen in the quote "Recorded rape has more than doubled since 2013-14 to 58,657 cases in 2018-19. However, police are referring fewer cases for prosecution and the CPS is charging, prosecuting and winning fewer cases. The number of cases resulting in a conviction is lower than it was more than a decade ago". The stage is twofold: we see a doubling over 5 years whilst convictions are down from more than a decade ago; in the end it links to a conviction rate built on data, whilst the data numbers are not reliable. The quotes "the case was not recorded as a crime", as well as "noting it as an incident", show that in both cases rape was registered as something else, and there is no conviction required on an 'incident'. The underlying question is whether this lack is optionally intentional, to skew the statistics. You might not agree and it might not be true, but when we see a 91% failing from the police force there is something really wrong. The problem intensifies when we see the Guardian statement that "West Midlands was found to be 'of concern' and had 'not improved' rape recording upon re-inspection in 2018"; this implies that the work of Her Majesty's Inspectorate of Constabulary and Fire and Rescue Services (HMICFRS) is either not taken seriously or is intentionally ignored, you tell me which of the two it is. Connected to this is Sajid Javid, ready to 'upgrade' to AI (that remains funny) and spend over £30 million on that system, as well as the funds wasted on the current CCTV facial recognition solution, which is not cheap either.

I wonder who the CCTV will point to arrest as the person allegedly having sex on the desk of Terry Walker, Lord Mayor of North East Lincolnshire. Images show that the local police might be seeing Noel Gallagher as a person of interest at present.

I wonder how that data was acquired?

In opposition

There is however the other side, and even as I did not give it the illumination, there was no intent to ignore it. The option of 'AI to reduce the burden on child abuse investigators' is not to be ignored; it must be the task that will burn out a person a lot faster than transporting bottles of nitro-glycerin by hand through a busy marketplace. I am not insensitive to this, yet the Police Professional gives us: "The development will cost £1.76 million from a total investment in the CAID from the Home Office of £8.2 million this year", which is different from the £30 million given, so as I see it additional questions come to the foreground now. Yet there are other issues that are not part of this. There is the danger of misreading (and incorrectly acting on) seeded data. In SIGINT we see the part where data fields are used to misrepresent information (like camera model, owner, serial number); when we start looking in the wrong direction, even if some of the data might be correct, you are in a different phase, and the problem is that no AI can tell you whether a camera serial number is wrong or right. There are larger data concerns, yet I do understand that some tasks can alleviate stress from the police; still, when we link this to the lack of accuracy in police data, the task remains equal to mopping the floor whilst the tap is running, spilling water onto the floor. None of these steps make sense until the operational procedures are cleared, tested and upgraded. A failing rate of 91% (33 out of 36) makes that an absolute given.

And for those who missed the Gallagher joke, please feel free to watch the movie The Brothers Grimsby. There are actually two additional paths that are an issue: it is not about presentation, it is about interpretation, as well as the insight of sliced data. They interact, and as such a lot of metrics will go wrong and remain incorrect and inaccurate for some time to come. Data will get interpreted and optionally acted on, which becomes a non-option when accuracy is below a certain value. So feel free to be anchored to the ground in the approach to data surveillance employing AI (I am still laughing about that part), yet when you are tethered to the bottom of the ocean, how will you get a moment to catch your breath?

Precisely, you won’t!

 


Filed under Finance, IT, Media, Politics, Science

The Lie of AI

The UK Home Office has just announced plans to protect paedophiles for well over a decade and they are paying millions to make it happen. Are you offended yet? You should be. The article (at https://www.theguardian.com/technology/2019/sep/17/home-office-artificial-intelligence-ai-dark-web-child-sexual-exploitation) is giving you that, yet you do not realise that they are doing that. The first part is 'Money will go towards testing tools including voice analysis on child abuse image database', the second part is "Artificial intelligence could be used to help catch paedophiles operating on the dark web, the Home Office has announced"; these two are the guiding parts in this, and you did not even know it. To be able to understand this there are two parts. The first is an excellent article in the Verge (at https://www.theverge.com/2019/1/28/18197520/ai-artificial-intelligence-machine-learning-computational-science), the second part is: 'AI does not exist!'

The important fact is that AI will become a reality at some point, in perhaps a decade, yet the two elements making AI essential have not been completed. The first is quantum computing; IBM is working on it, and they admit: "For problems above a certain size and complexity, we don't have enough computational power on Earth to tackle them." This is true enough and fair enough. They also give us: "it was only a few decades ago that quantum computing was a purely theoretical subject". Two years ago (yes, only two years ago) IBM gave us a new state, a new stage in quantum computing, where we see a "necessary brick in the foundation of quantum computing. The formula stands apart because unlike Shor's algorithm, it proves that a quantum computer can always solve certain problems in a fixed number of steps, no matter the increased input. While on a classical computer, these same problems would require an increased number of steps as the input increases". This is the first true step towards creating AI, as what you think is AI grows, the data alone creates an increased number of steps down the line; coherency and comprehension become floating and flexible terms, whilst comprehension is not flexible, comprehension is a set stage, and without 'Quantum Advantage with Shallow Circuits' it basically cannot exist. In addition, this year we get the IBM Q System One, the world's first integrated quantum computing system for commercial use; we could state this is the first truly innovative computer acceleration in decades and it has arrived in a first version, yet there is something missing, and we get to stage two later.

Now we get to the Verge.

'The State of AI in 2019', published in January this year, gives us the goods, and it is an amazing article to read. The first truth is "the phrase "artificial intelligence" is unquestionably, undoubtedly misused, the technology is doing more than ever — for both good and bad". The media is all about hype, and the added stupidity given to us by politicians connects the worst of both worlds: they are clueless, they are playing dumb and clueless with the worst group of people, the paedophiles, and they are paying millions to do what it cannot accomplish at present.

Consider a computer or a terminator super smart, like in the movies, and consider "a sci-vision of a conscious computer many times smarter than a human. Experts refer to this specific instance of AI as artificial general intelligence, and if we do ever create something like this, it'll likely be a long way in the future", and that is the direct situation, yet there is more.

The quote "Talk about "machine learning" rather than AI. This is a subfield of artificial intelligence, and one that encompasses pretty much all the methods having the biggest impact on the world right now (including what's called deep learning)" is very much at the core of it all; it exists, it is valid and it is the point of what is actually happening. Yet without quantum computing we are confronted with the earlier stage 'on a classical computer, these same problems would require an increased number of steps as the input increases', so now all that data delays and delays and stops progress; this is the stage that is a direct issue. Then we also need to consider "you want to create a program that can recognize cats. You could try and do this the old-fashioned way by programming in explicit rules like "cats have pointy ears" and "cats are furry." But what would the program do when you show it a picture of a tiger? Programming in every rule needed would be time-consuming, and you'd have to define all sorts of difficult concepts along the way, like "furriness" and "pointiness." Better to let the machine teach itself. So you give it a huge collection of cat photos, and it looks through those to find its own patterns in what it sees". This learning stage takes time, yet down the track it becomes awfully decent at recognising what a cat is and what is not a cat. That takes time, yet the difference is that we are seeking paedophiles, so that same algorithm is used not to find a cat, but to find a very specific cat. Yet we cannot tell it the colour of its pelt (because we do not know), we cannot tell it the size, shape or age of that specific cat. Now you see the direct impact of how delusional the idea from the Home Office is. Indirectly we also get the larger flaw. Learning for computers comes in a direct version and an indirect version, and we can put both in the same book: Programming for Dummies! You see, we feed the computer facts, but as it is unable to distinguish true facts from false facts we see a larger failing: the computer might start to look in the wrong direction, pointing out the wrong cat, making the police chase and grab the wrong cat, and when that happens, the real paedophile has already hidden himself again. Deep learning can raise flags all over the place and it will do a lot of good, but in the end a system like that will be horribly expensive, and paying 100 police officers for 20 years to hunt paedophiles might cost the same and will yield better results.
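To make the Verge's cat example concrete, here is a minimal sketch of the two approaches the quote contrasts. It assumes scikit-learn, and the "huge collection of cat photos" is faked with random arrays purely to keep the snippet runnable; a real run obviously needs actual labelled images.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# "Old-fashioned" way: explicit rules a human wrote down.
def is_cat_by_rules(has_pointy_ears: bool, is_furry: bool) -> bool:
    # a tiger satisfies both rules, so the hand-written rule misfires
    return has_pointy_ears and is_furry

# Machine-learning way: let the model find its own patterns in example photos.
# X is a stack of flattened images, y the human-provided labels (1 = cat, 0 = not).
# Both are placeholders here, standing in for a real labelled photo collection.
X = np.random.rand(1000, 64 * 64)          # 1,000 fake 64x64 grayscale "photos"
y = np.random.randint(0, 2, size=1000)     # fake labels, only to make this run

model = LogisticRegression(max_iter=1000)
model.fit(X, y)                            # the "huge collection of cat photos" step
prediction = model.predict(X[:1])          # ask the trained model about a new image
```

The catch the text points at sits in the labels: the model only ever becomes as good as the examples and labels it was fed, which is exactly why a "very specific cat" with unknown colour, size and shape is a different problem altogether.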

All that is contained in the quote: "Machine learning systems can't explain their thinking, and that means your algorithm could be performing well for the wrong reasons". More importantly, it will be performing for the wrong reasons on wrong data, making the learning process faulty and flawed to a larger degree.

The article ends with "In the here and now, artificial intelligence — machine learning — is still something new that often goes unexplained or under-examined", which is true and, more importantly, it is not AI, a fact that we were not really informed about. There is no AI at present, not for some time to come, and it makes us wonder about the Guardian headline 'Home Office to fund use of AI to help catch dark web paedophiles': how much funding, and the term 'use of AI' requires it to exist, which it does not.

The second missing item.

You think that I was kidding, but I was not. Even as the quantum phase is seemingly here, its upgrade does not exist yet, and that is where true AI becomes an optional futuristic reality. This stage is called the Majorana particle, a particle that is its own antiparticle (both matter and antimatter at once), and one of the leading scientists in this field is Dutch physicist Leo Kouwenhoven. Once his particle becomes a reality in quantum computing, we get a new stage of shallow circuits; we get a stage where fake news, real news, positives and false positives are treated in the same breath and the AI can distinguish between them. That stage is decades away. At that point the paedophile can create whatever paper trail he likes; the AI will be worse than the most ferocious bloodhound imaginable and will see the fake trails faster than a paedophile can create them. It will merely get the little pervert caught faster.

The problem is that this is decades away, so someone should really get some clarification from the Home Office on how AI will help, because there is no way that it will actually do so before the government budget of 2030. What will we do in the meantime and what funds were spent to get nothing done? When we see: "pledged to spend more money on the child abuse image database, which since 2014 has allowed police and other law enforcement agencies to search seized computers and other devices for indecent images of children quickly, against a record of 14m images, to help identify victims", in this we also get "used to trial aspects of AI including voice analysis and age estimation to see whether they would help track down child abusers". So when we see 'whether they would help', we see a shallow case, so shallow that the article in the Verge well over half a year ago should indicate that this is all water down the drain. And the amount (according to Sajid Javid) is set to "£30m would be set aside to tackle online child sexual exploitation". I am all for the goal and the funds. Yet when we realise that AI is not getting us anywhere and deep learning only gets us so far, and we also now consider "trial aspects of AI including voice analysis and age estimation", we see a much larger failing. How can voice analysis help and how is this automated? And as for the term 'trial aspects of AI', something that does not exist, I wonder who did the critical read on a paper allowing for £30 million to be spent on a stage that is not relevant. Getting 150 detectives for 5 years to hunt down these bastards might be cheaper and in the end a lot more results driven.
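As a back-of-the-envelope check on that alternative (the £40,000 fully loaded annual cost per detective is my assumption, not a figure from any of the quoted articles):

```python
# Rough comparison of the proposed AI budget against plain old detective work.
detectives, years, annual_cost = 150, 5, 40_000   # annual_cost is an assumed figure
print(f"£{detectives * years * annual_cost:,}")    # £30,000,000 -- the same order as the £30m set aside
```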

At the end of the article we see the larger danger that is not part of AI, when we see: "A paper by the security think-tank Rusi, which focused on predictive crime mapping and individual risk assessment, found algorithms that are trained on police data may replicate – and in some cases amplify – the existing biases inherent in the dataset". In this Rusi is right: it is about data and the data cannot be staged or set against anything, which makes for a flaw in deep learning as well. We can teach what a cat is by showing it 1,000 images, yet how are the false images recognised (panther, leopard, or possum)? That stage seems simple with cats; with criminals it is another matter. Comprehension and looking past data (showing insight and wisdom) is a far stretch for AI (when it is there), and machine learning and deeper learning are not ready to this degree at present. We are nowhere near ready, and the first commercial quantum computer was only released this year. I reckon that whenever a politician uses AI as a term, he is either stupid, uninformed or he wants you to look somewhere else (avoiding actual real issues).
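A tiny synthetic sketch of what Rusi describes, with entirely made-up data: a model trained on records that encode an existing bias will happily reproduce that bias on new cases, because the bias is all it ever saw.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic "police records": one feature is a postcode group that historically
# attracted more patrols, so it carries more *recorded* (not more actual) offences.
postcode_group = rng.integers(0, 2, size=5000)            # 0 or 1
actual_risk    = rng.random(5000)                          # what we would like to predict
# Biased historical labels: group 1 was over-policed, so it gets flagged more
# often than its actual risk justifies.
recorded_flag  = (actual_risk + 0.3 * postcode_group) > 0.6

X = np.column_stack([postcode_group, rng.random(5000)])    # model never sees actual_risk
model = DecisionTreeClassifier(max_depth=3).fit(X, recorded_flag)

# The trained model now flags group 1 far more often, replicating the training bias.
rate_0 = model.predict(np.column_stack([np.zeros(1000), rng.random(1000)])).mean()
rate_1 = model.predict(np.column_stack([np.ones(1000),  rng.random(1000)])).mean()
print(f"flag rate, group 0: {rate_0:.0%}   group 1: {rate_1:.0%}")
```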

For now the hypes we see are more often than not the lie of AI, something that will come, but is unlikely to be seen before the PS7 is setting new sales records, which is still many years away.

 


Filed under Finance, IT, Media, Politics, Science

Fight the Future

Mark Bergen gives us a Bloomberg article. The Sydney Morning Herald took it on (at https://www.smh.com.au/business/companies/inside-huawei-s-secret-hq-china-is-shaping-the-future-20181213-p50m0o.html). Of course the arrest of Meng Wanzhou, chief financial officer of Huawei Technologies, is the introduction here. We then get the staging of: "inside Huawei's Shenzhen headquarters, a secretive group of engineers toil away heedless to such risks. They are working on what's next – a raft of artificial intelligence, cloud-computing and chip technology crucial to China's national priorities and Huawei's future", with a much larger emphasis on "China's government has pushed to create an industry that is less dependent on cutting-edge US semiconductors and software". The matters are not wrong, yet they are debatable. When I see 'China's national priorities' and 'Huawei's future' we must ask ourselves, are they the same? They might be on the same course and trajectory, but they are not the same. In the end Huawei needs to show commercial power and growth; adhering to China's national needs is not completely in line with that, merely largely so.

Then we see something that is a lot more debatable, when we get: "That means the business would lap $US100 billion in 2025, the year China's government has set to reach independence in technological production"; by my reckoning, China could optionally reach that in 2021-2022, and these three years are important, more important than you realise. Neom in Saudi Arabia, optionally three projects in London, two in Paris, two in Amsterdam and optionally projects in Singapore, Dubai and Bangkok. Tokyo would be perfect, yet they are fiercely competitive and the Japanese feel nationalistic about Japanese goods and are, at times more importantly, driven towards non-Chinese goods. In the end, Huawei would need to give in too much per inch of market share, not worth it I reckon, yet the options that Huawei has available might also include growing the tourist fields where they can grow market share through data service options, especially if they can get Google to become part of this (in some places). In the end, the stage is still valid to see Huawei become the biggest 5G player in the field.

Then we get the first part of the main event. With: “It started working on customised chips to handle complex algorithms on hardware before the cloud companies did. Research firm Alliance Bernstein estimates that HiSilicon is on pace for $US7.6 billion in sales this year, more than doubling its size since 2015. “Huawei was way ahead of the curve,” said Richard, the analyst.” we see something that I have tried to make clear to the audience for some time.

June 2018: ‘Telstra, NATO and the USA‘ (at https://lawlordtobe.com/2018/06/20/telstra-nato-and-the-usa/) with: “A failing on more than one level and by the time we are all up to speed, the others (read: Huawei) passed us by because they remained on the ball towards the required goal.

September 2018: 'One thousand solutions' (at https://lawlordtobe.com/2018/09/26/one-thousand-solutions/) with: "we got shown 6 months ago: "Huawei filed 2,398 patent applications with the European Patent Office in 2017 out of a total of 166,000 for the year", basically 1.44% of ALL filed European patents were from that one company."

Merely two of several articles that show us the momentum that Huawei has been creating by stepping away from the iterative mobile business model and leaping technologically ahead one model after the other. If you look at the history of the last few years, Huawei went from the P7 to the Mate 10, Nova 3i and Mate 20 Pro. These 4 models in a lifecycle timeline have been instrumental for them in showing the others that there is fierce competition. The P7, a mere equal to the Samsung Galaxy S4 in its day, yet 43% cheaper for the consumer, and now they are at the Mate 20 Pro, which is 20% cheaper than the Samsung Galaxy Note9 and regarded as better in a few ways. In 4 cycles Huawei moved from optionally a choice to best in the field and still cheaper than most. That is the effect of leaping forward, and they are in a place where they can do the same in the 5G field.

We are confronted with the drive in the statement: "Huawei is throwing everything into its cloud package. It recently debuted a set of AI software tools and in October released a new specialised chip, called the Ascend. "No other chip set has this kind of capability of processing," Qiu said." This viewed advantage is still a loaded part because there is the fact that China is driven towards growing the AI field, where they, for now, have a temporary disadvantage. We might see this as a hindrance, yet that field is only visible in the governmental high end usage that there is; consumers like you and me will not notice it, apart from those who claim it and create some elaborate 'presentation' to make the water look muddy. When your life is about Twitter, LinkedIn and Facebook, you will never notice it. In the high end usage, where AI is an issue, they are given the cloud advantage that others cannot offer to the degree that is available to non-governmental players (well, that is what it looks like and that is technologically under consideration, yet it does look really nice).

When we look towards the future of Huawei we clearly see the advantages of the Middle East, especially Saudi Arabia, the UAE and optionally Qatar if they play their cards right. Latin America is an option, especially if they start in Argentina, where they could optionally add Uruguay overnight; branching out towards Chile and Paraguay will be next, leaving the growth towards Brazil. Yet in that same strategy, adding Venezuela and Colombia first would enable several paths. The business issue remains, yet being the first has an additional appeal, and if it pisses off the Americans, Venezuela gets on board fast often enough. The issue is more than technological. The US still has to prove to the audience that there is a 5G place for them all, and the infrastructure does not really allow for it at present, merely the metropolitan areas where the money is, driving inequality in the USA even further.

If visibility is the drive, then Huawei is very much on the right track and they are speeding that digital super highway along nicely. Yet in opposition to all this is the final paragraph in the SMH. When we see: ""As long as they stick to the game plan, they still have a lot of room to grow," he said. "Unless the US manages to get their allies to stop buying them."" This is a truth and also a reassurance. You see, the claim 'Unless the US manages to get their allies to stop buying them' gets us to an American standard. It was given to us by the X-Files in the movie with the same name, or perhaps better stated, Chris Carter gave it to us all. At the end he gives us: "He is but one man. One man alone cannot fight the future"; it equally applies to governments too. They might try to fight the future, yet in the end any nation is built from the foundation of people, stupid or not, bright or less so, and the larger group can do arithmetic: when we are confronted with a Huawei at $450, or an Apple iPhone at $2350, how many of you are desperately rich enough to waste $1900 more on the same functionality? Even when we add games to the larger three (Facebook, LinkedIn & Twitter), most phones will merely have an optional edge, and at $1900? Would you pay for the small 10% difference that 1-3 games optionally offer? And let's not forget that you will have to add that difference again in 2 years when you think that you need a new phone. The mere contemplation of optimised playing of free games at $77 a month makes total sense, doesn't it? So there we see the growth plan of Huawei, offering the top of the mountain at the base price, and those in denial making these unsubstantiated 'security risk' claims will at some point need to see the issue, as Verizon is the most expensive provider in the US. So when I see $110 per month for 24 GB of shared data, whilst I am getting 200GB for $50, I really have to take an effort not to laugh out loud. That is the 5G world the US faces, and whilst there was an option for competitive players in the US, the Huawei block is making sure that some players will rake in the large cash mountain for much longer; there are others making fun of my predictions, and now that I am proven to be correct, they are suddenly incommunicado and extremely silent.

As such, when I predicted that the US is now entering a setting where they end up trailing a field that they once led, we will see a lot of growth of Chinese interests. In all this, do you really think that it will stop at a mere 5G walkie talkie? No, with 5G automation and deeper learning, we will see a larger field of dashboarding, information and facilitation to the people, and Huawei will optionally rule that field soon enough, with a few non-Americans nipping at their heels for dominance, because that is the nature of the beast as well. Progress is a game for the hungry, and some players (specifically the US) have forgotten what it was like to be hungry. Australian Telstra made similar mistakes and moved their share price from $6.49 to $3.08 in the space of 3 years, a 52% loss of value, and when (not if) Huawei pushes the borders all over the place, those people with a Verizon Protective State of Mind will end up seeing Verizon going in a similar setting, because that is also the consequence of adhering to what I would consider to be a form of nationalistic nepotism. The UK already had its ducks in a row for the longest of times (and that island has less ground to cover, which is a distinct advantage), so there BT has options for now and over time they might adhere to some of their policies as required; the US is not in that good a position, and Huawei merely needs to flash a medium purse of cash to show the people in the US that a place like Buenos Aires can offer the masses more and faster than those on better incomes in the US, because the pricing model allows for such a shift.

In this the problem is not a short term one. Even as US giants are supposed to have the advantage, we also see that the workforce is not properly adhered to; the US (and the UK) have a massive, not a large, but a massive disadvantage when it comes to STEM students, a disadvantage that China does not have. The AI field is not something that is solved over the next 3 years, so as those with educations in Science, Technology, Engineering and Mathematics are dwindling to some degree in Commonwealth nations and America, China can move full steam as the next generation is pushed into high end ambition and careers. As such the entire AI shortfall against America can be overcome much more easily by places like China and India at present. It is not merely the stage of more graduates; it is about groups of graduates agreeing on paths towards breakthrough solutions. No matter how savant one student is, a group is always more likely to see the threat and weakness of a certain path, and that is where the best solution is found faster.

Will we ‘Fight the Future’?

The issue is not the American polarised view, it is the correctly filtered view that Alex Younger gave us initially. It is not incorrect to have a nationalistic protective view, and Alex gave the correct stage on having a national product to use, which is different from the Canadian and Australian path proclaimed. We agree that it is a national requirement to have something this critical solved in a national way (when possible, that is); in this, the path to have a Huawei 5G stage and then reengineer what is required is not wrong, yet it optionally comes with a certain risk, and when that risk is small enough, it is a solution. The UK is largely absolved as it had BT with the foundations of the paths required, just as Australia has Telstra, yet some countries (like Australia) become too complacent. BT was less complacent and they have knowledge, yet is it advanced enough? We agree that they can get up to speed faster, yet will it be fast enough? I actually do not know, I have no data proving the path in one direction or the other. What is clear is that a race with equal horses provides the best growth against one another, the competitiveness and technological breakthroughs that we have seen for the longest time. That path has largely been made redundant in the US and Australia (I cannot say for certain how that is in Canada).

Even as Huawei is gaining speed, being ahead of it all is still a race by one player; the drive to stay ahead is only visible on the global field, and it is an uncertain path, even if they have all the elements in their favour. What is clear is that this advantage will remain in place for the next 5 years, and unless certain nations make way for budgets growing the STEM pool by well over 200%, their long term disadvantage remains in place.

The versusians

In this stage we need to look at the pro and con Huawei fields. In the pro field, as Huawei sets the stage for global user growth, which they are seemingly doing, they have the upper hand and they will grow to a user base that goes from servicing a third of the internet users to close to 50%; that path is set with some certainty and as such their advantage grows. In opposition to that, players like the US and Australia need to step away from the politically empty headed failure of enabling the one champion stage of Verizon and Telstra. Diversity would give the competitive drive, and now that it is merely Telstra versus Vodafone/TPG, it means that there will be a technological compromise stage where neither of the two surges ahead, giving players like Huawei a much larger advantage to fuel growth.

How wrong am I likely to be?

So far I have been close to the mark months in advance compared to the big newspapers only giving partial facts long after I saw it coming, so I feel that I remain on the right track here. The question is not merely who has the 5G stage first, it will be who facilitates 5G usage more completely and earlier than the others, because that is where the big number of switchers will be found, and players like TPG and Vodafone have seen the impact of switchers more than once, so they know that they must be better and more complete than the other brand. Huawei knows it too, they saw that part and are still seeing the impact that goes all the way back to the P7, and that is where Apple also sees more losses. We were informed a mere 9 hours ago: "Piper Jaffray cuts its Apple (NASDAQ:AAPL) price target from $250 to $222 saying that recent supplier guidance cuts suggest "global unit uptake has not met expectations."" Another hit of a loss to face, optionally a mere 11.2%, yet in light of the recent losses they faced, we see what I personally feel was the impact of the ridiculous stage of handing the audience a phone of $2369, optionally 30% more expensive than the choice after that one, even if the number two is not that much less in its ability. The stage where marketeers decide on what the people need, when they all need something affordable. It personally feels like the iMac Pro move, a $20K solution that less than 0.3% of the desktop users would ever need, and most cannot even afford. That is driving the value of Apple down, and Huawei knows that this egocentric stage is one that Apple et al will lose, making Huawei the optional winner in many more places after the first 5G hurdles are faced by all.

Do you still think that Apple is doing great? A company that went from a trillion to 700 billion in less than 10 weeks, which is an opportunity for the IOS doubters to now consider Huawei and Samsung. Even as Huawei will statistically never get them all, they will get a chunk, and the first move is that these users moved away from IOS, and as Android users they are more easily captured by user hungry players like Huawei through its marketing. That is the field that has changed in the first degree, and as people feel comfortable with Huawei, they will now consider getting more Huawei parts (like routers for the internet at home), and that continues as people start moving into the 5G field. You see, we can agree that it is mere marketing (for now), yet Huawei already has its 5G Customer-premises Equipment (as per March 2018). This implies, with: "compatible with 4G and 5G networks, and has proven measured download speeds of up to 2Gbps – 20 times that of 100 Mbps fiber", that they can buy their router now, remain on 4G, and when their local telecom is finally ready, 5G will kick in when the subscription is correct. It is, as far as I can tell, the first time that government telecom procedures are vastly behind the availability to the consumer (an alleged speculation from my side).

Do you think that gamers and Netflix people will not select this option if made available? That is what is ahead of the coming options and that is the future that some are fighting. It is like watching a government on a mule trying to do battle with a windmill, the stage seems that ridiculous, and as we move along, we will soon see the stage being 'represented' by some to state dangers that cannot be proven (or are conveniently ignored).

The moment other devices are set towards the 5G stage, that is when more and more people will demand answers from industrial politicians making certain claims, and that is when we see the roller-coaster of clowns and jesters get the full spotlight. This is already happening in Canada (at https://www.citynews1130.com/2018/12/13/huawei-and-5g-experts-clash-on-the-risk-to-canadas-national-security/), where City News (Ottawa) gives us: "I can't see many circumstances, other than very extreme ones, in which the Chinese government would actually risk Huawei's standing globally as a company in order to conduct some kind of surveillance campaign", something I claimed weeks ago, so nice for the Canadian press to catch up here. In addition, when we are given: ""This can be used for a lot of things, for manipulation of businesses to harvesting of intellectual property," Tobok said. "On a national security level, they can know who is where at any given time. They can use that as leverage to jump into other operations of the government.", those people knowingly, willingly and intentionally ignore the fact that apps can do that and some are doing it already. The iPhone in 2011 did this already. We were given: "Privacy fears raised as researchers reveal file on iPhone that stores location coordinates and timestamps of owner's movements", so when exactly was the iPhone banned as a national security hazard? Or does that not apply to any Commonwealth nation when it is America doing it? Or, more recently (January 2018), when Wired gave us: "the San Francisco-based Strava announced a huge update to its global heat map of user activity that displays 1 billion activities—including running and cycling routes—undertaken by exercise enthusiasts wearing Fitbits or other wearable fitness trackers. Some Strava users appear to work for certain militaries or various intelligence agencies, given that knowledgeable security experts quickly connected the dots between user activity and the known bases or locations of US military or intelligence operations." So when Lt. Walksalot was mapping out that secret black site whilst his Fitbit was mapping that base location every morning jog, was the Fitbit banned? Already proven incursions on national security, mind you, yet Huawei with no shown transgressions is the bad one. Yes, that all made perfect sense. I will give Wesley Wark, a security and intelligence specialist who teaches at the University of Ottawa, a pass when he gives us: "Still, Canada can't afford to be shut out of the Five Eyes or play a diminished role in the alliance, and if Britain decides to forbid Huawei from taking part in its 5G networks, Canada could not be the lone member to embrace the company". OK, that is about governmental policy; not unlike Alex Younger there is a claim to be made in that case, not for the risk that they are or might be, but the setting that no government should have a foreign risk in place. This is all fine and good, but so far most transgressions were American ones and that part is kept between the sheets (like catering to IBM for decades), or the matter is left largely trivialised.

It is pointless to fight the future; you can merely adhere to swaying the direction it optionally faces, and the sad part is that this sway has forever been with those needing to remain in power, or to remain in the false serenity that the status quo brings (or better stated, never brings). True innovation is prevented from taking grasp and giving directional drive and much better speeds, and that too is something to consider, merely because innovation drives IP, the true currency of the future, and when we deny ourselves that currency we merely devalue ourselves as a whole. In this we should agree that denying innovation has never ever resulted in a positive direction; history cannot give us one example where this worked out for the best of all.

 


Filed under Finance, IT, Media, Military, Politics, Science

Paul Simon song application

I grew up in the 70's, actually I started to grow up a lot earlier than that, but the 70's were sweet. It was about music and creativity, so without even knowing it the years flew by; they were quality years. Things were in a good place for nearly everyone and I looked around at all the wonders that were there to behold. In that time we all knew Simon and Garfunkel, and soon thereafter we knew the songs of Paul Simon. The album cover showed him, still a young sprout at that time, dressed in jeans with shirt and hat, an alternative Indiana Jones, who would actually not show up for another 6 years, so Paul Simon became a trendsetter too.

In this we take a look at some of the tracks and the impact that their 2018 remastered editions hold.

1. Still Failing After All These Years

Yes, it is everyone's favourite piñata of technology. It's about IBM, who reportedly gives us 'the 5 percent revenue growth in its latest quarter came from the 10 percent decline in the value of the US dollar', which sounds nice, but is IBM not that growing behemoth tailoring Watson, left, right, and south of the border? Well, it seems that this is merely a side play to what the insiders call "we are all familiar with IBM's strategy to shift sales from traditional low-margin businesses to what it calls "strategic imperatives", such as cloud services, AI, security, blockchain and quantum computing. However, this is not a separate division, and IBM does not break out the numbers. It claimed that SI revenues were up by 15 percent, or by 10 percent at constant currency. That isn't impressive in a booming market" (source: ZDNET). I personally think that the further you are away from 'isn't impressive' the better you look. You see, the part not shown here is the one that Engadget gave us. That is seen with the title 'IBM's Watson reportedly created unsafe cancer treatment plans', with the additional quote "the AI is still far from perfect: according to internal documents reviewed by health-oriented news publication Stat, some medical experts working with IBM on its Watson for Oncology system found "multiple examples of unsafe and incorrect treatment recommendations". In one particular case, a 65-year-old man was prescribed a drug that could lead to "severe or fatal haemorrhage" even though he was already suffering from severe bleeding". Now, we can understand that a system like that will falter at times. Yet the setting could have been prevented if the people behind Watson had taken on the knowledge of IT experts who have known since the early 80's that the application of the GIGO law must always be checked for. The GIGO law, or as it is stated the 'Garbage In, Garbage Out Law', has been available for the sceptical mind for well over three decades.
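For what it is worth, the 'check for GIGO' habit the old hands refer to can be as unglamorous as refusing to emit output when the input contradicts itself. A minimal sketch, with hypothetical field names that have nothing to do with Watson's actual interface:

```python
# Illustrative only: the record fields and the rule are invented for this sketch.
def safe_recommendation(patient: dict, proposed_drug: dict) -> str:
    # Reject contradictory or incomplete input before it reaches the output path.
    if patient.get("active_severe_bleeding") and proposed_drug.get("haemorrhage_risk"):
        return "BLOCKED: recommendation conflicts with the patient record"
    if patient.get("age") is None or patient["age"] < 0:
        return "BLOCKED: record incomplete, do not recommend"
    return f"Recommend {proposed_drug['name']} (subject to clinician review)"

print(safe_recommendation({"age": 65, "active_severe_bleeding": True},
                          {"name": "drug-X", "haemorrhage_risk": True}))
```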

This is not me in some anti-AI mind. I think that AI can do great things, yet to look at cancer treatment recommendations when the medical world still has to figure out plenty about cancer in the first place also implies that there will be plenty of untested situations there (and many more unknown elements); so IBM bit off a lot more than they could chew. Now if they hire Rob Beckett as a spokesperson, then there is at least the chance that the biting part is taken care of; digesting the amounts of data will be up to IBM, some things they will just have to learn for themselves.

 

2. My Little Town

Issue skipped as it has religious elements that will set political correctness in an unbalanced nature.

3. I Do It for Your Love

It might have been a topic, yet with well over 40% getting divorced, I would be required to give an unfaithful setting towards the forecasting of trends, which is where Watson comes into play again, and that system will make the wrong anticipation, just like the effect chocolate shoes are likely to have on one of the parties in any marital contract. If that were not an issue, we would see a long term setting of statistical outliers where any AI and the population at large will reject the setting of the song.

4. 50 Ways to Irradiate Your Lover

There is a topic we can sing about. We have all seen the setting where the lovers left behind had to resort to revenge porn to get their jollies up. In all this we see that tinker, tailor, soldier and spy are all involved; the soldier is sued, a major from Fort Bragg. I knew the people there, in many cases not really the most intelligent bunch to say the least, but that does not excuse it; ignorance is no defence, as any law student might know. So even as Adam Matthew Clark is seemingly involved with an army gynaecologist named Kimberly Rae Barrett, he basically replaced his porn needs with a woman who knows how to squeeze the tomatoes and knows where they are. In the setting it is still part of that well known 40%, and in this we see that the laws have been updated. Tumblr has updated its settings with the mention that hate speech, glorifying violence and revenge porn are explicitly banned and will be cast out. No one states that this is a bad idea, yet the setting is that 9/11 this year will be the first day that all that is no longer allowed, so how will that go over?

All great songs and the fact that this album jumped into my mind made perfect sense. In a time when we were all set upon the optional wonders that the world had to bring, we are now set into payback, PayPal, revenge and misstated intentional miscommunications.

It is a setting that tends to be devastating to the creative mind. Not merely a concept, 'The Creative Mind' is a book by Margaret Boden. A part matters in all this, because we see that the creative mind is more than just a search towards the within. It is also the place where we can surpass ourselves.

Drawing on examples ranging from chaos theory to Coleridge’s theory of imagination, and using the idea that creativity involves the exploration of conceptual spaces in people’s minds, we see a description of these spaces and ways of producing new ones. In that setting it is a perpetual engine that never stops, feeding itself iteration after iteration until something completely new is found, and that too gets digested by the mind; its curiosity flags require it to do so. So when we consider that creativity requires a much different handle, we can state the obvious and call Watson to some extent a failure, that is until the medical setting is given the question on constipation, when Watson MD stops for 60 seconds and states ‘It is not out yet!‘; that will be the first victory for IBM, because when the system can set dimensionality past the clinical application of text, only then will it look in directions the creative mind would have considered to find the equation of nature, and at that point it becomes the path to a victory. That is where their spokesperson (Rob Beckett) really goes to town: when his teeth produce the dam to the water inlet of the New Bong Hydroelectric Power Complex in Pakistan, when the IBM software gets to contemplate water shortage and drought, that will be the victory that IBM needs. For now it seems to consider the wrong flags in the wrong places, and what to do when there is no water is a first step in properly solving the issues.

That was seen when IBM users were confronted with ‘SHUTDOWN -F MAY REBOOT INSTEAD OF HALT‘; so when you restart a power plant when there is no juice to start, it seems that this is not a biggie, it merely melts a few parts. Now consider that the setting is not merely a water plant, but ‘USERS AFFECTED: All IBM Maximo for Nuclear Power users‘, and we are confronted with “NUC7510-SQL ERROR WHEN FILTERING IN ROUNDS TAB (DUTY STATION (NUC)) ON THE NEW READING DUE DATE FIELD“, and now also consider that this is directly linked to: “Maximo for Nuclear Power provides enterprises with best practices for managing all types of nuclear equipment, tracking regulatory requirements, and enhancing operational and work management practices“. Is it still merely an academic exercise for you?

You see, the basic error is that too many developers rely on black and white truths: they consider the true and the false setting of a flag, and nine times out of ten they forget about the null setting of that same flag, meaning that essential steps were not properly set, a basic error that everyone (no exception) gets confronted with. Now also realise that Watson is merely a developed system that is large enough to forget settings because a few thousand flags were wrongfully set (unintentionally, mind you), so when the setting is not a cancer treatment but a nuclear power facility that is AI driven (the wet sexual fantasy of too many IBM board members), then we get a real problem, because it is not the 1,000 test scenarios, it is the one we did not consider through nature’s spasms that gets into the wires, and at that point we all go nuts and not merely because of the fallout. So when we are confronted with the settings of mere truths and we add last year’s news “AREVA NP has joined forces with IBM’s Watson IoT advanced analytics platform. This partnership helps utilities implement big data solutions for the nuclear industry. Utilities can use this integrated data intelligence to predict the when, where and why of component operations and performance, as well as the consequences of component issues“, with a false treatment one person bites the dust; what do you think happens when they get it wrong in an operational nuclear power plant? It might have merely three sections, but those sections have a little over 706,329 parts (a really rough estimation) and not all are monitored. Even as I designed a way to melt down an Iranian nuclear power plant from within, without having to go into any control room, I can also tell you that Watson will not be ready for that eventuality. So at that point, when it can be done to any power plant, how dangerous is the setting when those with knowledge are seeing that Watson made critical errors, as was reported with ‘In one particular case, a 65-year-old man was prescribed a drug that could lead to “severe or fatal haemorrhage” even though he was already suffering from severe bleeding‘, a basic danger not covered by the system? What else might have gone wrong that the doctors did not anticipate? That can happen under any condition, with no flaw attributable to the physician in any way. I think that IBM is punching the envelope (not pushing it) to seem more astronomical in their approach, the most basic of marketing flaws in an age where marketing would never be held accountable. So when you see Chernobyl (CA) USA, and IBM marketing states ‘Not my problem‘, how will you feel (besides irradiated, that is)?
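To make the null-flag point concrete, here is a minimal Python sketch; it is my own illustration with hypothetical names, not anything from Maximo or Watson. A reset routine that only thinks in true and false silently turns ‘never evaluated’ into ‘safe’, which is exactly the scenario the thousand test cases never cover.

```python
# A minimal sketch of the true/false/null flag trap (hypothetical names).
# None means "never evaluated", which is not the same thing as False.
valve_flags = {"coolant_ok": True, "backup_pump_ok": False, "intake_clear": None}

def naive_reset(flags):
    # Black and white thinking: anything that is not True becomes False,
    # so the null state is silently destroyed.
    return {name: bool(value) for name, value in flags.items()}

def safe_reset(flags):
    # The null state survives the reset and can still demand an inspection.
    return {name: (None if value is None else bool(value))
            for name, value in flags.items()}

print(naive_reset(valve_flags))  # {'coolant_ok': True, 'backup_pump_ok': False, 'intake_clear': False}
print(safe_reset(valve_flags))   # {'coolant_ok': True, 'backup_pump_ok': False, 'intake_clear': None}
```

The difference looks trivial on a toy dictionary; scale it to a few thousand flags in an operational system and ‘not yet assessed’ quietly reading as ‘no problem’ is the kind of basic error described above.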

Yet there is an upside in all this, because the ‘Comic Book Authorities’ tell us that glowing in the dark improves road safety for pedestrians at night.

Sometimes an old song leads to a new song, and that shows and teaches us that creativity is more than finding new paths; it is the knowledge that adjusting and evolving old paths is equally rewarding in many ways.

Leave a comment

Filed under IT, Media, Military, Politics, Science

Ghost in the Deus Ex Machina

James Bridle is treating the readers of the Guardian to a spotlight event. It is a fantastic article that you must read (at https://www.theguardian.com/books/2018/jun/15/rise-of-the-machines-has-technology-evolved-beyond-our-control-?). Even as it starts with “Technology is starting to behave in intelligent and unpredictable ways that even its creators don’t understand. As machines increasingly shape global events, how can we regain control?” I am not certain that it is correct; it is merely a very valid point of view. This setting is being pushed even further by places like Microsoft Azure, Google Cloud and AWS; we are moving into new territories and the experts required have not been schooled yet. It is (as I personally see it) the consequence of next generation programming on the framework of cloud systems that have thousands of additional unused, or un-monitored, parameters (read: some of them mere properties), and the scope of these systems is growing. Each developer is making their own app-box and they are working together, yet in many cases hundreds of properties are ignored, giving us weird results. There is actually (from the description James Bridle gives) an early 90s example, which is not the same, but it illustrates the event.

A program had window settings and sometimes there would be a ghost window. There was no explanation, and no one could figure out why it happened, because it did not always happen, but it could be replicated. In the end, the programmer had been lazy and had created a global variable with an identical name to a visibility property, and due to a glitch that setting got copied. When the system did a reset on the window, all but very specific properties were reset. You see, those elements were not ‘true’; they should be either ‘true’ or ‘false’, and that was not the case: those elements had the initial value of ‘null’, yet the reset would not allow for that, so once given a reset they would not return to the ‘null’ setting but would keep the value they last had. It was fixed at some point, but the logic remains: a value could not return to ‘null’ unless specifically programmed. Over time these systems got to be more intelligent and that issue has not returned; such is the evolution of systems.

Now it becomes a larger issue: now we have systems that are better, larger and in some cases isolated. Yet, is that always the issue? What happens when an error level surpasses two systems? Is that even possible? Now, most people will state that I do not know what I am talking about. Yet they forget that any system is merely as stupid as the maker allows it to be, so in 2010 Sha Li and Xiaoming Li from the Dept. of Electrical and Computer Engineering at the University of Delaware gave us ‘Soft error propagation in floating-point programs‘, which gives us exactly that. You see, the abstract gives us “Recent studies have tried to address soft errors with error detection and correction techniques such as error correcting codes and redundant execution. However, these techniques come at a cost of additional storage or lower performance. In this paper, we present a different approach to address soft errors. We start from building a quantitative understanding of the error propagation in software and propose a systematic evaluation of the impact of bit flip caused by soft errors on floating-point operations“. We can translate this into ‘an option to deal with shoddy programming‘, which is not entirely wrong, but the essential truth is that hardware makers, OS designers and application makers all have their own error system; each of them has a much larger system than any requires, and some overlap and some do not. The issue is optionally, speculatively, seen in ‘these techniques come at a cost of additional storage or lower performance‘: now consider the greed driven makers that do not want to sacrifice storage and will not hand over performance, not one way, not the other way, but a system that tolerates either way. Yet this still has a level one setting (Cisco joke) that hardware is ruler, so the settings will remain, and it merely takes one third party developer to use some specific uncontrolled error hit with automated, assumption driven slicing and dicing to avoid storage as well as performance; yet once given to the hardware, it will not forget, so now we have a speculative ‘ghost in the machine’, a mere collection of error settings and properties waiting to be interacted with. Don’t think that this is not in existence; the paper gives a light on this in part with: “some soft errors can be tolerated if the error in results is smaller than the intrinsic inaccuracy of floating-point representations or within a predefined range. We focus on analysing error propagation for floating-point arithmetic operations. Our approach is motivated by interval analysis. We model the rounding effect of floating-point numbers, which enable us to simulate and predict the error propagation for single floating-point arithmetic operations for specific soft errors. In other words, we model and simulate the relation between the bit flip rate, which is determined by soft errors in hardware, and the error of floating-point arithmetic operations“.

That I can illustrate with my earliest errors in programming (decades ago). With Borland C++ I got my first taste of programming and I was in assumption mode for my first calculation, which gave in the end: 8/4=2.0000000000000003; at that point (1991) I had no clue about floating point issues. I did not realise that this was merely the machine and me not giving it the right setting. So now we have all learned that part, but we forget that all these new systems have their own quirks and hidden settings that we basically do not comprehend, as the systems are too new. This all interacts with an article in the Verge from January (at https://www.theverge.com/2018/1/17/16901126/google-cloud-ai-services-automl); the title ‘Google’s new cloud service lets you train your own AI tools, no coding knowledge required‘ is a bit of a giveaway. Even when we see: “Currently, only a handful of businesses in the world have access to the talent and budgets needed to fully appreciate the advancements of ML and AI. There’s a very limited number of people that can create advanced machine learning models”, it is not merely that part; behind it were the makers of the systems and the apps that allow you to interface, and that is where we see the hidden parts that will not be uncovered for perhaps years or decades. That is not a flaw from Google, or an error in their thinking. The mere realisation of ‘a long road ahead if we want to bring AI to everyone‘, in light of the better programmers, the clever people and the mere wildcards who turn 180 degrees in a one way street, cannot be predicted, and there will always be one that does so, because they figured out a shortcut. Consider a sidestep.
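To give a rough feel for what a single bit flip can do to a floating-point value, here is a minimal Python sketch; it is my own illustration of the idea in the Li & Li paper, not their code, and the bit positions chosen are merely examples.

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Return x with one bit of its IEEE-754 double representation flipped."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    return struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))[0]

value = 8.0 / 4.0             # exactly 2.0 as an IEEE-754 double
print(flip_bit(value, 0))     # low mantissa bit flipped:  2.0000000000000004 (tolerable noise)
print(flip_bit(value, 61))    # high exponent bit flipped: roughly 2.7e154 (catastrophic)
```

The same mechanism that leaves a result “smaller than the intrinsic inaccuracy of floating-point representations” in one bit position turns it astronomically wrong in another, which is exactly why the propagation, not the flip itself, is the interesting part.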

A small sidestep

When we consider risk based thinking and development, we tend to think in opposition, because it is not the issue of risk, or the given of opportunity. We start with the flaw that we see differently on what constitutes risk. Even as the makers all think the same, the users do not always behave that way. For this I need to go back to the late 80s, when I discovered that certain books in the Port of Rotterdam were cooked. No one had figured it out, but I recognised one part through my Merchant Naval education, the one rule no one looked at in those days; programmers just were not given that element. In a port there is one rule that computers could not comprehend back then: the concept of ‘Idle Time’ can never be a linear one. Once I saw that, I knew where to look. So when we get back to risk management issues, we see ‘An opportunity is a possible action that can be taken, we need to decide. So this opportunity requires we decide on taking action and that risk is something that actions enable to become an actual event to occur but is ultimately outside of your direct control‘. Now consider that risk changes with the tide at a seaport, but we forgot that in opposition to a king tide there is also, at times, a neap tide. A ‘supermoon’ is an event that makes the low tide even lower. So now we see the risk of being beached for up to 6 hours, because the element was forgotten. The fact that it can happen once every 18 months makes the risk low, and it does not impact everyone everywhere, but that setting shows that the dangers (read: risks) of events are intensified when a clever person takes a shortcut. So when NASA gives us “The farthest point in this ellipse is called the apogee. Its closest point is the perigee. During every 27-day orbit around Earth, the Moon reaches both its apogee and perigee. Full moons can occur at any point along the Moon’s elliptical path, but when a full moon occurs at or near the perigee, it looks slightly larger and brighter than a typical full moon. That’s what the term “supermoon” refers to“, we see that the programmer needed a space monkey (or tables), and when we consider the shortcut, he merely needed them once every 18 months; in the life cycle of a program that means he merely had a risk 2-3 times during the lifespan of the application. So tell me, how many programmers would have taken the shortcut? Now this is the setting we see in optional Machine Learning, with that part accepted and the pragmatic ‘Let’s keep it simple for now‘, which we all could have accepted in this. But the issue comes when we combine error flags with shortcuts.
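As a purely hypothetical sketch (the figures and names are invented, not from any real port system), this is what that shortcut looks like in code: the rare perigean low water is simply hard-coded away because it ‘only’ happens once every 18 months or so.

```python
# Hypothetical berth-clearance check; all numbers are illustrative only.
CHARTED_DEPTH_M = 9.0        # depth at chart datum for this berth
NORMAL_LOW_WATER_M = 1.2     # a typical low tide above datum
PERIGEAN_LOW_WATER_M = 0.4   # the rare "supermoon" low water the shortcut ignores

def safe_to_stay(draught_m: float, perigean: bool = False) -> bool:
    """True if the vessel keeps water under the keel at low tide."""
    low_water = PERIGEAN_LOW_WATER_M if perigean else NORMAL_LOW_WATER_M
    return draught_m < CHARTED_DEPTH_M + low_water

# The shortcut: the developer never passes perigean=True, so 2-3 times in the
# application's lifespan the answer is "safe" while the ship sits on the mud.
print(safe_to_stay(9.5))                  # True  - looks fine on a normal day
print(safe_to_stay(9.5, perigean=True))   # False - the case the shortcut never checks
```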

So we get to the Guardian with two parts. The first: “Something deeply weird is occurring within these massively accelerated, opaque markets. On 6 May 2010, the Dow Jones opened lower than the previous day, falling slowly over the next few hours in response to the debt crisis in Greece. But at 2.42pm, the index started to fall rapidly. In less than five minutes, more than 600 points were wiped off the market. At its lowest point, the index was nearly 1,000 points below the previous day’s average“, the second being “In the chaos of those 25 minutes, 2bn shares, worth $56bn, changed hands. Even more worryingly, many orders were executed at what the Securities Exchange Commission called “irrational prices”: as low as a penny, or as high as $100,000. The event became known as the “flash crash”, and it is still being investigated and argued over years later“. In 8 years the algorithms and the systems have advanced and the original settings no longer exist. Yet the entire setting of error flagging and the use of elements and properties is still on the board; even as they evolved and the systems became stronger, new systems interacted with much faster and stronger hardware, changing the calculating events. So when we see “While traders might have played a longer game, the machines, faced with uncertainty, got out as quickly as possible“, they were uncaught elements in a system that was truly clever (read: had more data to work with), and as we are introduced to “Among the various HFT programs, many had hard-coded sell points: prices at which they were programmed to sell their stocks immediately. As prices started to fall, groups of programs were triggered to sell at the same time. As each waypoint was passed, the subsequent price fall triggered another set of algorithms to automatically sell their stocks, producing a feedback effect“, we get the mere realisation that the machine wins every time in a man versus machine setting, but only on the calculations. The initial part I mentioned regarding really low tides was ignored, so where the person realises that at some point the tide goes back up, no matter what, the machine never learned that part, because the ‘supermoon cycle’ was avoided due to pragmatism, and we see that in the Guardian article with: ‘Flash crashes are now a recognised feature of augmented markets, but are still poorly understood‘. The reason remains speculative, but what if it is not the software? What if there is merely one set of definitions missing because the human factor auto-corrects for that through insight and common sense? I can relate to that by setting the ‘insight’ that a supermoon happens perhaps once every 18 months and the common sense that it returns to normal within a day. Now, are we missing out on the opportunity of using a neap tide as an opportunity? It is merely an opportunity if another person fails to act on such a neap tide. Yet in finance it is not merely a neap tide, it is an optional artificial wave that can change the waves when one system triggers another, and in nanoseconds we have no way of predicting it, merely, over time, the option to recognise it at best (speculatively speaking).
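To make the feedback effect tangible, here is a toy simulation in Python; it assumes nothing about real HFT systems, the thresholds and the price impact are invented, and it only illustrates the hard-coded sell point cascade the Guardian describes.

```python
# Toy cascade of hard-coded sell points; every forced sale moves the price,
# which can trip the next algorithm, and so on. All numbers are invented.
sell_points = [99.0, 98.5, 98.0, 97.5, 97.0]  # one threshold per "algorithm"
price = 98.9        # a modest initial dip below the first threshold
impact = 0.6        # price drop caused by each forced liquidation

triggered = True
while triggered:
    triggered = False
    for point in sorted(sell_points, reverse=True):
        if price <= point:
            sell_points.remove(point)   # this algorithm dumps its position once
            price -= impact             # ...and pushes the market further down
            triggered = True

print(round(price, 2))  # 95.9: a 0.1 dip below one threshold ends 3 points lower
```

Nothing in that loop is irrational in isolation; each rule does exactly what it was told, and the plunge is simply the rules meeting each other, which is the point of the flash crash passage.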

We see a variation of this in the Go-game part of the article. When we see “AlphaGo played a move that stunned Sedol, placing one of its stones on the far side of the board. “That’s a very strange move,” said one commentator“, you see it opened us up to something else. So when we see “AlphaGo’s engineers developed its software by feeding a neural network millions of moves by expert Go players, and then getting it to play itself millions of times more, developing strategies that outstripped those of human players. But its own representation of those strategies is illegible: we can see the moves it made, but not how it decided to make them“, that is where I personally see the flaw. You see, it did not decide, it merely played every variation possible, the ones a person will never consider, because it played millions of games, which at 2 games a day represents roughly 1,370 years; the computer ‘learned’ that the human never countered ‘a weird move’ before. Some can be corrected for, but that one offers opportunity, whilst at the same time exposing its opponent to additional risks. Now it is merely a simple calculation and the human loses. And as every human player lacks the ability to play for a millennium, the hardware wins, always, after that. The computer never learned desire, or human time constraints; as long as it has energy it never stops.

The article is amazing and showed me a few things I only partially knew, and one I never knew. It is an eye opener in many ways, because we are at the dawn of what is advanced machine learning, and as soon as quantum computing is an actual reality we will get systems with the setting that we see in the Upsilon meson (Y). Leon Lederman discovered it in 1977, so now we have a particle that is not merely off or on; it can be: null, off, on or both. An essential setting for something that will be close to true AI, a new way for computers to truly surpass their makers and an optional tool to unlock the universe, or perhaps merely a clever way to integrate hardware and software on the same layer?

What I got from the article is the realisation that the entire IT industry is moving faster and faster and most people have no chance to stay up to date with it. Even when we look at publications from 2 years ago, these systems have already been surpassed by players like Google, reducing storage to a mere cent per gigabyte, and that is not all; the media and entertainment industries are offered great leaps too. When we consider the partnership between Google and Teradici we see another path. When we see “By moving graphics workloads away from traditional workstations, many companies are beginning to realize that the cloud provides the security and flexibility that they’re looking for“, we might not see the scope of all this. So the article (at https://connect.teradici.com/blog/evolution-in-the-media-entertainment-industry-is-underway) gives us “Cloud Access Software allows Media and Entertainment companies to securely visualize and interact with media workloads from anywhere“, which might seem the ‘big load’ but it actually is not. This approach gives light to something not seen before. When we consider makers of software like Q Research Software and Tableau Software (Business Intelligence and Analytics) we see an optional shift: under these conditions, there is now a setting where a clever analyst with merely a netbook and a decent connection can set up the framework of dashboards and result presentations that will allow that analyst to produce the results and presentations for the bulk of all Fortune 500 companies in a mere day, making 62% of that workforce obsolete. In addition we see: “As demonstrated at the event, the benefits of moving to the cloud for Media & Entertainment companies are endless (enhanced security, superior remote user experience, etc.). And with today’s ever-changing landscape, it’s imperative to keep up. Google and Teradici are offering solutions that will not only help companies keep up with the evolution, but to excel and reap the benefits that cloud computing has to offer“. I take it one step further: as the presentation to stakeholders and shareholders is about telling ‘a story’, the ability to do so and adjust the story on the go allows for a lot more. The question is no longer the setting up of such systems, it is now reduced to correctly vetting the data used; the moment that falls away we will get a machine driven presentation of settings the machine need no longer comprehend, and as long as the story is accepted and swallowed, we will not question the data. A mere presented grey scale with the extremes filtered out. In the end we all signed up for this and the status quo of big business remains stable and unchanging, no matter what the economy does in the short run.

Cognitive thinking from the AI through the use of data, merely because we can no longer catch up, and in that we lose the reasoning and comprehension of data at the high levels we should have.

I wonder as a technocrat how many victims we will create in this way.

 

Leave a comment

Filed under Finance, IT, Media, Science

Waking up 5 years late

I have had something like this, I swear it’s true. It was after I came back from the Middle East; I was more of a ‘party person’ in those days and I would party all weekend non-stop. It would start on Friday evening and I would get home Sunday afternoon. So one weekend, I had gone through the nightclub, day club, bars and shoarma pit stops, after which I went home. I went to bed and got woken up by the telephone. It was my boss, asking me whether I would be coming to work that day. I noticed it was 09:30; I had overslept. I apologised and rushed to the office. I told him I was sorry that I had overslept and I did not expect too much noise, as it was the first time that I had overslept. So the follow-up question became “and where were you yesterday?” My puzzled look told him something was wrong. It was Tuesday! I had actually slept from Sunday afternoon until Tuesday morning. It would be the weirdest week in a lifetime. I had lost an entire day and I had no idea how I lost it. I still think back to that moment every now and then, the sensation of the perception of a week being different; I never got over it, now 31 years ago, and it still gets to me every now and then.

A similar sensation is optionally hitting Christine Lagarde, I reckon, although if she is still hitting the party scene, my initial response will be “You go girl!“

You see, with “Market power wielded by US tech giants concerns IMF chief” (at https://www.theguardian.com/business/2018/apr/19/market-power-wielded-by-us-tech-giants-concerns-imf-chief-christine-lagarde) we see the issues on a very different level. So even as we all accept “Christine Lagarde, has expressed concern about the market power wielded by the US technology giants and called for more competition to protect economies and individuals”, we see not the message, but the exclusion. So as we consider “Pressure has been building in the US for antitrust laws to be used to break up some of the biggest companies, with Google, Facebook and Amazon all targeted by critics“, I see a very different landscape. You see, as we see Microsoft, IBM and Apple missing in that group, it is my personal consideration that this is about something else. Microsoft, IBM and Apple have one thing in common: they are patent powerhouses and no one messes with those. This is about power consolidation, and the fact that Christine Lagarde is speaking out in such a way is an absolutely hypocritical setting for the IMF to have.

You see, to get that you need to be aware of two elements. The first is the American economy. Now in my personal (highly opposed) vision, the US has been bankrupt; it has been for some time, just like the entire Moody’s debacle in 2008. People might have seen it in ‘The Big Short‘, a movie that showed part of it, and whilst the Guardian reported ““Moody’s failed to adhere to its own credit-rating standards and fell short on its pledge of transparency in the run-up to the ‘great recession’,” principal deputy associate attorney general Bill Baer said in the statement“, it is merely one version of betrayal to the people of the US by giving protection to special people in excess of billions, and they merely had to pay a $864m penalty. I am certain that those billionaires have split that penalty amongst them. So, as I stated, the US should be seen as bankrupt. It is not the only part in this. The Sydney Morning Herald (at https://www.smh.com.au/business/the-economy/how-trump-s-hair-raising-level-of-debt-could-bring-us-all-crashing-down-20180420-p4zank.html) gives us “Twin reports by the International Monetary Fund sketch a chain reaction of dangerous consequences for world finance. The policy – if you can call it that – puts the US on an untenable debt trajectory. It smacks of Latin American caudillo populism, a Peronist contagion that threatens to destroy the moral foundations of the Great Republic. The IMF’s Fiscal Monitor estimates that the US budget deficit will spike to 5.3 per cent of GDP this year and 5.9 per cent in 2019. This is happening at a stage of the economic cycle when swelling tax revenues should be reducing net borrowing to zero“. I am actually decently certain that this will happen. Now we need to look back to my earlier statement.

You see, if the US borrowing power is nullified, the US is left without any options, unless (you saw that coming, didn’t you) the underwriting power of debt becomes patent power. Patents have been set as IP support. I attended a few of those events (being a Master of Intellectual Property Law) and even as my heart is in trademarks, I do have a fine appreciation of patents. In this, the econometrists of the world are seeing the national values and the value of any GDP supported by the economic value of patents.

In this, in 2016 we got “Innovation and creative endeavors are indispensable elements that drive economic growth and sustain the competitive edge of the U.S. economy. The last century recorded unprecedented improvements in the health, economic well-being, and overall quality of life for the entire U.S. population. As the world leader in innovation, U.S. companies have relied on intellectual property (IP) as one of the leading tools with which such advances were promoted and realized. Patents, trademarks, and copyrights are the principal means for establishing ownership rights to the creations, inventions, and brands that can be used to generate tangible economic benefits to their owner“; as such the cookie has crumbled into where the value is set (see attached). One of the key findings is “IP-intensive industries continue to be a major, integral and growing part of the U.S. economy“, and as such we see the tech giants that I mentioned as missing and not being mentioned by Christine Lagarde. It is merely one setting and there are optionally a lot more, but in light of certain elements I believe that patents are a driving force and those three have a bundle; Apple has so many that it can use those patents to buy several European nations. With IBM and its (what I personally believe to be) overvalued Watson, we have seen the entire mess moving forward, presenting itself and pushing ‘boundaries’ as we are set into a stage of ‘look what’s coming’! It is all about research, MIT and Think 2018. It is almost like Think 2018 is about the point of concept, the moment of awareness and the professional use of AI. In that, IBM, in its own blog, accidentally gave away the goods as I see it with: “As we get closer to Think, we’re looking forward to unveiling more sessions, speakers and demos“. I think they are close, they are getting to certain levels, but they are not there yet. In my personal view they need to keep the momentum going, even if they need to throw in three more highly exposed events, free plane tickets and all kinds of swag to flim-flam the audience. I think that they are prepping for events that will not be complete in an alpha stage until 2020. Yet that momentum is growing, and it needs to remain growing. Two quotes give us that essential ‘need’.

  1. The US Army signed a 33-month, $135 million contract with IBM for cloud services including Watson IoT, predictive analytics and AI for better visibility into equipment readiness.
  2. In 2017, IBM inventors received more than 1,900 patents for new cloud technologies to help solve critical business challenges.

The second is the money shot. An early estimate is outside the realm of most; you see, IP Watchdog gave us: “IBM Inventors received a record 9043 US patents in 2017, patenting in such areas as AI, Cloud, Blockchain, Cybersecurity and Quantum Computing technology“, and the low estimate is a value of $11.8 trillion. That is what IBM is sitting on. That is the power of just ONE tech giant, so how come Christine Lagarde missed out on mentioning IBM? I’ll let you decide, or perhaps it was Larry Elliott from the Guardian who missed out? I doubt it, because Larry Elliott is many things, stupid ain’t one. I might not agree with him, or at times with his point of view, but he is the clever one and his views are valid ones.

So in all this we see that there is a push, but is it the one the IMF is giving, or is there another play? The fact that banks have a much larger influence in what happens is not mentioned, yet that is not the play and I accept that; it is not what is at stake. There is a push on many levels and even as we agree that some tech giants have a larger piece of the cake (Facebook, Google and Amazon), a lot could have been prevented by proper corporate taxation, but most of the EU and the American Donald Duck (or was that Trump?) are all about not walking that road. The fact that Christine has failed (one amongst many) to introduce proper tax accountability on tech giants is a much larger issue, and it is not all on her plate in all honesty. So there are a few issues with all this, and the supporting views on all this are not given with “Lagarde expressed concern at the growing threat of a trade war between the US and China, saying that protectionism posed a threat to the upswing in the global economy and to an international system that had served countries well“; it is seen in several fields. One field was given by The Hill, in an opinion piece. The information is accurate; it is merely important to see that it reflects the views of the writer (just like any blog).

So with “Last December, the United States and 76 other WTO members agreed at the Buenos Aires WTO Ministerial to start exploring WTO negotiations on trade-related aspects of e-commerce. Those WTO members are now beginning their work by identifying the objectives of such an agreement. The U.S. paper is an important contribution because it comprehensively addresses the digital trade barriers faced by many companies“, which now underlines “A recent United States paper submitted to the World Trade Organization (WTO) is a notable step toward establishing rules to remove digital trade barriers. The paper is significant for identifying the objectives of an international agreement on digital trade“. This now directly gives rise to “the American Bar Association Section of Intellectual Property Law also requested that the new NAFTA require increased protections in trade secrets, trademarks, copyrights, and patents“, which we get from ‘Ambassador Lighthizer Urged to Include Intellectual Property Protections in New NAFTA‘ (at https://www.jdsupra.com/legalnews/ambassador-lighthizer-urged-to-include-52674/) less than 10 hours ago. So when we link that to the quote “The proposals included: that Canada and Mexico establish criminal penalties for trade secrets violations similar to those in the U.S. Economic Espionage Act, an agreement that Mexico eliminate its requirement that trademarks be visible, a prohibition on the lowering of minimum standards of patent protection“. So when we now look back towards the statement of Christine Lagarde and her exclusion of IBM, Microsoft and Apple, how is she not directly being a protectionist of some tech giants?

I think that the IMF is also testing the waters on what happens when the US economy takes a dip, because at the current debt levels that impact is a hell of a lot more intense, and the games like Moody’s have been played and cannot be played again. Getting caught on that level means that the US would have to be removed from several world economic executive decisions, not a place anyone on Wall Street is willing to accept, so at that point Pandora’s Box gets opened and no one will be able to close it. So after waking up 5 years late we see that the plays have been, again and again, about keeping the status quo, and as such digital rights are the one card left to play, which gives the three tech giants an amount of power they have never had before. So as everyone’s favourite slapping donkey (Facebook) is mentioned next to a few others, it is the issue of those not mentioned that will be having the cake and quality venison that we all desire. In this we are in a dangerous place, even more so the small developers who come up with the interesting IPs they envisioned. As their value becomes overstated from day one, they will be pushed to sell their IP way too early; more important, that point comes before their value comes to fruition, and as such those tech giants (Apple, IBM, and Microsoft) will get an even more overbearing value. Let’s be clear, they are not alone; the larger players like Samsung, Canon, Qualcomm, LG Electronics, Sony and Fujitsu are also on that list. The list of top players has around 300 members, including 6 universities (all American). So that part of the entire economy is massively in American hands and we see no clear second place, not for a long time. Even as the singled out tech giants are on that list, it is the value that they have that sets them a little more apart. Perhaps having a go at three of them, whilst one is already under heavy emotional scrutiny, is a small price to pay.

How nice for them to wake up. I merely lost one day once; they have been playing the sleeping game for years, and we will get that invoice at the expense of the futures we were not allowed to have. If you wonder how weird that statement is, then take a look at the current retirees, the devaluation they face, the amount they are still about to lose, and wonder what you will be left with when you consider that the social jar will be empty long before you retire. The one part we hoped to have at the very least is the one we will never have, because governments decided that budgeting was just too hard a task, so they preferred to squander it all away. The gap between those who have and those who have not will become a lot wider over the next 5 years, so those who retire before 2028 will see hardships they never bargained for. So how exactly are you served by addressing “‘too much concentration in hands of the few’ does not help economy“? They aren’t and you weren’t. It is merely the setting for what comes next, because in all this it was never about that. It is the first fear of America that counts. With ‘US ponders how it can stem China’s technology march‘ (at http://www.afr.com/news/world/us-ponders-how-it-can-stem-chinas-technology-march-20180418-h0yyaw), we start seeing that shift. So as we see “The New York Times reported on April 7 that “at the heart” of the trade dispute is a contest over which country plays “a leading role in high-tech industries”. The Wall Street Journal reported on April 12 that the US was preparing rules to block Chinese technology investment in the US, while continuing to negotiate over trade penalties“, we see the shifted theatre of trade war. It will be about national economic value with the weight of patents smack in the middle. In that regard, the more you depreciate other parts, the more important the value of patents becomes. It is not a simple or easy picture, but we will see loads of econometrists giving their view on all of that within the next 2-3 weeks.

Have a great weekend and please do not bother to wake up, it seems that Christine Lagarde didn’t bother waking up for years.

 

Leave a comment

Filed under Finance, IT, Law, Media, Politics, Science