
Overpricing or Segregation?

What is enough in a PC? That is the question many have asked in the past. Some state that for gaming you need the best hardware possible; for those who use a word processor, a spreadsheet, email and a browser, the minimum often suffices.

I have been in the middle of that equation for a long time; for well over a decade I sat at the high end of it, as gaming was my life. Yet the realisation we all had to face was that high end gaming is a game for those with high paying jobs. Now we see the NVIDIA GeForce GTX Titan Xp 12GB GDDR5X video card at $1950, whilst for the price of that one card you can buy a 65″ 4K TV together with either an Xbox One X or a PS4 Pro and do 4K gaming. Now consider that this is merely the graphics card and that the high end PC requires an additional $2K, which is how a PC with 4K gaming ends up at four thousand dollars. It is a small stretch, because you can get there with a little less, but the cheaper route also requires the hardware to be replaced sooner. So I moved to console gaming and I never regretted it. We all agree that I lost out on something, but I can live with that; I can truly enjoy gaming without the price. So in this situation, can someone explain to me how the new iMac Pro will cost you, in its maximum configuration, $20,743? Is there any justification for such an overpowered device? I reckon that those in professional video editing might need it, but when we consider those 43 people in Australia (at that high level), who else does it benefit?

In comparison, a maximised Mac Pro costs you $11,617, roughly 45% less. Now the comparison is not entirely fair, because the iMac Pro has an optional 4TB SSD and that is not a cheap item, but the issue is that overpowered hardware might seem cool and nice. Let’s be fair and compare this through MS Word: the bulk of all people will never use more than 20% of that text editor, a reality we accept, yet at $200 we do not care. Take the price a hundredfold, with $20,000 in the balance, and it adds up. And even as MS Word has one version, the computers do have options; a lesser configuration is available, and in its minimum configuration that new iMac Pro is still $7K. At twice the price of a 4K gaming machine, with no real option for gaming, is that not a system that is over the top?

Now, some might think it is, some will state it is not, and it is really in the eye of the beholder. Yet in this day and age, when we have been thrust into a stage where mobiles and most computer environments are set to a 2-4 year lifespan at best, how should we see the iMac Pro? In addition, when the base model of the Pro is 100% more expensive than the upgraded iMac 27″, is there a level of disjointed presentation?

Well, some do not think in that way and they are right to see it as such. One source (ZDNet) gives us: “The iMac Pro is aimed at professionals working with video (a lot of video), those into VR, 3D modeling, simulations, animation, audio engineers and such”, a view I wholeheartedly agree with, yet that view and that image have not been given in the marketing, on the Apple site, or even in the Apple stores. Now, first off, the Apple stores have not been misleading; most have kept to some strict version of the ‘party line’ and that is not a wrong stance. The view that ZDNet gives us at the end is also spot on. With “It’s Mac for the 1 percent of Mac users, not the 99 percent. For the 99 percent, yes, the iMac Pro is overpriced and just throwing away money, but for the 1 percent who need the sort of power that a system like that can generate, it’s very reasonably priced”, we see the issue: Mac is now segregating the markets, trying to get the elite back into the Mac fold. Their timing is impeccable. Microsoft made a mess of things, and with the chaotic state of gaming hardware the PC industry has become a mess. It moved towards the gamers, who now represent a market of $100 billion plus, and others went down the gaming route whilst to some extent ignoring the high end graphical industry. It is something I have heard a few times and, to be honest, I ignored it. I grew up there whilst being completely aware of all the hardware, which was 15-25 years ago. The graphical hardware market grew close to 1000%, so when I needed to dig into PC hardware for another reason, I was amazed just how much there was and how affordable some of it was. But in the highest gaming tier, the one tier where the needs of the gamer and the high end video editor overlap, we see a lag, because when you sell to 99 gamers and one video editor, most vendors will not give a toss about the one video editor. Most of those buyers know what they need, but that market is not well managed.
Issues like video drivers and Photoshop CC 2017 against Windows 10 are just a few of the dozens upon dozens of issues that seem to plague these users. Importantly, this is not just some Adobe issue; the problems seem to be in a state of flux. With “Microsoft warned that the April 2017 security update package has a known issue that could affect users’ computers and which the company is seeking to fix” a few months ago, we are starting to see more and more that Windows forgot that its core was not merely the gamer; it was an elite user group that it had slowly snagged away from Apple. Now Apple is striking back in the best way possible, by giving those people that niche again. Having pushed the people with money away, Microsoft might soon see that its cutting edge Azure targets for high end graphic applications become a pool of enjoyment for the core Microsoft Office users; a market that Microsoft is targeting just as Apple gets its ducks in a row and snatches that population away from them.

That is indeed a clever move, because that was the market that made Apple great in the first place. So as we read how Azure is aiming for the ArcGIS Pro population, we see that Apple has them outgunned and outclassed, and not by a small amount either. Here the iMac Pro could be the difference between real time prototyping and anticipated results awaiting aggregation. That would instantly make the difference between a shoddy $5K-$8K gaming system used for data and the $20K iMac Pro that can crunch data like a famished piranha; you can watch those results become reality before you finish your first coffee.

In addition, as soon as Apple makes the second step, we will see them getting a decent chunk of the Business Intelligence, forecasting and even the enterprise dashboarding market, because with 18 cores you can do it all at the same time. This is not the first, not the second and not even the third case where Microsoft dropped the ball. They went wide and forgot about the core business needs (or so you would think). Yet the question remains how many can, or are willing to, pay the $20K price, even as we know that there are options at the $8K and $13K level in that same device, because there is room for choice between 8 and 18 cores. For a lot of people the system is overpriced, we can all agree on that, but for those in the segregated markets it is not about a new player; it is more that the Windows driven PC market just lost a massively sized niche. It is the price we pay for catering to the largest common denominator, so the question becomes: ‘Can Microsoft hit back, and will it?’

Time will tell. What is certain is that the waiting is over and 2018 could see a massive shift of high end users towards Apple, a change we have not seen for the longest time. I wish them well, because in the end many average users will benefit from such a shift as well; in confusion there is profit, and Microsoft is optionally becoming one of the larger confused places in 2018.

So why should I care?

Apple started something that will soon be copied by A-brands like ASUS. It will remain a PC, but they now see that they want to keep the high end users they have. This comes almost exactly 20 years after I learned this lesson the hard way. A Dutch sales shop had a special deal on the Apple Performa, maxed out (as far as that was possible) for almost $2750, and I was happy as hell. My Apple (the first one 100% my own) and I had a great time. I never regretted buying it, but there was a snag: 3 months later that same shop had the Power Mac on special. For $1000 more (a lot in those days) it offered well over 300% more power, and new software soon no longer supported the Performa and older models; my system was outdated before the warranty ran out. We are about to see a similar shift. We know multi-core systems, they have been around for a while, yet the coming shift is larger. So as new technologies and new solutions are pushed on us whilst the current solutions are still broken to some extent, we will be pushed into a choice: will we follow the core or fall behind? Even as we see the marketing babble now on how it is upper tier, merely for the 1%, and we feel in agreement (for now), we are seeing a first wave of segregation. As the followers emphasise the high end computers, we will see a new wave of segregation.

And? So what? I do not want to pay too much!

This is a valid response for many players and many users; they do not have the needs IT people have, many merely see the need they have now, and that is not wrong, not in this life, as the economy is not coming back the way it needs to. Yet two elements are taking over. The first is Microsoft: we can’t get around them for the most part, and the way e-commerce and the corporate industry are moving shows itself to be both their opportunity and their flaw. As we see more push, with 90% of the Fortune 500 now stated to be on the Microsoft cloud, we see the need for multi-core systems more and more. Even as some might remember the quote from early 2017, “Find out why it’s the most complete #cloud solution”, the rest is only now catching on that the Azure cloud is dangerous in several ways. Chip Childers, the fearless leader of the Cloud Foundry Foundation, gives us “We are shifting to a “cloud-first” world more and more. Even with private data centres, the use of cloud technologies is changing how we think about infrastructure, application platforms and software development”, yet there is a danger here that goes unmentioned. This danger is slowly pushed onto us through the change that the US handed down yesterday. As Net Neutrality is being abolished, there is a real danger that certain blocks could grow on a global scale. So as we see trillions in market value shift, how long until other players set up barriers, set minimum business needs and cater to those above all others?

Core cloud solutions become a danger because they force the contemplation that it is no longer about bandwidth and the strength of your internet connection; the high end of business is moving back to the mainframe standards that ruled before the 90’s started. It will be about CPU time used. At that point it is not about the amount of data, but about access to CPU channels, and the user with a multi-core system will have a massive advantage whilst the rest are segregated back towards second level, decreased options. It does not change consumer use of places like Netflix, but when you require the power of your value to be in Azure, the multi-core systems are the key that enables you and disables the connection huggers and the non-revenue connected users: consumers paying a price for limited access.
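
To make the "CPU time used" argument concrete, here is a minimal toy model of how such billing differs from wall-clock waiting. All figures and function names are invented for the sketch; real cloud pricing is more involved.

```python
# Toy model: under CPU-time billing, a perfectly parallel workload costs the
# same regardless of core count, but the many-core user waits far less.
# The rate and the 36 CPU-hour job are illustrative assumptions only.

def wall_clock_hours(total_cpu_hours: float, cores: int) -> float:
    """A perfectly parallel job finishes in (total work) / (cores)."""
    return total_cpu_hours / cores

def bill(total_cpu_hours: float, rate_per_cpu_hour: float) -> float:
    """CPU-time billing charges for work done, not for time waited."""
    return total_cpu_hours * rate_per_cpu_hour

job = 36.0  # CPU-hours of analysis
print(wall_clock_hours(job, 4))   # → 9.0 hours of waiting on a 4-core desktop
print(wall_clock_hours(job, 18))  # → 2.0 hours on an 18-core machine
print(bill(job, 0.05))            # the bill is identical either way
```

The point of the sketch is the asymmetry: the bill is flat, so the only variable left to compete on is how many channels of CPU you can occupy at once.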

This is the future we push for; it is not created or instigated by Apple. Apple merely sees what will be needed in 4 years, when 5G is the foundation of our lives. I saw part of this as I designed part of a solution that will solve the NHS issues in the UK, the Netherlands, Sweden and Germany, but I was slow to see that the lesson I was handed the hard way in 1997 is also around the corner. Netflix and others (Google in part) are regressing towards the mean in some of the services and options that they will offer the global audience at large. The outliers (Google, Amazon, IBM, Microsoft and SAP) will soon be facilitators to the Expression Dataset of the next model of usage that comes. There will be a shift and it will go on until 2022, as 5G will enable players like NTT Data and Tata Communications to get an elevated seat, perhaps even a seat at that very table.

They will decide over the coming years that there is a shift, and as people decide the level of access they are getting, they will soon learn that they are not merely deciding for themselves, because the earlier their children get full access, the more options those children will have beyond their tertiary education. Soon we will learn that access is almost everything, but we will not learn that lesson the way we thought we would. Even I have no idea how this will play out, but such a shift beyond the iterative IT world we see now is exciting beyond belief. I hope I will end up being part of that world; I have been part of the IT/BI industry since 1980 and I am about to see a new universe of skills unfold before my very eyes. I wonder how far I will get into that part, because these players will all need facilitation of services and most of them have been commission driven for too long, meaning that they are already falling behind.

What a world we are about to need to live in!

 



Filed under Finance, IT, Media, Politics, Science

The Good, the Bad, and North Korea

This article is late in the making. There is a need to be first, but is that enough? At times it is more important to be well informed. So let’s start with the good. The good news is that if there is a nuclear blast, North Korea need not worry. The game maker Bethesda made a management simulator called Fallout Shelter. On your mobile device you can manage a fallout shelter, gather food, energy and water, manage how the people procreate and who gets to procreate, fight off invaders and grow the population to 200 people. So with two of these shelters, North Korea has a viable solution for not becoming extinct. The bad news is that North Korea has almost no smartphones, so there is no device around to actively grow the surviving community. Yes, this matters, and it is important to you. You see, the Dutch had some kind of media tour there around 2012. No cameras were allowed, yet the images came through, because as the cameras were locked away, the military and the official escorts were seemingly unaware that every journalist had a mobile with the ability to film. The escorting soldier had never seen a smartphone before in his life. So a year later, we get the ‘fake’ news in the Dutch newspaper (at https://www.volkskrant.nl/buitenland/noord-korea-beweert-smartphone-te-hebben-ontwikkeld-niemand-gelooft-het~a3493503/) that North Korea had finished ‘their’ own local smartphones. This is important, as it shows just how far behind North Korea is in certain matters.

The quote “Zuid-Koreaanse computerexperts menen dat hun noorderbuur genoeg van software weet om cyberaanvallen uit te voeren, zoals die op banken en overheidswebsites van eerder dit jaar. Maar de ontwikkeling van hardware staat in Noord-Korea nog in de kinderschoenen” states: “South Korean computer experts believe that their northern neighbour knows enough of software to instigate cyber-attacks, such as those on banks and Government websites earlier this year. But the development of hardware in North Korea remains in its infancy”. I believe this to be a half-truth. I believe that China facilitates to some degree, but it keeps its market on a short leash. North Korea remains behind on several fronts and that would show in other fields too.

This is how the two different parts unite. You see, even though America had its hydrogen bomb in 1952, it did not get there in easy steps; it had massive support on several fronts as well as the brightest minds this planet had to offer. The same could be said for Russia at the time. The History Channel, of all places, gives us “Opponents of development of the hydrogen bomb included J. Robert Oppenheimer, one of the fathers of the atomic bomb. He and others argued that little would be accomplished except the speeding up of the arms race, since it was assumed that the Soviets would quickly follow suit. The opponents were correct in their assumptions. The Soviet Union exploded a thermonuclear device the following year and by the late 1970s, seven nations had constructed hydrogen bombs”, so we get two parts here. Even though the evolution was theoretically set at 7-10 years, the actual device did not come until much later, and the other players, who had nowhere near the academic and engineering capacity, followed close to 18 years later. That is merely an explosion, something North Korea is claiming to consider. With the quote “North Korea’s Foreign Minister has said the country may test a hydrogen bomb in the Pacific”, we need to realise that the operative word is ‘may’. Even then, a large time lapse is coming. Now, I am not trying to lull you to sleep. The fact that North Korea is making these steps is alarming on a much larger scale than most realise. Even if a test fails, there is a chance that, because of failed safety standards, a setting that is often alien to North Korea, the radiation, wherever it ends up, could damage the biological environment beyond repair; it is in that frame that Japan is, for now, likely the only one that needs to be truly worried.

All this still links together. You see, the issue is not firing a long range rocket; it is keeping it on track and aiming it precisely. Just like the thousands of Hamas rockets fired at Israel with a misfiring percentage of roughly 99.92%, North Korea faces that same problem in a much larger setting. ABC touched on this in July, but never gave all the goods (at http://www.abc.net.au/news/2017-07-06/north-korea-missile-why-it-is-so-difficult-to-intercept-an-icbm/8684444). Here we see: “The first and most prominent is Terminal High Altitude Area Defence, or THAAD, which the US has deployed in South Korea. THAAD is designed to shoot down ballistic missiles in the terminal phase of flight — that is, as the ballistic missile is re-entering the atmosphere to strike its target. The second relevant system is the Patriot PAC-3, which is designed to provide late terminal phase interception, that is, after the missile has re-entered the atmosphere. It is deployed by US forces operating in the region, as well as Japan.” You see, that is when everything is in a 100% setting, but we forget that North Korea is not there. One of the most basic parts here is shown to undergrads at MIT by Richard C. Booton Jr. and Simon Ramo, executives at TRW Inc., the company whose parts would grow into military heavyweights like Northrop Grumman and the Goodrich Corporation. So these people are in the know, and they give us: “Today all major space and military development programs recognize systems engineering to be a principal project task. An example of a recent large space system is the development of the tracking and data relay satellite system (TDRSS) for NASA. The effort (at TRW) involved approximately 250 highly experienced systems engineers. The majority possessed communications systems engineering backgrounds, but the range of expertise included software architecture, mechanical engineering, automatic controls design, and design for such specialized performance characteristics as stated reliability”. That is the name of the game, and North Korea lacks the skill, the numbers and the evolved need for shielded electronic guidance. In the oldest days it could have been done with 10 engineers, but the systems have become more complex and their essential need for accuracy required evolution, all items lacking in North Korea. By the way, I will add the paper at the end, so you can read for yourself what other components North Korea is currently missing out on. All this is still an issue, because even as we see that there is potentially no danger to the USA and Australia, that safety cannot be given to China and Japan. Even if Japan is hit straight on, it will affect and optionally collapse part of the Chinese economy, because when the Sea of Japan or the Yellow Sea becomes the ‘Glowing Sea’, you had better believe that the price of food will go up by 1000% and clean water will become a reason to go to war. North Korea, no matter how stupid they are, is a threat. When we realise just how many issues North Korea faces, we see that all the testosterone imagery from North Korea is basically sabre rattling, and because they have no sabres, they try to mimic it with can openers. The realisation of all this is hitting you now, and as you realise that America is the only player that is an actual threat, we need to see the danger for what it is: a David and Goliath game where the US is the big guy and North Korea forgot their sling, so it becomes a one sided upcoming slaughter. It is, as I see it, diplomacy in its most dangerously failed stage. North Korea rants on and on, and at some point the US will have no option left but to strike back.
So in all this, let’s take one more look, so that you get the idea even better.

I got this photo from a CNN source, so its actual age is unknown, yet look at the background at the sheer antiquity that this desktop system represents. In a place where the leader of North Korea should be surrounded by high end technology, we see what looks like an antiquated Lenovo system, unable to properly play games from the previous gaming generation. And that is their high technology?

So here we see the elements come together. Whether or not you see Kim Jong-un as a threat to yourself, he could be an actual threat to South Korea, Japan, China and Russia. You see, even if everything goes right, there is a large chance that the missile suffers a technical issue and crashes prematurely; I put that chance at 90%. So even if it were fired at the US, the only ones in true peril are Japan, South Korea, Russia and lastly China, which only gets the brunt if the trajectory changes by a lot. After which the missile could accidentally go off. That is how I see it: whatever hydrogen bomb element they think they have, it requires a lot of luck for North Korea to set it off, because they lack the engineering capacity, the skills and the knowhow. That is perhaps scarier than anything else, because it would change marine biology, with the aftermath washing into the Pacific Ocean for decades to come. So when you consider the impact on sea life of Hiroshima and Nagasaki for the longest time, now consider the aftermath of a bomb hundreds of times more powerful, in the hands of a megalomaniac who has no regard for safety procedures. That is the actual danger we face, and the only issue is that acting up against him might actually be more dangerous; we are all caught between the bomb and an irradiated place. Not a good time to be living the dream, because it might just turn into a nightmare.

Here is the paper I mentioned earlier: booten-ramo


Filed under IT, Military, Politics, Science

A legislative system shock

Today the Guardian brings us the news regarding the new legislation on personal data. The interesting part starts with the image of Google and not Microsoft, which is a first item in all this; I will get back to it. The information we get with ‘New legislation will give people right to force online traders and social media to delete personal data and will comply with EU data protection‘ is actually something of a joke, but I will get back to that too. You see, it is the caption with the image that should have been at the top of all this. With “New legislation will be even tougher than the ‘right to be forgotten’ allowing people to ask search engines to take down links to news items about their lives”, we get to ask the question: who is the protection actually for?

The newspaper gives us this: “However, the measures appear to have been toughened since then, as the legislation will give people the right to have all their personal data deleted by companies, not just social media content relating to the time before they turned 18”, yet the reality is that this merely enables new facilitation for data providers to keep a backup with a third party. As I personally see it, the people in all this will merely be chasing a phantom wave.

We see the self-assured Matt Hancock standing there in the image, and in all this I see no reason to claim that these laws will be the most robust set of data laws at all. They might be more pronounced, yet I question how facilitation is dealt with. With “Elizabeth Denham, the information commissioner, said data handlers would be made more accountable for the data “with the priority on personal privacy rights” under the new laws”, you see that the user will always respond in the aftermath, meaning that the data has already been created.

We can laugh at the statement “The definition of “personal data” will also be expanded to include IP addresses, internet cookies and DNA, while there will also be new criminal offences to stop companies intentionally or recklessly allowing people to be identified from anonymous personal data”. It is laughable because it merely opens up venues for data farms in the US and Asia whilst diminishing the value of UK and European data farms. The mention of ‘include IP addresses‘ is funny, as the bulk of the people on the internet are on dynamic IP addresses; it is a protection for large corporations that sit on static addresses. The mention of ‘stop companies intentionally or recklessly allowing people to be identified from anonymous personal data‘ is an issue, as intent must be shown and proven; recklessness needs to be proven as well, and not on the balance of probabilities but beyond all reasonable doubt, so good luck with that idea!

As I read “The main aim of the legislation will be to ensure that data can continue to flow freely between the UK and EU countries after Brexit, when Britain will be classed as a third-party country. Under the EU’s data protection framework, personal data can only be transferred to a third country where an adequate level of protection is guaranteed”, I wonder: is this another twist in anti-Brexit? You see, none of this shows a clear ‘adequate level of protection‘, which tends to stem from technology, not from legislation; the fact that all this legislation is about ‘after the event‘ gives rise to all this. So as I see it, the gem is at the end, when we see “the EU committee of the House of Lords has warned that there will need to be transitional arrangements covering personal information to secure uninterrupted flows of data”. It makes me wonder what those ‘transitional arrangements‘ actually are and how the new legislation is covering policy on this.

You see, to dig a little deeper we need to look at Nielsen. There was an article last year (at http://www.nielsen.com/au/en/insights/news/2016/uncommon-sense-the-big-data-warehouse.html), here we see: “just as it reached maturity, the enterprise data warehouse died, laid low by a combination of big data and the cloud“, you might not realise this, but it is actually a little more important than most realise. It is partially seen in the statement “Enterprise decision-making is increasingly reliant on data from outside the enterprise: both from traditional partners and “born in the cloud” companies, such as Twitter and Facebook, as well as brokers of cloud-hosted utility datasets, such as weather and econometrics. Meanwhile, businesses are migrating their own internal systems and data to cloud services“.

You see, the actual danger in all that personal data is not the ‘privacy’ part; it is the utilities in our daily lives that are under attack. Insurance and health protection are all set by premiums and econometrics. These data farms are all about finding the right margins; the more they know, the less you get to work with, and they (read: their data) will happily move to wherever the cloud takes them. In all this, the strong legislation merely transports data. You see, the cloud has transformed data in one other way, the part Cisco could not cover. The cloud has the ability to move and work with ‘data in motion’, a concept that legislation has no way of coping with. The power (read: the 8 figure value of a data utility) lies in being able to do that, and the parties that need that data, personalised, are willing to pay through the nose for it; it is the holy grail of any secure cloud environment. I was actually relieved that it was not merely me looking at that part; another blog (at https://digitalguardian.com/blog/data-protection-data-in-transit-vs-data-at-rest) gives us the story from Nate Lord. He gives a few definitions that are really nice to read, yet the part he did not touch on to the degree I hoped for is that the new grail, the analysis of data in transit (read: in motion), is a cutting edge application. It is what the Pentagon wants, it is what the industry wants and it is what the facilitators want. It is a different approach to real time analysis, and with analysis in transit those people get an edge, an edge we all want.
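
The idea behind data in motion can be shown in a few lines. This is a toy sketch, not IBM Streams or any real product: statistics are computed over a stream using Welford's running-mean update, and each record is discarded the moment it has been processed, so there is never anything stored for a "right to be forgotten" to act on.

```python
# Toy "data in motion" analysis: derive insight from a stream without
# ever storing the records. The tick values are invented for the example.

def running_mean(stream):
    count, mean = 0, 0.0
    for value in stream:              # each value is processed, then discarded
        count += 1
        mean += (value - mean) / count
        yield mean                    # the analysis survives; the data does not

ticks = iter([10.0, 12.0, 11.0, 13.0])
for m in running_mean(ticks):
    print(m)  # → 10.0, 11.0, 11.0, 11.5
```

Deletion requests cannot touch this: by the time anyone asks, the raw values are gone and only the aggregate remains, which is exactly the loophole the article argues the legislation ignores.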

Let’s give you another clear example that shows the value (and the futility of legislation). Traders profit by being first, which is the start of real wealth. Whoever has the fastest connection gets the cream of the trade, which is why trade houses pay millions upon millions for the best of the best; the difference between 5ms and 3ms results in billions of profit. Everyone in that industry knows that. So every firm has a Bloomberg terminal (at $27,000 per terminal). Now consider the option of getting that data a millisecond faster, so the automated scripts could beat the wave of sales and get a much better price: how much are they suddenly willing to pay? This is a different level of armament; it is weaponised data. The issue is not merely the speed; it is the cutting edge of being able to do it at all.
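
The latency point can be sketched with an invented toy model: the first order to arrive at the exchange takes the quoted price, and everyone later trades against a worse one. The trader names and millisecond figures below are illustrative assumptions, not real market data.

```python
# Toy model of latency advantage: orders arrive at event_time + latency,
# and fills are granted strictly in arrival order.

def fills(event_time_ms, traders):
    """traders: dict of name -> line latency in ms; returns names in fill order."""
    arrivals = sorted(traders.items(), key=lambda kv: event_time_ms + kv[1])
    return [name for name, _ in arrivals]

order = fills(0.0, {"fund_a": 5.0, "fund_b": 3.0, "retail": 40.0})
print(order)  # → ['fund_b', 'fund_a', 'retail']
```

The 3ms line beats the 5ms line on every single event, which is why a 2ms edge compounds into the billions the paragraph mentions: the advantage is structural, not occasional.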

So how does this relate?

I am taking you back to the quotes “it would amount to a “right to be forgotten” by companies, which will no longer be able to get limitless use of people’s data simply through default “tick boxes” online” and “the legislation will give people the right to have all their personal data deleted by companies”. The issue here is not being forgotten, or being deleted. It is about the data never being stored, and data in motion is not stored, which shows the futility of the legislation to some extent. You might think that this is BS, but consider the quote by IBM (at https://www.ibm.com/developerworks/community/blogs/5things/entry/5_things_to_know_about_big_data_in_motion?lang=en); it comes from 2013, so IBM was already looking at these matters close to 5 years ago, as were all the large players like Google and Microsoft. With “data in motion is the process of analysing data on the fly without storing it. Some big data sources feed data unceasingly in real time. Systems to analyse this data include IBM Streams”, we get part of it. Now consider: “IBM Streams is installed on nearly every continent in the world. Here are just a few of the locations of IBM Streams, and more are being added each year”. In 2010 there were 90 streams on 6 continents, and IBM Streams is not the only solution. As you read that IBM article, you also read that Real-time Analytic Processing (RTAP) is a real thing; it already was then, and the legislation we now read about does not take care of this form of data processing. What the legislation does, in my view, is not give you any protection; it merely limits the players in the field. It only lets the really big boys play with your details. So when you see the reference to the Bloomberg terminal, do you actually think that you are not part of the data, or ever forgotten? Every large newspaper and news outlet would be willing to pay well over $127,000 a year to get that data on their monitors.
Let’s call them Reuter Analytic Systems (read: my speculated name for it), which gets them a true representation of all personalised analytical and reportable data in motion. So when they type the name they need, they will get every detail. In this, the events that were given 3 weeks ago on the ITPRO side (at http://www.itpro.co.uk/strategy/29082/ecj-may-extend-right-to-be-forgotten-ruling-outside-the-eu) sound nice, yet the quote “Now, as reported by the Guardian, the ECJ will be asked to be more specific with its initial ruling and state whether sites have to delete links only in the country that requests it, or whether it’s in the EU or globally” sounds like it is the real deal; yet this is about data at rest, the links are all at rest, so the data itself will remain, and as soon as HTML6 comes we might see the beginning of the change. There have been requests on that with “This is the single-page app web design pattern. Everyone’s into it because the responsiveness is so much better than loading a full page – 10-50ms with a clean API load vs. 300-1500ms for a full HTML page load. My goal would be a high-speed responsive web experience without having to load JavaScript“, as well as “having the browser internally load the data into a new data structure, and the browser then replaces DOM elements with whatever data that was loaded as needed“; it is not mere speed, it would allow for dynamic data (data in motion) to be shown. So when I read ‘UK citizens to get more rights over personal data under new laws‘, I just laughed. The article is 15 hours old and I instantly considered the issues I have shown you today. I will have to wait until the legislation is released, yet I am willing to bet a quality bottle of XO Cognac that data in motion is not part of this; better stated, it will be about stored data. All this whilst the new data norm is still shifting and, with 5G mobile technologies, stored data might actually phase out to be a much smaller dimension of data.
The larger players knew this and have been preparing for this for several years now. This is also an initial new need for the AI that Google wants desperately, because such a system could ascertain and give weight to all data in motion, something IBM is currently not able to do to the extent they need to.

The system is about to get shocked into a largely new format; that has always been the case with evolution. It is just that actual data evolution is a rare thing. It merely shows me how much legislation is behind on all this; perhaps I will be proven wrong after the summer recess. It would be a really interesting surprise if that were the case, but I doubt that will happen. You can see (read about that) for yourself after the recess.

I will follow up on this, whether I was right or wrong!

I’ll let you speculate which of the two I am, as history has proven me right on technology matters every single time (a small final statement to boost my own ego).

 


Filed under Finance, IT, Law, Media, Politics, Science

When the trust is gone

In an age where we see an abundance of political issues, an overgrowing need to sort things out, the news that was given visibility by the Guardian is the one that scared and scarred me the most. With ‘Lack of trust in health department could derail blood contamination inquiry‘ (at https://www.theguardian.com/society/2017/jul/19/lack-of-trust-in-health-department-could-derail-blood-contamination-inquiry), we need to hold in the first stage a very different sitting in the House of Lords. You see, the issues (as I am about to explain them) did not start overnight. In this I am implying that a sitting with Jeremy Hunt, Andrew Lansley, Andy Burnham and Alan Johnson in the dock is required. This is an issue that has grown from both sides of the aisle and as such there needs to be a grilling where certain people are likely to get burned for sure. How bad? That needs to be ascertained and it needs to be done as per immediate. When you see “The contamination took place in the 1970s and 80s, and the government started paying those affected more than 25 years ago” the UK is about to get a fallout of a very different nature. We agree that this is the term that was with Richard Crossman, Sir Keith Joseph, Barbara Castle, David Ennals, Patrick Jenkin, Norman Fowler, and John Moore. Yet in that instance we need to realise that this was in an age before computers, before certain data considerations and a whole league of other measures that are commonplace at this very instant. I remember how I aided departments with an automated document system, relying on 5.25″ floppies, with capabilities that were less than Wordstar or PC-Write ever offered. And none of those systems had any reliable data storage options.

The System/36 was flexible and powerful for its time:

  • It allowed 80 monitors and printers to be connected. All users could access the system’s hard drive or any printer.
  • It provided password security and resource security, allowing control over who was allowed to access any program or file.
  • Devices could be as far as a mile from the system unit.
  • Users could dial into a System/36 from anywhere in the world and get a 9600 baud connection (which was very fast in the 1980s) and very responsive for connections which used only screen text and no graphics.
  • It allowed the creation of databases of very large size. It supported up to about 8 million records, and the largest 5360 with four hard drives in its extended cabinet could hold 1.453 gigabytes.
  • The S/36 was regarded as “bulletproof” for its ability to run many months between reboots (IPLs).

Now, why am I going to this specific system, as the precise issues were not yet known? You see, in those days any serious level of data competency was pretty much limited to IBM; at that time Hewlett Packard was not yet at the level it would reach four years later, and Digital Equipment Corporation (DEC) had revolutionised systems with VAX/VMS, which became the foundation; or better stated, true relational database foundations were added through Oracle Rdb (1984), which would actually revolutionise levels of data collection.

Now, we get two separate quotes (not from the article) “Dr Jeremy Bradshaw Smith at Ottery St Mary health centre, which, in 1975, became the first paperless computerised general practice“, as well as “It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use“, the second one comes from the Oracle Rdb SQL Reference manual. The second part seems a bit of a stretch; consider the original setting of this. When we see Oracle’s setting of data integrity, consider the elements given (over time) that are now commonplace.

  • System and object privileges control access to application tables and system commands, so that only authorized users can change data.

  • Referential integrity is the ability to maintain valid relationships between values in the database, according to rules that have been defined.
  • A database must be protected against viruses designed to corrupt the data.

I left one element out for the mere logical reasons.
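Referential integrity, as listed above, is easy to demonstrate with today’s tools. A small sketch using SQLite from Python (the donor/batch tables are my own hypothetical example, not taken from any of the systems discussed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite only enforces foreign keys when asked
conn.execute("CREATE TABLE donor (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(
    "CREATE TABLE batch (id INTEGER PRIMARY KEY,"
    " donor_id INTEGER NOT NULL REFERENCES donor(id))"
)
conn.execute("INSERT INTO donor VALUES (1, 'D-001')")
conn.execute("INSERT INTO batch VALUES (10, 1)")        # valid relationship
try:
    conn.execute("INSERT INTO batch VALUES (11, 99)")   # donor 99 does not exist
    orphan_allowed = True
except sqlite3.IntegrityError:
    orphan_allowed = False                              # the engine rejects the orphan row
```

The engine itself refuses the invalid relationship; in the era described above, nothing of the kind stood between an operator and an orphaned record.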

Now, in those days, the hierarchy of supervisors and system owners was nowhere near what it is now (and often nowhere to be seen); referential integrity was a mere concept and data viruses were mostly academic, that is, until we got a small presentation by Ralf Burger in 1986. It was in the days of the Chaos Computer Club and my trusty CBM-64.

These elements are to show you that data integrity existed for academic purposes, yet the designers, who were in their data infancy often enough, had no real concept of rollback data events; some would only be designed much later, and the same goes for the application of databases to the extent that was needed. It would not be until 1982 that dBase II came to the PC market from the founding fathers of what would later be known as Ashton-Tate. George Tate and Hal Lashlee would create a wave that would get us dBase III and, with the creation of Clipper by the Nantucket Corporation, a massive rise in database creation as well as a growth of data products that had never been seen before; it was also the player that in the end propelled data quality towards the state it is in nowadays. With the network abilities within this product, nearly any final-year IT person could have a portfolio of clients, all with custom data-based products. Within 2-3 years (which gets us to 1989), a whole league of data quality, data cleaning and data integrity issues would surface in millions of places, all requiring solutions. It is my personal conviction that this was the point where data became adult; where data cleaning, data rollback as well as data integrity checks became actual issues that were seriously dealt with. So, here in 1989 we are finally confronted with the adult data issues that for the longest of times were correctly understood by no more than a few niche people, who were often enough disregarded (I know that for certain because I was one of them).
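For those who never met a ‘rollback data event’: the idea is that a change which turns out to violate a rule can be undone as if it never happened. A minimal Python/SQLite sketch (a modern stand-in of my own; none of the 1980s products mentioned above offered this out of the box):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ledger (id INTEGER PRIMARY KEY, amount REAL)")
conn.execute("INSERT INTO ledger VALUES (1, 100.0)")
conn.commit()

try:
    # a change is applied, found to violate a business rule, and must be undone
    conn.execute("UPDATE ledger SET amount = amount - 250.0 WHERE id = 1")
    (amount,) = conn.execute("SELECT amount FROM ledger WHERE id = 1").fetchone()
    if amount < 0:
        raise ValueError("balance went negative")
    conn.commit()
except ValueError:
    conn.rollback()   # the data event is undone as if it never happened

(amount,) = conn.execute("SELECT amount FROM ledger WHERE id = 1").fetchone()
```

Without a transaction log to roll back against, the systems of that era simply kept whatever bad state the last write left behind, which is part of the point being made here.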

So as for the essential safeguards that could have prevented, even to some degree, the events we see in the Guardian with “survivors initially welcomed the announcement, while expressing frustration that the decades-long wait for answers had been too long. The contamination took place in the 1970s and 80s“: certain elements would not come into existence until a decade later.

So when we see “Liz Carroll, chief executive of the Haemophilia Society, wrote to May on Wednesday saying the department must not be involved in setting the remit and powers of an inquiry investigating its ministers and officials. She also highlighted the fact that key campaigners and individuals affected by the scandal had not been invited to the meeting“, I am not debating or opposing her in what could be a valid approach. I am merely stating that to comprehend the issues, the House of Lords needs to take the pulse of events and of the steps taken by the Ministers who have been involved in the last 10 years.

When we see “We and our members universally reject meeting with the Department of Health as they are an implicated party. We do not believe that the DH should be allowed to direct or have any involvement into an investigation into themselves, other than giving evidence. The handling of this inquiry must be immediately transferred elsewhere“, we see a valid argument given, yet when we would receive testimonies from people, like the ministers in those days, how many would be aware and comprehend the data issues that were not even decently comprehended in those days? Because these data issues are clearly part of all of these events, they will become clear towards the end of the article.

Now, be aware, I am not giving some kind of a free pass, or suggesting that those who got the bad blood should be trivialised, ignored or even set on a side track; I am merely calling for a good and clear path that allows for complete comprehension and for the subsequent need of actual prevention. You see, what happens today might be better, yet can we prevent this from ever happening again? In this I have to make a side step to a non-journalistic source. We see (at https://www.factor8scandal.uk/about-factor/), “It is often misreported that these treatments were “Blood Transfusions”. Not True. Factor was a processed pharmaceutical product (pictured)“, so when I see the Guardian making the same bloody mistake, as shown in the article, we should ask certain parties how they could remain in that same stance of utter criminal negligence (as I personally see it), giving rise to intentional misrepresentation. When we see the quote (source: the Express) “Now, in the face of overwhelming evidence presented by Andy Burnham last month, Theresa May has still not ordered an inquiry into the culture, practice and ethics of the Department of Health in dealing with this human tragedy” with the added realisation that the actual culprit was not merely data, yet the existence of the cause through Factor VIII is not even mentioned, the Guardian steered clear via the quote “A recent parliamentary report found around 7,500 patients were infected by imported blood products from commercial organisations in the US” and in addition the quote “The UK Public Health Minister, Caroline Flint, has said: “We are aware that during the 1970s and 80s blood products were sourced from US prisoners” and the UK Haemophilia Society has called for a Public Inquiry. The UK Government maintains that the Government of the day had acted in good faith and without the blood products many patients would have died. 
In a letter to Lord Jenkin of Roding the Chief Executive of the National Health Service (NHS) informed Lord Jenkin that most files on contaminated NHS blood products which infected people with HIV and hepatitis C had unfortunately been destroyed ‘in error’. Fortunately, copies that were taken by legal entities in the UK at the time of previous litigation may mean the documentation can be retrieved and consequently assessed“, the sources the Express and the New York Times, we see for example the quote “Cutter Biological, introduced its safer medicine in late February 1984 as evidence mounted that the earlier version was infecting hemophiliacs with H.I.V. Yet for over a year, the company continued to sell the old medicine overseas, prompting a United States regulator to accuse Cutter of breaking its promise to stop selling the product” with the additional “Cutter officials were trying to avoid being stuck with large stores of a product that was proving increasingly unmarketable in the United States and Europe“, so how often did we see the mention of ‘Cutter Biological‘ (or Bayer pharmaceuticals for that matter)?

In the entire Arkansas Prison part we see that there are connections to cases of criminal negligence in Canada in 2006 (where the Canadian Red Cross fell on their sword) and Japan in 2007, as well as the visibility of the entire issue at Slamdance 2005. So as we see the rise of inquiries, how many have truly investigated the links between these people and how the connection to Bayer pharmaceuticals kept them out of harm’s way for the longest of times? How many people at Cutter Biological have not merely been investigated, but also indicted for murder? When we get ‘trying to avoid being stuck with large stores of a non-sellable product‘ we get the proven issue of intent. Because there were no recall-and-destroy actions, were there?

Even as we see a batch of sources giving us parts in this year, the entire visibility from 2005-2017 shows that the media has given no, or at best dubious, visibility in all this; even yesterday’s article at the Guardian shows the continuation of bad visibility with the blood packs. So when we look (at http://www.kpbs.org/news/2011/aug/04/bad-blood-cautionary-tale/) and see the August 2011 part with “This “miracle” product was considered so beneficial that it was approved by the FDA despite known risks of viral contamination, including the near-certainty of infection with hepatitis“, we wonder how the wonder drug got to be, or remain, on the market. Now, there is a fair defence that some issues would be unknown or even untested to some degree, yet ‘the near-certainty of infection with hepatitis‘ should give rise to all kinds of questions, and it is not the first time that the FDA is seen to approve bad medication, which gives rise to the question why they are allowed to be the cartel of approval as big bucks is the gateway through their door. When we consider the additional quote of “By the time the medication was pulled from the market in 1985, 10,000 hemophiliacs had been infected with HIV, and 15,000 with hepatitis C; causing the worst medical disaster in U.S. history“, how come it took six years for this to get any decent amount of traction within the UK government?

What happened to all that data?

You see, this is not merely about the events. I believe that if any old systems (a very unlikely reality) could be retrieved, how long would it take for digital forensics to find, in the erased (not overwritten) records, proof that certain matters could have been known from these very early records? Especially when we consider the infancy of data integrity and data cleaning, what other evidence could have surfaced? In all this, no matter how we dig in places like the BBC and other places, we see a massive lack of visibility on Bayer Pharmaceuticals. So when we look (at http://pharma.bayer.com/en/innovation-partnering/research-focus/hemophilia/), we might accept that the product has been corrected, yet their own site gives us “the missing clotting factor is replaced by a ‘recombinant factor’, which is manufactured using genetically modified mammalian cells. When administered intravenously, the recombinant factor helps to stop acute bleeding at an early stage or may prevent it altogether by regular prophylaxis. The recombinant factor VIII developed by Bayer for treating hemophilia A was one of the first products of its kind. It was launched in 1993“, so was this solution based on the evolution of getting thousands of people killed? The sideline “Since the mid-1970s Bayer has engaged in research in haematology focusing its efforts on developing new treatment options for the therapy of haemophilia A (factor VIII deficiency)“ adds to this. In all this, whether valid or not (depending on the link between Bayer Pharmaceuticals UK and Cutter Biological), the mere fact that these two are missing in all the mentions is a matter for additional questions, especially as Bayer became the owner of it all between 1974 and 1978, which puts them clearly in the required crosshairs of certain activities, like depleting bad medication stockpiles. Again, not too much being shown in the several news articles I was reading.
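As a side note on the ‘erased, not overwritten’ point: deleted records typically survive in the raw bytes until the space is reused, which is what gives digital forensics its chance. A toy Python sketch of carving such leftovers from a raw image (entirely my own illustration; the `REC|` marker and the record layout are hypothetical, real carving works against known file and record formats):

```python
def carve_records(image: bytes, marker: bytes = b"REC|") -> list[str]:
    """Scan a raw byte image for record signatures that deletion left behind."""
    found = []
    pos = image.find(marker)
    while pos != -1:
        end = image.find(b"\x00", pos)            # records are NUL-terminated here
        chunk = image[pos:end] if end != -1 else image[pos:]
        found.append(chunk.decode("ascii", "replace"))
        pos = image.find(marker, pos + 1)
    return found

# a 'deleted' file: the directory entry is gone, but the bytes were never overwritten
disk = b"\xde\xad" * 8 + b"REC|batch=77\x00" + b"\x00" * 16 + b"REC|batch=78\x00"
recovered = carve_records(disk)
```

Whether anything like this could still be done against 1980s media is exactly the open question raised above.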
When we see the Independent, we see ‘Health Secretary Jeremy Hunt to meet victims’ families before form of inquiry is decided‘; in this case it seems a little far-fetched that the presentation by Andy Burnham (as given in the Express) would not have been enough to give an immediate green light to all this. Even as the Independent is hiding behind blood bags as well, they do give the caption of Factor VIII with it, yet we see no mention of Bayer or Cutter. There is a mention of ‘prisoners‘ and the fact that their blood was paid for, yet no mention of the events in Canada and Japan, two instances that give rise to an immediate and essential need for an inquiry.

In all this, we need to realise that no matter how deep the inquiry goes, given the amount of evidence that could have been wiped or set asunder from the eyes of the people by the administrative gods of Information Technology as it was between 1975 and 1989, there is a dangerous situation. One that came unwillingly through the evolution of data systems, and one that seems to be the intent of the reporting media as we see the utter absence of Bayer Pharmaceuticals in all of this, whilst there is a growing pool of evidence through documentaries and other sources that seems to lose visibility as the media grows a habit of presentations that skate around the subject. Until the inquiry becomes official we see a lot less than the people are entitled to, so is that another instance of the ethical chapters of the Leveson inquiry? And when this inquiry becomes an actuality, what questions will we see absent or sidelined?

All this gets me back to the Guardian article as we see “The threat to the inquiry comes only a week after May ordered a full investigation into how contaminated blood transfusions infected thousands of people with hepatitis C and HIV“, so how about the events from 2005 onwards? Were they mere pharmaceutical chopped liver? In the linked ‘Theresa May orders contaminated blood scandal inquiry‘ article there was no mention of Factor VIII, Bayer (pharmaceuticals) or Cutter (biological). It seems that ethical issues have been trampled on, so a mention of “a criminal cover-up on an industrial scale” is not a mere indication; it is an almost given certainty. In all that, as the inquiry gets traction, I wonder how both the current and past governments will be adamant to avoid skating into certain realms of the events (like naming the commercial players), and when we realise this, will there be any justice for the victims, especially when the data systems of those days have been out of use for some time and the legislation on legacy data is pretty much non-existent? When the end balance is given, in (as I personally see it) a requirement of considering replacing whatever Bayer Pharmaceuticals is supplying the UK NHS, I will wonder who will be required to fall on the virtual sword of non-accountability. The mere reason being that when we see (at http://www.annualreport2016.bayer.com/) that Bayer is approaching a revenue of 47 billion (€ 46,769M) in 2016, should there not be a consequence for the players ‘depleting unsellable stock‘ at the expense of thousands of lives? This is another matter that is interestingly absent from the entire UK press cycle. And this is not me just speculating; the sources give clear absence whilst the FDA reports show other levels of failing. It seems that some players forget that lots of data is now globally available, which seems to fuel the mention of ‘criminal negligence‘.

So you have a nice day and when you see the next news cycle with bad blood, showing blood bags and making no mention of Factor VIII, or the pharmaceutical players clearly connected to all this, you just wonder who is doing the job for these journalists, because the data as it needed to be shown, was easily found in the most open of UK and US governmental places.

 


Filed under Finance, IT, Law, Media, Politics, Science

Confirmation on Arrival

Last week, I gave you some of the views I had in ‘Google is fine, not fined‘ (at https://lawlordtobe.com/2017/06/28/google-is-fine-not-fined/). I stated “This is not on how good one or the other is, this is how valid the EU regulator findings were and so far, I have several questions in that regard. Now, I will be the last one keeping governments from getting large corporations to pay taxation, yet that part is set in the tax laws, not in EU-antitrust. As mentioned the searchers before, I wonder whether the EU regulators are facilitating for players who seem more and more clueless in a field of technology that is passing them by on the left and the right side of the highway called, the ‘Internet Of Things’“. Five days later we see that my views were correct; again and again I have shown that looking behind the scenes is essential to see the levels of misinformation and betrayal. Now in ‘To tackle Google’s power, regulators have to go after its ownership of data‘ (at https://www.theguardian.com/technology/2017/jul/01/google-european-commission-fine-search-engines) we see: “The Google workshop at the Viva Technology show last month in Paris, which brought together players who shape the internet’s transformation“, this is what it always has been about. Who owns the data? Evgeny Morozov gives us a good story on what should be and what should not be; he pictures a possible upcoming form of feudalism, all drenched in data. It is no longer merely about data and applicability; it is more and more about governments becoming obsolete. The EU is the first evidence of this. The EU is regarded as something that sits on top of governments, yet that is not the case. It seems to be replacing them through orchestration. 
Mario Draghi is spending massive amounts of funds none of them have, yet in all this, yesterday we saw “The European Central Bank has been dealt a heavy blow after inflation in June tumbled further below target, despite extreme measures from policymakers to stoke the economic measure” as well as “Unless price rises are stronger, ECB chief Mario Draghi has signaled that he is unlikely to scale back the mammoth levels of support for the economy“, so it is he and the ECB who are now setting the precedent of spending, printing money with no value behind it to support it. So is it ‘wealth distribution‘ or ‘wealth abolishment‘?

If we agree that this economy has failed, if we believe that this way of life is no more, when we accept that a quarter of this planet’s population will be dead in roughly 25 years, what would come next? I would not presume to know that answer, yet can we imagine that if the dollar stops, we would need something else; in that case, is data not a currency?

Now, I am perfectly happy to be utterly wrong here, I am also weirdly unsettled with the notion that our money is dwindling in value day after day. Now let’s get back to the ‘view’ of Morozov. When we see “Alphabet has so much data on each of us that any new incoming email adds very little additional context. There are, after all, diminishing returns to adding extra pieces of information to the billions it already possesses. Second, it’s evident that Alphabet, due to competition from Microsoft and Amazon, sees its paying corporate clients as critical to its future. And it’s prepared to use whatever advantages it has in the realm of data to differentiate itself from the pack – for example, by deploying its formidable AI to continue scanning the messages for viruses and malware“, we see more than just an adjustment in strategy.

Yet, I do not completely agree. You see, data is only truly valued when it is up to date, so as data rolls over for new data, new patterns will emerge. That would be an essential need for anything moving towards an AI; in this, data in motion and evolving data are essential to the core of any AI, and that timeline is becoming more pressing than some realise.

When we consider a quote from a 2006 article relating to a 2004 occurrence, “Google published a new version of its PageRank patent, Method for node ranking in a linked database. The PageRank patent is filed under its namesake, Lawrence Page, and assigned to The Board of Trustees of the Leland Stanford Junior University; US Patent 7,058,628“, we should consider that the value it holds will diminish (read: be reduced) in 2024 (for Google that is). There is of course another side: this was ‘version 2‘, so others would be able to get closer with their own version. In 6 years, as the patent ends, it will be open to all to use. No matter what some have, you only need to switch to Bing for a few days to see how struggling and incomplete it is. When you realise that Microsoft has no way at present to offer anything close to it, you get the first insight into how high the current Google value is and how much it scares governments and large corporations alike.
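For reference, the core idea the patent covers can be sketched as a simple power iteration (a textbook rendition in Python, not the patented implementation; the graph and damping value are illustrative):

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: {page: [pages it links to]}; returns a score per page."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal rank everywhere
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                    # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:                               # otherwise split rank among its links
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
scores = pagerank(graph)
```

The iteration itself is simple; what competitors cannot easily replicate is the scale of the link and usage data it is run against, which is the point being made here.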

Now we get to the ‘ground works’ of it. From this we can see that Google seems to have been the only one working on an actual long term strategy, something others stopped doing a long time ago. All we see from Microsoft and IBM has been short term, masquerading as long term goals, with 70% of those goals falling into disrepair and becoming obsolete through iteration (mainly to please the stakeholders they report to); is it such a surprise that I or anyone else would want to be part of an actual visionary company like Google? If Google truly pulls off the AI bit (it has enough data) we would see a parsing of intelligence (read: Business Intelligence) on a scale never witnessed before. It would be like watching a Google Marine holding a 9mm, whilst the opposite is the IBM Neanderthal (read: an exaggeration, the IBM would be the Cro-Magnon, not Neanderthal) holding a pointy stick named Watson. The difference would be that extreme. In all this, governments are no longer mentioned. They have diminished into local governments organising streams of data and facilitating consumers, mere civil servants in service of the people in their district. Above that, those levels of workers would become obsolete; the AI would set structures and set resources for billions. We went from governments to organisations, we left fair opportunity behind and moved to ‘those who have and those who have not‘, and they are soon to be replaced by the ‘enablers and obstructers‘, and those who are the latter would fall into the shadows and fade away.

Am I Crazy?

Well, that is always a fair argument, yet in all this, we have Greece as an initial example. Greece is possibly the only European nation with a civilisation that would soon become extinct twice. So as we see reports of lagging tourism revenue on top of highly regarded rises in GDP, rises we know are not happening as the revenues are down by a larger margin (source: GTP); Greek revenue is down by 6.8 percent, which is massive! This gives stronger notions that the ‘beckoning of Greek bonds‘ is nothing more than a façade of a nation in its final moments of life. The fact that the ECB is not giving it any consideration in its trillion spending could also be regarded as evidence that the ECB has written off Greece. So tell me, when was the last time that nations were written off? Some of the press is now considering the works of former ‘rock star’ Yanis Varoufakis. Yet in all this, when did they actually change the landscape by investigating and prosecuting those who got Greece into the state it is in now? In the end, only the journalist who released a list of millionaires pulling their money out of Greece went to prison. So, as such, Greece is a first piece of evidence that governments are no longer the powers they once claimed they were, and the fact that fewer and fewer government officials are being held to account when it comes to larger financial transgressions is also a factor as to why the people of those nations no longer give them any regard.

The second view is in the UK; here we see ‘U.K. to End Half Century of Fishing Rights in Brexit Slap to EU‘. In this Bloomberg gives us “Prime Minister Theresa May will pull Britain out of the 1964 London convention that allows European fishing vessels to access waters as close as six to twelve nautical miles from the U.K. coastline“; in here we also see “This is an historic first step towards building a new domestic fishing policy as we leave the European Union — one which leads to a more competitive, profitable and sustainable industry for the whole of the U.K.“, which is only partially true. You see, Michael Gove has only a partial point and it is seen with: “Britain’s fishing industry is worth 775 million pounds and in 2015 it employed 10,162 full-time fishermen, down from about 17,000 in 1990. In almost three decades, fleet numbers dropped a third to 6,200 vessels and the catch has shrunk 30 percent“. The part that is not given is that from 1930 onwards engineering made massive strides in the field of ship engines, not large strides but massive ones. A ship and its crew can catch fish, yet it is the engines that allow the nets to be bigger and the winches to be stronger to hoist those filled nets. In the ‘old’ days 2000 horsepower was a really powerful vessel, which amounted to roughly 1.5 megawatts. Nowadays these boats start at well over 300% of what was, so not only are the ships larger, can hold more fish and pull more weight, these ships are also getting more efficient at finding fish. I personally witnessed one of the first colour-screen fish radars in 1979. In this field technology has moved far beyond that, almost 4 decades beyond it. If there is one part clearly shown, then it is the simple fact that technology changed industries, which has been a given for the better part of three generations. 
Not merely because we got better at what we do or how we do it; as fishing results show that catches have been down by 30%, there is the optional element that there is less to catch because we got too efficient. It is a dwindling resource and fishing is merely the first industry to see the actual effects that a lack of restraint leads to.
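The engine-power figures above hold up under simple arithmetic; here is a quick sketch, assuming mechanical horsepower (745.7 watts per hp) and the article's own "well over 300%" multiplier, with all numbers illustrative:

```python
# Back-of-envelope check of the vessel power figures in the text.
# Assumption: mechanical horsepower, 1 hp = 745.7 W.
HP_TO_WATTS = 745.7

def hp_to_megawatts(hp: float) -> float:
    """Convert mechanical horsepower to megawatts."""
    return hp * HP_TO_WATTS / 1_000_000

old_vessel = hp_to_megawatts(2000)   # the 'old days' powerful vessel
modern_vessel = old_vessel * 3       # 'well over 300% of what was'

print(f"2000 hp ~= {old_vessel:.2f} MW")              # ~1.49 MW, rounded to 1.5 in the text
print(f"modern starting point ~= {modern_vessel:.2f} MW")
```

The 2,000 hp figure works out to about 1.49 MW, matching the rounded 1.5 MW in the text.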

So when we see a collapsed industry, can we blame governments? Who can we blame, and is blame an actual option? In this, is there any validity in the notion that this part of government has surpassed its date of usefulness? Perhaps yes, and there is equal consideration that this is not the case, yet the number of consumers keeps growing and as available resources go down we see the need for other solutions.

This is merely a first part. As we now move into the US and their 4th of July party, I will look at other sides as well, sides we stopped considering. You see, there is opposition and it is growing. CNBC gives us one side to this with ‘Google Deep Mind patient data deal with UK health service illegal, watchdog says‘ (at http://www.cnbc.com/2017/07/03/google-deepmind-nhs-deal-health-data-illegal-ico-says.html); three points were raised: “A data sharing deal between Google’s Deep Mind and the U.K.’s National Health Service “failed to comply with data protection law“, the U.K.’s Information Commissioner’s Office (ICO) said“, “The deal between the two parties was aimed at developing a new app called Streams that helped monitor patients with acute kidney disease” as well as “the ICO said that patients were not notified correctly about how their data was being used“. Now, we can agree that an optional situation could exist. So does Elisabeth Denham have a point? For now let’s agree that she does; I would reckon that there has been a communicative transgression (this is how she plays it), yet is she being overly formal or is she trying to slice the cake in a different way? The strongest statement is seen with “For example, a patient presenting at accident and emergency within the last five years to receive treatment or a person who engages with radiology services and who has had little or no prior engagement with the Trust would not reasonably expect their data to be accessible to a third party for the testing of a new mobile application, however positive the aims of that application may be.” OK, I can go along with that; we need certain settings for any level of privacy to be contained, yet… there is no yet! The issue is not Google; the issue is that the data protection laws are there for a reason and now they will hinder progress as well.
As health services, and especially the UK NHS, will need to rely on other means to stay afloat as costs weigh it more and more to the bottom of an ocean of funding shortage, the NHS will need to seek other solutions that set an upward movement whilst the costs are slowly being worked on; it will take a long time and plenty of cash to sort out, and Google is merely one player who might solve a partial issue. Yet the news could go in other directions too. Google is the largest, yet not the only player in town. As people seem to focus on marketing and presentations, we see IBM and to a smaller extent Microsoft, and we all forget that Huawei is moving up in this field and gaining momentum. The cloud data centre in Peru is only a first step. It is only the arrogance of Americans to think that this field is an American field. With Peru, India and China, Huawei is now active on a global scale. It has hired the best of the best that China has to offer, and that is pretty formidable. There is no way that Huawei could catch up with Google in the short term, yet their services are now at a stage where they can equal IBM. As we see a race for what is now at times called the IoT landscape, we see the larger players fight for the acceptance of ‘their IoT standard’, and even as we see IBM mentioned, we see clearly that Google has a large advantage in achievements here and is heading the number of patents in this field. As Huawei is pretty much accepting the Google IoT standard, we see that they can focus on growth, surpassing IBM, Qualcomm and Intel. In this Huawei will remain behind Apple in size and revenue, but as it is not in that field in a truly competitive way, Huawei might not consider Apple a goal; yet as they grow in India, Huawei could surpass the Tata group within 2 years.

So how does this matter?

As we see the steps (the not incorrect steps) of Elisabeth Denham, and the acts we saw in the Guardian on how regulators are trying to muzzle and limit the growth and activities of Google, how much influence do they have with Huawei? Even as we see that Huawei is privately owned, there have been a few articles on Ren Zhengfei and his connection to the Chinese military. It has spooked the US in the past, and consider how spooked they will get when Huawei grows their service levels in places like Greece, Spain and Italy. What will the EU state? Something like “your money smells, we will not accept it“? No! The EU is in such deep debt that they will invite Huawei like the prodigal son being welcomed home. So whilst everyone is bitching about how Google needs to be neutered, those people allow serious opponents and threats to Google’s data future to catch up. Huawei is doing so, one carrier at a time, and they are doing it in a global way.

So as we see all kinds of confirmations from media outlets all over the world, we seem to forget that Google is not the only player in town. With its growth in EU nations like Spain, where a new Android-based set-top box (STB) makes Huawei a competitor for Telefonica, Vodafone and Orange, it now has a growing beachhead into Europe with decent technology for a really affordable price. In a place where they all complain that there is no economy, Huawei is more than a contender and it is growing business where others had mere presence and sustainable levels of revenue. It is merely a contained view of how the EU regulators seem to be fumbling the ball for long-term growth, whilst handing opportunity to China (read: Huawei), who will be eagerly exporting to Europe the products they can.

In all this, CoA can be seen as a mere confirmation: a Course of Action by regulators, the Court of Appeal for Google, the Cost of Application for Huawei, the Coming of Age for Business Intelligence and the Center of Attention that Google is calling on themselves; whether intentional or not does not matter. We are left with the question whether, at this point, the limelight is the best for them; we will leave that to Mr. Alphabet to decide.


Filed under Finance, IT, Law, Media, Politics, Science

Google is fine, not fined

Yup, that’s me in denial. I know that there will be an appeal and it is time for the EU to actually get a grip on certain elements. In this matter I do speak with some expert authority, as I have been part of the Google AdWords teams (not employed by Google though). The article ‘Google fined record €2.4bn by EU over search engine results‘ (at https://www.theguardian.com/business/2017/jun/27/google-braces-for-record-breaking-1bn-fine-from-eu) is a clear article. Daniel Boffey gives us the facts of the case, which is what we were supposed to read and get. Yet there is another side to it all, and I think people forgot just how terribly bad the others are. So when I read: “By artificially and illegally promoting its own price comparison service in searches, Google denied both its consumers real choice and rival firms the ability to compete on a level playing field, European regulators said“, let’s start with this one and compare it to the mother of all ….. (read: Bing). First of all, there is no ‘Shopping’ tab. So there is that! If I go into their accursed browser (read: Internet Explorer), I get loads of unwanted results. In light of the last few days I had to enter ‘Grenfell .co.uk‘ a few times and guess what, I get “Visit Grenfell, Heart of Weddin Shire” in my top results, a .org.au site. The place is in NSW. Did I ask for that? Google gives a perfectly fine result. Now, I am not including the top ads, as the advertisers can bid for whatever solution they want to capture. So let’s have a look at Bing Ads. First, I can choose to be visible in Aussie or Kiwi land, I can be visible globally or I can look at specific locations. So how do you appeal to both the Australian and Scandinavian markets? Oh, and when you see the Bing system, it is flawed, yet it uses all the Google AdWords terms and phrases: callout extensions, snippets. They didn’t even bother to give them ‘original’ Bing names. And I still can’t see a way to target nations.
So when we see a copy to this extent, we see the first evidence that Google made a system that a small-time grocery shop like Microsoft cannot replicate at present. We can argue that the user interface is a little friendlier for some, but it is lacking in several ways and soon, when they are forced to overhaul it, you get a new system to learn. So when the racer (Micro$oft) comes in an Edsel and is up against a Jaguar XJ220, is it dominance by manipulating the race, or should the crying contender have considered coming in an actual car?

Next, when I read ‘rival firms the ability to compete on a level playing field’, should the EU regulator consider that the other player does not have a shopping tab, and that the other player has a lacking advertisement management system that requires massive overbidding to get there? Then we get the change history. I cannot see specifics like ‘pausing a campaign‘, which seems like a really important item to show; for the most part ALL changes are important, and the user is not shown several of them.

In the end, each provider will have its own system; it is just massively unsettling how this system ‘mimics’ Google AdWords. Yet this is only the beginning.

The quote “The commission’s decision, following a seven-year probe into Google’s dominance in searches and smartphones, suggests the company may need to fundamentally rethink the way it operates. It is also now liable to face civil actions for damages by any person or business affected by its anti-competitive behaviour” really got me started. So, if we go back to 2009, we see the BBC (at http://news.bbc.co.uk/2/hi/business/8174763.stm) give us “Microsoft’s Bing search engine will power the Yahoo website and Yahoo will in turn become the advertising sales team for Microsoft’s online offering. Yahoo has been struggling to make profits in recent years. But last year it rebuffed several takeover bids from Microsoft in an attempt to go it alone”; in addition there is “Microsoft boss Steve Ballmer said the 10-year deal would provide Microsoft’s Bing search engine with the necessary scale to compete“. Now he might well be the 22nd richest person on the planet, yet I wonder how he got there. We have known that the Yahoo system has been flawed for a long time. I was a Yahoo fan for a long time; I kept my account for the longest of times, and even when Google was winning the race, I remained a loyal Yahoo fan. It got me what I needed. Yet over time (2006-2009) Yahoo kept lagging more and more, and Tim Weber, the business editor of the BBC News website, stated it most clearly: “Yahoo is bowing to the inevitable. It simply had neither the resources nor the focus to win the technological arms race for search supremacy“. There is no shame here; Yahoo was not number one. So as we now realise that the Bing search engine is running on a flawed chassis, how will that impact the consumer? Having a generic chassis is fine, yet you lose against the chassis of a Bentley Continental. Why? Because the designer was more specific with the Bentley; it was purpose-built!
As Bentley states: “By bringing the Speed models 10mm closer to the ground, Bentley’s chassis engineering team laid the foundation for an even sportier driving experience. To do so they changed the springs, dampers, anti-roll bars and suspension bushes. The result is improved body control under hard cornering, together with greater agility“; one element influences the other, and the same applies to online shopping, which gets us back to Steve Ballmer. His quote to the BBC, “Through this agreement with Yahoo, we will create more innovation in search, better value for advertisers, and real consumer choice in a market currently dominated by a single company“ — is that so? You see, in 2009 we already knew that non-Google algorithms were flawed. It wasn’t bad, but there was the clear indication that the Google algorithms were much better; these algorithms were studied at universities around the world (also at the one I attended). PageRank as Stanford University developed it was almost a generation ahead of the rest, and when the others realised that presentations and boasts didn’t get the consumer anywhere (I attended a few of those too), they lost the race. The other players were all about the corporations and getting them online, getting the ‘path built’ so that the people would buy. Yet Google did exactly the opposite: they wondered what the consumer needed and tended to that part, which won them the race, and it got transferred into the advertisement dimension as such. Here too we see the failing, and the BBC published it in 2009. So as for the second quote, “Microsoft and Yahoo know there’s so much more that search could be. This agreement gives us the scale and resources to create the future of search“ — well, that sounds nice and all marketed, yet the shown truth was that at this point their formula was flawed, Yahoo was losing traction and market share on a daily basis, and what future?
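The PageRank idea mentioned above can be sketched in a few lines. This is a minimal power-iteration sketch over a tiny hypothetical link graph (damping factor 0.85, as in the original Stanford formulation); it is illustrative only, not the production algorithm:

```python
# Minimal PageRank by power iteration. Each page starts with equal rank;
# on every pass, a page shares its rank equally among its outgoing links,
# scaled by the damping factor, plus a small uniform 'teleport' share.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

# Hypothetical three-page web: A and B link to each other and to C; C links to A.
graph = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A"]}
ranks = pagerank(graph)
print({p: round(r, 3) for p, r in ranks.items()})
```

Because C sends everything to A, page A ends up ranked highest even though all three pages have the same number of incoming links; that link-weighting is the part the keyword-counting engines of the day lacked.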
The Bing system currently looks like a ripped-off copy (a not so great one) of the Google AdWords system, so how is there any consideration of ‘the ability to compete on a level playing field‘? In my view the three large players all had their own systems, and the numbers two and three were not able to keep up. So is this the case (as the EU regulator calls it) of “by promoting its own comparison shopping service in its search results, and demoting those of competitors“, or is there a growing case that the EU regulator does not comprehend that the algorithm is everything and the others never quite comprehended the extent of the superiority of the Google ranks? Is Google demoting others, or are the others negating elements that impact the conclusion? In car terms: the Google car is the only one using nitro, whilst the use of nitro is perfectly legal (in this case). In addition, in 2015 we see ‘Microsoft loses exclusivity in shaken up Yahoo search deal‘ as well as “Microsoft will continue to provide search results for Yahoo, but in a reduced capacity. The two have renegotiated the 2009 agreement that saw Redmond become the exclusive provider of search results for a company that was once known for its own search services. This came amid speculation that Yahoo would try to end the agreement entirely“; so not only are they on a flawed system, they cannot agree on how to proceed as friends. So why would anyone continue on a limited system that does not go everywhere? In addition, in April 2015 we learn “The other major change is that Microsoft will now become the exclusive salesforce for ads delivered by Microsoft’s Bing Ads platform, while Yahoo will do the same for its Gemini ads platform“. So Yahoo is cutting its sales team whilst Microsoft has to grow a new one, meaning that customers have to deal with two systems now. In addition, they are now dealing with companies having to cope with a brain drain. Still, how related are these factors?

I personally see them as linked. One will influence the other; just as changing the car chassis to something much faster will impact suspension and wheels. We see a generalised article (at no fault to the Guardian or the writer), yet I want to see the evidence the EU regulator has; I have been searching for the case notes and so far no luck. Yet in my mind, those involved on the EU regulator side do not really comprehend the technology. This can be gotten from “According to an analysis of around 1.7bn search queries, Google’s search algorithm was systematically giving prominent placement to its own comparison shopping service to the detriment of rival services“ — where is that evidence? Analyses are the results of the applied algorithm (when it is done correctly), and in this the advertiser is still the element not accounted for. I have seen clients willing to bid through the roof for one keyword, whilst today I notice that some of the elements of Bing Ads do not support certain parts, so that means that my results will be impacted by no less than 10%-20% on the same bidding. So is it ‘demoting results of competitors‘, or is the competitor system flawed so that it requires bids that are 20% higher just to remain competitive? And if I can already state that there are dodgy findings based on the information shown, how valid are the EU regulator’s findings and, more importantly, where else did they lack ‘wisdom’?
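The bidding claim above can be made concrete with a back-of-envelope sketch. All numbers here are hypothetical: if a platform's targeting loses a given fraction of result effectiveness, the bid needed to buy the same effective exposure scales by one over what remains:

```python
# Back-of-envelope: bid uplift needed on a weaker ad platform.
# 'effectiveness_loss' is the hypothetical 10%-20% impact from the text.
def required_bid(base_bid: float, effectiveness_loss: float) -> float:
    """Bid needed on the weaker platform for the same effective reach."""
    return base_bid / (1.0 - effectiveness_loss)

base = 1.00  # hypothetical $1.00 keyword bid on the stronger platform
for loss in (0.10, 0.20):
    print(f"{loss:.0%} effectiveness loss -> bid ${required_bid(base, loss):.2f}")
```

A 20% effectiveness loss already forces a 25% higher bid for the same reach, which is the sense in which a flawed system, not any demotion, makes the competitor more expensive to use.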

There are references to AdSense and, more importantly, the issue they have with it. Yet when we consider that the EU is all about corporations, these places want facilitation, and as they ignored AdSense, that solution started to get traction via bloggers and information providers. So when we see: “In a second investigation into AdSense, a Google service that allows websites to run targeted ads, the commission is concerned that Google has reduced choice by preventing sites from sourcing search ads from competitors“. Is that so? Larger publishing houses like VNU (well over 50 magazines and their related sites) signed on, so in 2005 Google got new clients and as such grew a business. And that was just in the Netherlands. Now those who were merely sulking in a corner, trying to present systems they would not have until 4 years later, are crying foul?

There are leagues of comparison sites. One quote I really liked was “Google is like the person that has it all together but is too conservative sometimes, and Bing is like the party friend who is open to anything but is a hot mess”. Another quote is from 2016: “With Bing Ads though, you can only show your ads on the Content Network if you’re targeting the entire US”. So, an issue of targeting shown in 2016, an issue that Google AdWords did not have a year earlier. This is important because if you cannot target the right people, the right population, you cannot be competitive. This relates to the system and the EU regulators, because a seven-year ‘investigation’ shows that a year ago the other players were still lagging behind Google. In addition, when we read in the Guardian article: “the EU regulator is further investigating how else the company may have abused its position, specifically in its provision of maps, images and information on local services”, we need to realise that, in car terms, the other players are confined to the technology of 1989 whilst Google has the Williams F1 FW40 of 2017. The difference is big and getting bigger. It is more than technology: whilst Microsoft is giving the people some PowerPoint-driven speech on retention of staff, something that IBM might have given the year before, Google is boosting mental powers and pushing the envelope of technology. Whilst Bing Maps exists, it merely shows why we needed to look at the map in Google. This is the game: Microsoft is merely showing most people why we prefer to watch them on Google, and it goes beyond maps, beyond shopping. As I personally see it, Microsoft is pushing whatever it can to boost the Azure cloud, IBM is pushing in every direction to get traction on Watson, and Google is pushing every solution on its own merit; that basic difference is why the others cannot keep up (that’s just a personal speculative view).
I noticed a final piece of ‘evidence’ in a marketing-style picture, which I am adding below. So consider the quote ‘51 million unique searchers on the Yahoo! Bing Network do not use GOOGLE’, then consider those trying to address those 51 million, whilst they could be addressing 3.5 billion searchers.

The business sector wants results, not proclaimed concepts of things to come. Microsoft is still showing that flaw with their new consoles and the upcoming Scorpio system (Xbox One X); users want storage, not streaming issues. They went from a gaming market that was almost on equal terms with Sony (Xbox 360 vs PlayStation 3) to a situation where they now have a mere 16% of the Sony market, and that is about to drop further still as Nintendo is close to surpassing Microsoft too.

There is always a niche market (many people) who want to kick the biggest player in town; I get that. Yet given the issues shown at present, and as far as I understand the technology, I feel that the EU regulators are failing in a bad way. I might be wrong here, and if I get the entire commission papers and issues are found, I will update this article, as I am all about informing people as well and as correctly as possible. Yet the one element that is most funny is that when I open up Internet Explorer and type in ‘Buy a Washing Machine‘, Bing gives me 8 options: 7 from David Jones and 1 from Snowys Outdoors, which is a portable one and looks like a cement mixer. So when was the last time you went to David Jones to look at a washing machine? In Google Chrome I get 6 models on the right side, with 3 from Harvey Norman, 2 from the Good Guys and one from Betta, and that is before I press the shopping tab. So can we initially conclude that Micro$oft has a few issues running at present? Oh, and the Google edition gives me models from $345 to $629; Bing prices were $70 for the portable one and the rest were $499-$1499.

This is not about how good one or the other is; this is about how valid the EU regulator’s findings were, and so far I have several questions in that regard. Now, I will be the last one keeping governments from getting large corporations to pay taxation, yet that part is set in the tax laws, not in EU antitrust. As with the searches mentioned before, I wonder whether the EU regulators are facilitating for players who seem more and more clueless in a field of technology that is passing them by on the left and the right side of the highway called the ‘Internet of Things’.

From my point of view Google is doing just fine!

The EU regulator? Well, we have several questions for that EU department.


Filed under IT, Media, Science

Two sides of fruit

There are always issues when you get to the topic of fruits. One is the question whether it applies to the members of the US Congress (the members of the US Senate are usually labelled as nuts). Is it an issue with actual nutritional products, or are we talking about the device that Newton used for gravity? Yes, it is the third one, as Newton discovered gravity with an apple.

Yet even here we see two sides at present. The first one is seen with ‘iMac Pro: Apple launches powerful new desktop – starting at $4,999‘ (at https://www.theguardian.com/technology/2017/jun/05/imac-pro-apple-launches-powerful-new-desktop-macbook-starting-at-4999). Here we see the quote “The new iMac Pro starts with an 8-core Intel Xeon processor, but can be configured with an 18-core processor variant, as well as up to 128GB of ECC RAM, 4TB of SSD storage and Radeon Vega discrete graphics cards with up to 16GB of memory“. You see, Apple, like Microsoft, IBM and more recently ASUS, has become an agent of iterations; true innovation has not been on their shores for too long a time, which is why my new device is for consideration with Huawei and Google alone. Only they have shown the continued race for actual innovation. There is also Samsung, but as I had a legal issue with them in 1991, I took them off the consideration list; I can hold a grudge like only the Olympian gods can. Still, in their defence, the question becomes how you can make a computer truly innovative. It is a question that is not easily answered. There are a few options, yet some of the technology required is still in its infancy here.

In addition, in similar ways, iWork has been unable to grow due to the restrictions (read: limitations) that the suite offers. Instead of trying to persuade the Microsoft Office users (which is not a bad path), iWork has not grown in the directions it could have, and they are now paying for it through reduced exposure. Still, there remains a valid opposition to my accusation of ‘have become agents of iterations’. To see this, we cannot just state that there is a new iMac and as such they are merely iterating. There is in addition the issue of hardware versus software. So in my view, a true innovation would have been a Wi-Fi upgrade: not just a faster system, but a system that is keyed to the home and mobile devices. As we are now a little over a year from the first steps of 5G, and as we are all more and more connected via different devices, Apple left out in the open a huge sales opportunity by not offering the option of having devices linked and interlocked. A missed opportunity. You see, as bandwidth becomes more and more of an issue, and as we tend to have a home bandwidth that is 100 times larger, there is the option of an auto-upgrade manager on your desktop device (iMac). So when you come home, apps and content will be distributed to the devices you want them placed on. So at home, ‘without even thinking’ (sorry Microsoft, for using your Windows 95 slogan), the devices will do what needs to be done and you need not mind. You see, as people are trying to push blockchain into every financial corner, those people forget how blockchains can also be the foundation for users on multiple devices. Now that is not always needed, because we get mail in the cloud, data in the cloud and via the cloud, but that is not for everyone. In addition, people forget about the photos they took, and they do not always want those in some cloud. There are legions of options here, but at times we want some of this offline.
Finally, as we do specific tasks (for example on a train), we prefer not to lose too much bandwidth; tablet and mobile bandwidth can be expensive. In equal measure we tend to forget how large some files are, and as such we could rush through our bandwidth in no time. This is just one of two options, and we have seen very little development in that regard. Apple might want to let others develop it first, but that also leaves them with less when they need that additional step forward. It was a mistake Microsoft hid behind for the better part of two decades. In that same approach we see how consultancy and project software could benefit a different side in their designs. Now, that is not for Apple to side with, but it could have been an opportunity to grow in new directions. Anyway, this is not about starting a fight of 3rd party vs others; this is about iteration vs innovation, and Apple has been reluctantly innovative.
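The home-sync idea above can be sketched in a speculative way: the desktop keeps a manifest of content hashes per device and only pushes what a device does not already hold, so mobile bandwidth is never spent on it. Every name here is illustrative, not any real Apple or Google API:

```python
# Speculative sketch: hash-based sync planning for home devices.
# The desktop computes a digest per file; a device's manifest is the
# set of digests it already holds, so only missing content is pushed.
import hashlib

def file_digest(data: bytes) -> str:
    """Content hash identifying a file regardless of its name."""
    return hashlib.sha256(data).hexdigest()

def plan_sync(local_files: dict, device_manifest: set) -> list:
    """Return the file names that still need to go to the device."""
    return [name for name, data in local_files.items()
            if file_digest(data) not in device_manifest]

# Hypothetical photo library and a tablet that already holds one photo.
photos = {"holiday.jpg": b"...raw bytes...", "cat.jpg": b"...more bytes..."}
already_on_tablet = {file_digest(photos["cat.jpg"])}
print(plan_sync(photos, already_on_tablet))  # only 'holiday.jpg' is pushed
```

Chaining such digests per device is where the blockchain comparison in the text comes from: each device's manifest is a verifiable record of what it received, without any cloud copy being required.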

This gets us to the other side of it, and here I am not siding with Apple, but I am wondering if Apple has been treated correctly. This we see in ‘Apple ‘error 53’ sting operation caught staff misleading customers, court documents allege‘ (at https://www.theguardian.com/technology/2017/jun/05/apple-error-53-sting-operation-caught-staff-misleading-customers-court-documents-allege). Now first let’s take a look at the error 53 part. The issue is that “‘Error 53’ is a message that occurred after updating to iOS 9.0 on iPhones of people who had had their TouchID fingerprint sensor replaced by a repair shop not licensed by Apple. The phones were rendered useless because the operating system update detected a mismatch between the sensor and the phone, and locked the device, assuming unauthorised access was being attempted“.
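Reduced to its logic, the behaviour described above is a pairing check: on update, the OS compares the sensor identity stored at manufacture with the one the hardware reports, and a mismatch locks the device. This is a hypothetical reconstruction for illustration, not Apple's actual code:

```python
# Hypothetical reconstruction of the 'error 53' pairing check.
# A mismatch between the paired and reported sensor IDs is treated
# as possible tampering and locks the device.
def boot_check(paired_sensor_id: str, reported_sensor_id: str) -> str:
    if paired_sensor_id != reported_sensor_id:
        return "error 53: device locked (possible tampering)"
    return "boot ok"

print(boot_check("sensor-A1", "sensor-A1"))  # original sensor: boots normally
print(boot_check("sensor-A1", "sensor-B7"))  # third-party replacement: locked
```

Seen this way, the lock is a security decision about an unverified fingerprint sensor, which is exactly why the warranty question below hinges on who performed the replacement.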

Now here we see two sides.

On the first side we see “Knives damaged by misuse, improper maintenance, self-repair, or tampering are not covered“; this is something Buck Knives has in play. By the way, this comes with a lifetime warranty, so that remains awesome. In addition, for decades TV warranties were voided if unauthorised repairs were made (or repairs by an unqualified repairman). With laptops there was Compaq, who would void any warranty if a non-Compaq technician had worked on the machine. They even created special Compaq screwdrivers to keep a handle on it all. So when we see ‘replaced by a repair shop not licensed by Apple‘, I am not certain the ACCC has a case; they have not acted against Philips, Sony and a few others for the longest of times.

So when I read: “accuses Apple of wrongly telling customers they were not entitled to free replacements or repair if they had taken their devices to an unauthorised third-party repairer”, I remain in doubt whether they have a case.

So when we see “Australian consumer law clearly protects the right of a customer to a replacement or free repair if the product is faulty or of unacceptable quality“, which I agree with, yet the owner did not go to Apple, did they? I have had my own issue with Apple in this regard (a different device), yet can we agree that when we read: “It is however important to note that if a non-genuine part is fitted to your Toyota and that part’s failure or the incorrect fitment damages your vehicle, then that damage may not be covered by your Toyota Warranty“, how can something that applies and is valid for Toyota not be valid for Apple?

I believe that the ACCC acted with another agenda. The need to protect warranty by having repairs done by authorised service people has been central to repairs for decades. In addition, when we look at the facts, why would ANYONE go to a third party for warranty repair? That is just insane. So when we read “wrongly telling customers they were not entitled to free replacements or repair if they had taken their devices to an unauthorised third-party repairer“, I am actually wondering how they could come to the conclusion ‘wrongly‘. You see, when we read: “Australian consumer law clearly protects the right of a customer to a replacement or free repair if the product is faulty or of unacceptable quality”, we now wonder how true that is. You see, warranty is either valid (Apple fixes it for free), or it is beyond the warranty term and you have to pay for it; then it is no longer done for free, so you might select a third party. Yet if this is not an Apple-authorised dealer, don’t you have anyone but yourself to blame?

So this is the other side of the apple, what constitutes voided warranty.

You see, if Apple loses this part, I can start repairing Raytheon’s Griffin systems. You see, the upgrade (from C to C-ER) and equipment alignment costs are roughly $15,000 per day (excluding parts) if you do not have the proper Service Level Agreement. I can offer to do it for $5,000 a day. So if my work is shoddy (which they will not know until they fire the device; I can be very innovative towards my income), can they apply for warranty at Raytheon, or have they voided their options? You see, I will have an NDA with a ‘this repair has been completed to our highest corporate standards’ clause, so I am in the clear, and the way the world goes, with 225 upgrades I will have a decent Christmas this year. Yet at that point the ACCC will not go after Raytheon, it will go after me (what wusses). So how come the rights of Raytheon are better than those of Apple?

It seems that people assume so much with their mobile devices nowadays that I need to wonder if people comprehend what they buy and what responsibilities come with it. In this, the initial question ‘Why did you not take your device to Apple?‘ is one that is not addressed at present, and as such I have little faith that the ACCC has a decent case at present (in the shape we saw presented today).

The second and first parts interact, as the upcoming shifts will in equal part see new frontiers in Service Level Agreements, customer responsibility and the comprehension of the elements covered in a warranty, because what is included is likely to shift a fair bit over the next 2 years. In addition, innovation is also a shifting concept. Whilst it was “a new idea, device or method”, we (read: the corporate marketing departments) have often seen it as ‘the application of a solution that allows us to meet the new or altered requirement of the customer‘, which is what we get when we iterate with a more powerful processor, more storage, or a larger screen. So going from 1080i to 5K screens might be accepted as truly innovative, because that took another level of screen and electronics. Yet at times the pass-through of merely upgraded speeds is also seen as innovation, and at what level is that? When the device remains largely the same, is that not merely iteration?

So here we see the two sides of the other Apple. What we see, what the maker offers and how we both interpret the presented term of innovation.

 


Filed under IT, Law, Media, Military