Tag Archives: IBM Statistics

One bowl of speculation please

Yup, we all do it, we all like to taste from the bowl of speculation. I am no different; in my case that bowl can be as yummy as a leek and potato soup, on other days it is like a thick soup of peas and potato with beef sausages. It tends to depend on the side of the speculation (science, engineering or Business Intelligence); today it is Business Intelligence, which tends to be a deep tomato soup with croutons, almost like a thick minestra di pomodoro. I saw two articles today. The first one (at https://www.bbc.co.uk/news/technology-64917397) comes from the BBC and gives us ‘Meta exploring plans for Twitter rival’, no matter that we are given “It could rival both Twitter and its decentralised competitor, Mastodon. A spokesperson told the BBC: “We’re exploring a standalone decentralised social network for sharing text updates. “We believe there’s an opportunity for a separate space where creators and public figures can share timely updates about their interests.”” Whatever they are spinning here, make no mistake. This is about DATA, this is about AGGREGATION and about linking people, links that Twitter too often has and LinkedIn and Facebook do not. A stage where the people need clustering to see how two profiles can be linked with minimum connectivity. It is what SPSS used to call PLANCARDS (conjoint module). By keeping the links as simple as possible, their deeper machine learning will learn a new stage of connectivity. That is my speculated view. You see, this is the age where those without exceptional deep machine learning need new models to catch up with players like Google and Amazon, so the larger speculation is that somehow Microsoft is involved, but I tell you now that this speculation is based on very thin and very slippery ice; it merely makes sense that these two will find some kind of partnership. The speculation is not based on pure logic, if that were true Microsoft would not be a factor at all.

But the second article (from a less reliable source) gives us (at https://newsroomodisha.com/meta-to-begin-laying-off-another-11k-employees-in-multiple-waves-next-week/) that they are investigating a new technology all whilst shedding 11% of their workforce. A workforce that is already strained to say the least, and this new project will not rely on a dozen people; that project will involve a lot more people, especially if my PLANCARDS speculation is correct. That being said, if Microsoft is indeed a factor, the double stump might make more sense, hence the larger speculative side. Even as the second source gives us ““We’re continuing to look across the company, across both Family of Apps and Reality Labs, and really evaluate whether we are deploying our resources toward the highest leverage opportunities,” Meta Chief Financial Officer Susan Li said at a Morgan Stanley conference on Thursday. “This is going to result in us making some tough decisions to wind down projects in some places, to shift resources away from some teams,” Li added.” Now when we consider the words of Susan Li, the combination does not make too much sense. The chance of shedding the wrong people would give the game away; yes, Twitter is in a bind, but it will go full steam ahead in this case and they will find their own solutions (not sure where they will look), a stage that is coming, and the two messages make very little sense. Another side might be that Meta is shedding jobs to desperately reduce cost, which is possible. I cannot tell at present, their CFO is not handing me their books for some weird reason.

Still, the speculation is real as the setting seems unnatural, but in IT that is nothing new, we have seen enough examples of that. So, enjoy your Saturday and feel free to speculate yourself, we all need that at times to give our own egos some TLC.



Filed under Finance, IT, Science

Chook chook thinking

Why? Because train of thought reads too boring, that's why! So this all happened, or better stated, started happening a few hours ago. Someone stated that IBM Z Mainframes are in 96% of all mainframe places. Now, I have no problem with this, I moved out of mainframes 30 years ago, and I still respect what these things can do (they are just too big for my desk). Yet in this, my first question was: what do the other 4% use? A simple question. I got all kinds of answers, yet none of them answered my question ‘What do the other 4% use?’ In this it does not matter whether it is known, but it is essential to look at.

Why?
Well, in this IBM has a luxury problem, they basically own 96% of that market, but the 4% can become 8%, then 16%, and at that point the message from IBM becomes 4 out of 5 use our mainframe. When the 96% is based on 120,000 mainframes it is one thing, when it is based on 960 mainframes it is a whole different story. The numbers matter, that has always been the case (even if Microsoft is in denial now that they are shedding market share).
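The arithmetic is simple enough to sketch. A minimal back-of-the-envelope illustration, using the purely illustrative counts from the paragraph above (120,000 and 960 are examples, not IBM figures):

```python
# Same 96% headline, very different stories depending on the installed base,
# and on what happens when the remaining share doubles a couple of times.
for installed_base in (120_000, 960):
    ibm = round(installed_base * 0.96)
    print(f"base {installed_base:>7}: IBM {ibm:>6}, others {installed_base - ibm:>5}")

others = 0.04
for step in range(3):
    ibm_share = 1 - others
    print(f"others at {others:.0%}: IBM at {ibm_share:.0%} "
          f"(roughly {round(ibm_share * 5)} out of 5 use our mainframe)")
    others *= 2
```

Two doublings later the "96% use our product" line already softens to "4 out of 5", which is exactly why the question about the 4% matters.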

Reasons
There can be a simple reason. Take epidemiology: if it is about real-time numbers, the market is slim, massively slim; compared to that market a size-zero model is a mere chunky blobernaut. Cray is one of the few players in that setting and it makes sense that a Cray is there where an IBM optionally is not. Still, I would want to know.

You see, in strategic thinking there are two elements we ALWAYS need to keep one eye on. One is threats, the other is weaknesses. In this example real-time data management is a weakness. Now we need to understand that this market is set in the billions, and for those who desperately need it that number is not an issue, yet for IBM investing that much for 4% is tactically not sound, not until that market share is a lot larger. That makes perfect sense and let’s face it, no one owns 100% of a market; if that ever happens we will have a lot more problems than we could possibly understand.

Why do I care?
Well, for the most I do not, but at present I am not too involved with any SWOT analyses, and the ones I did lately were done for wannabe managers who seemingly only understand bullet-point memos. The idea of a strengths, weaknesses, opportunities, and threats (SWOT) analysis related to business competition, project planning and capability planning is more important than most people realise. We see it in intelligence, business intelligence and market intelligence. And now we see two new real markets emerging where it is important too: gaming and SAAS/GAAS. Even as GAAS is still some time away, the need to actively SWOT in all three is there and I believe the players are not too finicky about that, and they need to be. As the cloud is oversold and the dangers are underestimated, their boards of directors need to hold up a mirror where they can tell themselves that it doesn’t matter, and when we understand how completely those people are lying to themselves, at that point you might get the idea that there is a problem. The SWOT has more sides, it tests your capability and your software (strengths and opportunities), but that needs to be levelled by weaknesses and threats.
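To make the frame concrete, here is a minimal sketch of keeping a SWOT next to the rest of your data so it can be queried like any other record; the entries are illustrative only, not an actual analysis of IBM or anyone else:

```python
# A minimal, illustrative SWOT record; entries are examples, not a real analysis.
from dataclasses import dataclass, field

@dataclass
class Swot:
    strengths: list = field(default_factory=list)      # internal, positive
    weaknesses: list = field(default_factory=list)     # internal, negative
    opportunities: list = field(default_factory=list)  # external, positive
    threats: list = field(default_factory=list)        # external, negative

    def internal(self):
        return {"strengths": self.strengths, "weaknesses": self.weaknesses}

    def external(self):
        return {"opportunities": self.opportunities, "threats": self.threats}

mainframe_swot = Swot(
    strengths=["96% share of the installed base"],
    weaknesses=["no answer for real-time data management"],
    opportunities=["SAAS/GAAS demand still forming"],
    threats=["the 4% doubling to 8%, then 16%"],
)
print(mainframe_swot.internal())
print(mainframe_swot.external())
```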

800 years ago
To understand this we need to go back to the good old days (Genghis Khan). It was he who stated “It is not enough for me to win, my opponents must all fail”. Yes, I admit it is a massively loose translation but it applies to the now. When we stumble over sales people and their unnaturally large egos, we tend to listen because they make the loudest claims, yet are they valid? Consider SolarWinds and what they enabled criminals to do; when you consider the news last week when we were given ‘SolarWinds hackers stole US sanctions policy data, Microsoft confirms’, it was a weakness and a threat, so when we consider how long the hack was active and that we now see that policy data is online and open for anyone to look into, what other sides are not yet known? It is not enough for SAAS vendors to look at SWOT, their customers need to do the same thing. So when I considered the 4% it was not because I need to know everything (which at times is still nice, as a high executive CIA decision maker has a girlfriend that has size 6 lingerie, his wife is size 11), so who needed to do the SWOT, someone at the CIA or me? One could say both, as I am his threat and he is my opportunity.

The stage of what is and what could be remains forever in motion.

So where from here?
That remains open. For players like Amazon, the enabling of GAAS becomes more and more important, especially when you see the blunders that players like Ubisoft make; they need to be aware of where their customers are, especially when Netflix becomes active in gaming too. They will have an advantage, but Amazon can counter it, yet there are sides that remain unknown for now and they should not be (not on that level) and there is the rub. Too many rely on external solutions when that solution needs to be in-house. And we can dispense with all the marketing BS that some give, like “We are a better company now”; when you drop the ball to that degree there was a massive space for improvement and you are merely on par for not being where you should have been a year ago. An old IBM Statistics wisdom was “You’ll know when you measure”. This sounds corny but it is true, you cannot anticipate and adjust when there is no data and in all this any SWOT analysis would have been usable data. So where was the 4%? I do not know and the poster seemingly did not know either. That might be fair enough, yet when that 4% becomes 8%, when should you have known? It is a question with a subjective answer. Yet in gaming it is less so, especially as I am becoming aware (unproven at present) that Microsoft has one nice trick up their sleeve. There is partial evidence out there that Skyrim will be on PS5 in digital format only. Several shops now have a ‘DO NOT USE’ for any physical PS5 format of Skyrim. Now, there might be an easy answer for this after all these lockdowns, but it is only 4 weeks away now, so you tell me. Is Microsoft playing its ‘bully’ card? Are they trying to push people to Xbox? It is a fair approach, they did pay 8 billion and change for it, but consider that their actions are set on a larger stage. A stage of millions of angry fans. I solved it for them by creating public domain gaming ideas for any Sony exclusive RPG game. I am not Bethesda, I am a mere IP creator, but when software makers are given a free ride towards Sony exclusives and even one game hits the mark, the Bethesda market share dwindles to a lower number. Now consider what happens when that happens on Amazon Luna too? I might be a mere 1% factor, but if another one joins me I grow 100% whilst Microsoft dwindles more. For Microsoft, Amazon is becoming a real threat and a weakness; for Amazon, Netflix is optionally a threat and a weakness, whilst Google Stadia is optionally the opportunity for Amazon.

All SWOT settings that could have been seen from afar from the beginning. It is not everyone’s train of thought, yet in this day and age I think it needs to be; the markets and our lives are changing in all kinds of ways too quickly and too broadly, we need to think ahead, and having a clear grasp on how to apply SWOT in our lives might become essential.

The difference?
That is a much harder line to follow. It comes down to the word ‘Insight’ and it is a dangerous, a very dangerous word. Because depending on the person this can be insight, speculated insight, expected insight, or adjusted insight, and more than once they are all on one pile, making the data less reliable. Insight is also subjective, we all see it differently and that does not mean that I am right and everyone else has a wrong station. No, it is all subjective and most CAN be correct, but as the insight is disturbed by speculated, adjusted and expected versions, the numbers alter slightly. And now we see that 4% was not 4%, it was 7% and 5%: 5% because there were other IBM mainframes in play (adjusted), whilst 4% was the speculated number and 7% was the expected number. Now we have a very different station, the expected moves us from 96% use our product towards 9 out of 10 are our customers, which is now a mere step towards 4 out of 5 use IBM. So would you like to bring that conversation to any board of directors?
They’ll serve your balls for dinner.
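If you want to see how quickly those flavours of insight move the headline, a quick sketch using the paragraph's own illustrative percentages (this is one reading of those numbers, nothing more) shows the slide from "96% use our product" towards "9 out of 10":

```python
# Speculated, adjusted and expected figures taken from the paragraph above.
variants = {"speculated": 0.04, "adjusted": 0.05, "expected": 0.07}

for label, others in variants.items():
    ours = 1 - others
    print(f"{label:10s}: {ours:.0%} use our product "
          f"(about {round(ours * 10)} out of 10 are our customers)")
```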

Still feel certain that you do not want to know? In reality most SWOT analyses are seemingly pointless and often amazingly boring, yet in this day and age they are an essential part of business, and gaming at $130 billion a year is facing that side as well. So when you consider what I gave you, also consider the impact that some shops have ‘DO NOT USE’ for Skyrim preorders, 4 weeks before release; lockdown or not, it begs all kinds of questions. And to be fair, there could be a simple explanation for all of it, but that too is the consequence of trying to create hype via YouTube without clearly informing the audience. It is a weakness Microsoft has shown a few times (Bethesda was never completely innocent, but equally never this guilty).

So what has a game in common with a business setting? It is simple, they both need to manage expectations and that too is a side of SWOT; even as marketing often merely focusses on opportunity, there is a weakness and a threat. The lack of clarity and misinformation are both a weakness (angry customers) and a threat (churning customers), and in the world of gaming the churners are the real danger, they can get the flocking population of angry gamers to come with them and really make numbers spiral downward. In this day and age SWOT is an additional essential way to go, in nearly all walks of life. We simply cannot afford to be that naive anymore, not with spiralling energy prices and more and more articles that can at present no longer be found in any supermarket, all whilst plenty of people are in a holding pattern for their incomes.

It is a train of thought and it is up to you to decide if you want to do it or not, because that was always your right, the right to ignore, but it must be said that it will be at your own peril. 


Filed under Gaming, IT, Science

Iterating towards disaster

Yes, that happens, we all consider it, but did anyone think it through? You see, innovation is essential in staying ahead; iteration tends to give you a 2-year advantage, innovation gives you a 5-7 year leap. That is not new, it has been a ‘fact’ of life for 3-4 decades. Yet that premise is about to change, it will change a lot and it will change towards the bad side of the pool. To see this we need a few items, the first is an article, an article that the Guardian gave us with ‘I’m sorry Dave I’m afraid I invented that: Australian court finds AI systems can be recognised under patent law’ (at https://www.theguardian.com/technology/2021/jul/30/im-sorry-dave-im-afraid-i-invented-that-australian-court-finds-ai-systems-can-be-recognised-under-patent-law). You see, there is a danger here, even as the Guardian gives us “Allowing machine inventors could have numerous consequences, both foreseeable and unforeseeable. Allowing patents for inventions churned out by tireless machines with virtually unlimited capacity, without the further exercise of any human ingenuity, judgment, or intellectual effort, may simply incentivise large corporations to build ‘patent thicket generators’ that could only serve to stifle, rather than encourage, innovation overall.” This we get in the article from Australian patent attorney Dr Mark Summerfield, and he is right; you see, there is a larger danger here. It is not merely that only a few companies can AFFORD such an AI, the larger stage is that if we combine this and we add a little statistics to the pile, we get a new setting.

SPSS (now IBM Statistics) has something called conjoint analysis. To understand this, we need to take a look at the manual. There we see:

Conjoint analysis presents choice alternatives between products defined by sets of attributes. This is illustrated by the following choice: would you prefer a flight that is cramped, costs $225, and has one layover, or a flight that is spacious, costs $800, and is direct? If comfort, price, and duration are the relevant attributes, there are potentially eight products:

Product Comfort Price Duration
1 cramped $225 2 hours
2 cramped $225 5 hours
3 cramped $800 2 hours
4 cramped $800 5 hours
5 spacious $225 2 hours
6 spacious $225 5 hours
7 spacious $800 2 hours
8 spacious $800 5 hours

Given the above alternatives, product 4 is probably the least preferred, while product 5 is probably the most preferred. The preferences of respondents for the other product offerings are implicitly determined by what is important to the respondent. Using conjoint analysis, you can determine both the relative importance of each attribute as well as which levels of each attribute are most preferred.
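To give a rough idea of the mechanics outside of SPSS: dummy-code the attributes and regress a respondent's preference scores on them. This is a minimal ordinary-least-squares sketch, not the actual SPSS Conjoint procedure, and the ratings are invented for illustration:

```python
# Part-worth estimation for the flight example via OLS (illustrative only).
import numpy as np

# Dummy coding: comfort (1 = spacious), price (1 = $225), duration (1 = 2 hours)
profiles = np.array([
    [0, 1, 1],  # 1: cramped,  $225, 2 hours
    [0, 1, 0],  # 2: cramped,  $225, 5 hours
    [0, 0, 1],  # 3: cramped,  $800, 2 hours
    [0, 0, 0],  # 4: cramped,  $800, 5 hours
    [1, 1, 1],  # 5: spacious, $225, 2 hours
    [1, 1, 0],  # 6: spacious, $225, 5 hours
    [1, 0, 1],  # 7: spacious, $800, 2 hours
    [1, 0, 0],  # 8: spacious, $800, 5 hours
])
ratings = np.array([6, 4, 3, 1, 9, 7, 6, 4])  # hypothetical 1-10 preferences of one respondent

X = np.column_stack([np.ones(len(profiles)), profiles])   # intercept + attributes
coefs, *_ = np.linalg.lstsq(X, ratings, rcond=None)
for name, w in zip(["baseline", "comfort (spacious)", "price ($225)", "duration (2 hours)"], coefs):
    print(f"{name:20s} part-worth: {w:+.2f}")
# The larger the absolute part-worth, the more that attribute drives preference.
```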

This is all statistical science and it works, but the application can be changed. If data is the only premise here, we see the application in another way. What if the AI is taught the categories that enable a unique stage to own ANY patent field? Consider that this is not about a flight, what if this is about a processor?

Product Speed Processor Sampling
1 X Sycamore Boson
2 X Sycamore Instantaneous Quantum Polynomial
3 X Tangle Boson
4 X Tangle Instantaneous Quantum Polynomial
5 Y Sycamore Boson
6 Y Sycamore Instantaneous Quantum Polynomial
7 Y Tangle Boson
8 Y Tangle Instantaneous Quantum Polynomial

I am merely making a fictitious sample with existing names, but what if the math of conjoint is tweaked to cover the quantum field to a larger degree? A computer can do this faster than any person and it can even start drafting the documents, so the AI can create a set of patents that covers the entire field, with a setting where fewer than 20 patents will stop commercial competitors from getting traction in this field. This is not merely speculation, I feel that this is where we are heading, and then the big tech companies will own it all and the AI’s will have the entire patent field. Yes, there will be holes in the beginning, but as AI patent filings outpace normal filings, the patent field will end up being owned by Google, IBM and Amazon. I have nothing against any of these three, but this is not what I (or anyone else) signed up for. I might just put all my 5G IP online making it all public domain, just to temporarily deflate the AI premise.
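The enumeration itself is trivial for a machine. A minimal sketch, using the fictitious attribute names from the table above (none of these correspond to real filings):

```python
# Full-factorial enumeration of a made-up patent "attribute space".
from itertools import product

attributes = {
    "speed":     ["X", "Y"],
    "processor": ["Sycamore", "Tangle"],
    "sampling":  ["Boson", "Instantaneous Quantum Polynomial"],
}

combinations = list(product(*attributes.values()))
print(f"{len(combinations)} combinations cover the whole grid:")
for i, combo in enumerate(combinations, start=1):
    print(i, dict(zip(attributes, combo)))
# Add one more attribute with three levels and the grid grows to 24 cells;
# a machine can draft a claim per cell far faster than any human team.
```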

And personally, there is no way that any of the three has not considered this application, making the AI patent field a lot more debatable, and I reckon that the larger law field is looking into that. In 2012 a total of 1,892 filings were made; now consider that an AI could cover a larger field with a mere 300 filings. That is not out of the realm of consideration, as such the Australian case we see in the Guardian could well end up with all kinds of nasty surprises if the stage of “The decision by the Australian deputy commissioner of patents in February this year found that although “inventor” was not defined in the Patents Act when it was written in 1991 it would have been understood to mean natural persons – with machines being tools that could be used by inventors” is not overturned. Will it? I cannot tell, but it opens a whole range of doors and some of them will end up being rather nasty.


Filed under IT, Law, Politics, Science

Gaming on a serious level

Yup, one sees a game, the other sees an application and the third sees a solution; that is how it is, how it, for the most part, has always been. I got introduced to Palantir in 1998 or 1999, I got access and took a look at it. At the time I was working for other parties and I noticed that Palantir government had a setup that was nice; it was not what we now call IBM Miner, but it had potential. So when I got introduced to the news giving me ‘Secret and unprofitable Palantir goes public’ I took notice. You see, I started to wonder what was happening; the quote “Seventeen years after it was born with the help of the CIA seed money, data-mining outfit Palantir Technologies is finally going public in the biggest Wall Street tech offering since last year’s debut of Slack and Uber”, and it gets to be a little worse when we consider “Never profitable and dogged by ethical objections for assisting in the Trump administration’s deportation crackdown, Palantir has forged ahead with a direct listing of its stock, which is set to begin trading on Wednesday”. You see, the setting is not great for Palantir and as I see it, over 17 years they made their own bed; this is seen with “The company has just 125 customers in 150 countries”. Now, I can claim that I am not the brightest person (even though I passed the Mensa requirements), but the stage of 125 customers in 150 countries is not manageable. Even as they ‘hide’ behind “Our software is used to target terrorists and to keep soldiers safe”, you see, the software has a foundation and a base. Even as one foundation part is to hunt terrorists, the base is to analyse data. I can hunt terrorists with IBM Statistics, IBM Miner and mapping software; it might not be fast, but it will get me there (well, mostly anyway), so in the setting we see with Palantir, we see a larger failing, especially over 17 years. They had well over a decade to extend the base and create an additional foundation, optionally getting another 125 customers, yet that was not what they did, was it? So when we see “Palantir paints a dark picture of faltering government agencies and institutions in danger of collapse and ripe for rescue by a “central operating system” forged under Thiel’s auspices”, I merely see an excuse. You see, Palantir has no need or reason to rely on a station with ‘faltering government agencies’; by extending the base and creating another foundation they would not need to rely on that side, and they could add an optional third foundation called reporting. The need for dashboarding and sliceable presentations has been a larger requirement for close to a decade, and these options are required in the intelligence world as well; leaving it up to others means that the slice of business intelligence becomes smaller and less pronounced, and a place that relies on long-term vision has been lacking that a lot, has it not?

Even as Scott Galloway from New York University gives us “They’re massively unprofitable and they’ve never been able to figure it out”, the obvious question becomes: were they unfocussed, uncaring or just lazy? The vendor that relies on government jobs can’t rely on them for more than 2 years; if the program is not showing forward movement, there is no long-term justification, and when we see “Palantir has accumulated $3.8bn in losses, raised about $3bn and listed $200m in outstanding debt as of July 31”, we see the faltering position that Palantir is in. It cannot rely on the customer base it has, because well over a third has extended its credit card too much, as such they need to adapt to a form of Business Intelligence gathering, data mining, slicing and dashboarding and set a new stage in long-term reporting. As I see it, banks and financial institutions will have extended Business Intelligence needs and additional needs as well. If you think that financial fraud is big now, wait until banks automate under 5G, it will be a tidal wave 5-10 times the one the banks face now and they will need to have additional ways to find the transgressors; relying on the police will be a monumental waste of time, which is not the flaw of the police, it is the consequence of the times and their needs. I state financial institutions, because it is not merely the banks, it is the credit crunch seekers that will need to find the people with outlandish debts, and as the laws will adjust because the banks will no longer accept that the wife gets the house so that they can live in the luxury of what they could not afford, the game ends soon enough; the credit drive will force change and there would be a market for Palantir if they adjust. They need to adjust faster than they are ready for, but the current agenda does not allow sleeping at the helm. As I personally see it (on small and debatable data), Peter Thiel took too long and even as we are being told “winning a modest contract early in the COVID-19 pandemic for helping the White House gather data on the coronavirus’s impact”, I wonder how the data collection part was achieved; in light of all the places where no correct data gathering existed, the stage of the gathered data becomes debatable.

The article (at https://www.aljazeera.com/economy/2020/9/30/palantir-goes-public-in-biggest-wall-street-tech-offering-of-2020) has a lot more debatable parts, in all they are tracks that could have been highlighted by adding a few commercial data gatherers to the fold from day one. There is the other need for a setting of adjustment and weighing of origin data, all whilst all the data is scrutinised. I reckon that this would set a stage where the findings of Sarah Brayne would be considered in house and not after certain stages went live (or perhaps they were merely ignored). Her research into the Los Angeles Police Department’s use of Gotham “found the software could lead to a proliferation of unregulated personal data collected by police from commercial and law enforcement databases”. I will add to this the setting that the software was designed to find people employing tradecraft; they would be outliers on the entire board, a setting that raises questions on people who seek cheap solutions because of budget, or seek evasion because of divorce and outstanding bills; the acts are similar but not terrorist in nature.

OK, I admit, I do not know the exact setting in LA (other than that Lucifer is their consultant), but the setting of outlier data came to mind in the first 10 seconds, and the finding of Sarah Brayne and ‘proliferation of unregulated personal data’ supports that. Apart from the fact that unregulated data tends to be debatable and optionally in part or completely incorrect, data mining gives us the option to clean if the sources are known; unregulated personal data takes that out of the equation because the origin of the data (the person adding and manipulating data) is unknown and as such the data becomes unreliable.
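In data-mining terms, the cleaning step only works when provenance is known. A minimal, illustrative sketch (field names and sources invented) of quarantining records whose origin is not vetted before anything else touches them:

```python
# Records without a vetted source get quarantined before any mining step.
records = [
    {"id": 1, "source": "court_records",  "value": "..."},
    {"id": 2, "source": "unknown_upload", "value": "..."},
    {"id": 3, "source": "dmv_feed",       "value": "..."},
]
vetted_sources = {"court_records", "dmv_feed"}

usable     = [r for r in records if r["source"] in vetted_sources]
quarantine = [r for r in records if r["source"] not in vetted_sources]
print(f"{len(usable)} usable, {len(quarantine)} quarantined for review")
```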

That is a lesson that banks would have taught them quickly, if not them, then players like Equifax, because Palantir will end up in their fairway and the odds would not be even for Palantir. Yet Palantir needs to grow if they are to exist in a stage after tomorrow, of that there is no doubt; the US, UK and most EU nations cannot continue on the intelligence data foundations that they currently have. So as we see that, how many customers could Palantir lose? Growth is, as I see it, the only path that remains; banks are the most visible in needing more intelligence gathering, but they are not alone and Palantir needs to gird their loins.

 


Filed under Finance, IT, Military, Politics

Brotherhood of Heineken

As we stepwise push forward towards 5G, we think that it all stays the same, it will not. A few parts will change forever. Google has an enormous advantage, yet they too are now pushing for different changes, changes that they had not seen coming a mere year ago. In this case there is no direct link to my IP, so I am happy to give you all the inns and outs of that part (pun intended).

To start this we need to consider a few sides, all with their own premise. The first is the focal point:

4G: Wherever I am
5G: Whenever I want it

That first premise is a large one, it is not a simple localisation part, it is all about getting access at a moment’s notice, yet what we need access to changes with the push we face. The initial part is the creation and the impact of awareness. As we re-distinguish ‘awareness’, the metrics on awareness will also change and for the first year (at the very least) market research companies on a global stage will be chasing the facts. They have become so reliant on dashboarding; Tableau, Q-view and Q Research Software will all have to re-engineer aspects of their software as they fall short. Even the larger players like SAS and IBM Statistics will require an overhaul in this market space. They have been ‘hiding’ behind the respondent, responses and their metrics for too long; the entire matter of the respondent becoming the passive part in awareness is new to them, and that is all it is, it will be new to them, and the constructs that are behind the active and passive interactions will change the metrics, the view and the way we register things.

Google has the advantage, yet the stage for them will take a few turns too. Their initial revenue stream will change. Consider the amount of data we are passing now; that amount also links to the amount of ads we see. Now consider that everything in 5G is 10 times faster, yet 10 times more ads is not an option, so they now face revenue from 10% of the ads compared to what we see now. In addition, as we adjust our focus, the amounts we face imply that more advertisement space is optionally lost to the larger players like Google, and this too impacts the stats for all involved. Google will adjust and change, in what way I cannot tell yet, but the opposition is starting to become clear and in this example we see Heineken, a globally established brand which now has the option to take the lead in 5G awareness.

Introducing

Ladies and gentlemen, I am hereby introducing to you the Brotherhood of Heineken; in this fraternity / maternity, we invite all the lords and ladies of their household to become awareness creators towards their brand. In the Netherlands thousands are linked through a company like Havenstad and similar operations, and this stretches through Europe and all over the place, going global. These lords and ladies can earn points in the simplest of things; by setting a stage for Heineken to spread the message, we see that the initial power is with the consumer to support their brand. Awareness and clicks are converted to points and that leads to exclusive offers and rewards. Consider the unique stuff that Heineken has given to its professional public, now for all to get, to buy and to earn. Bags, coolers, clothing, accessories. For decades we saw the materials created and most of us were envious of anyone who had the part others did not; now we could all earn it, and because Heineken (Coca Cola too) has created such an arsenal, these players could take the lead in pushing their own awareness to new levels.

Now it is easy to say that Google is already doing this and that is partially true, but that equation will change under 5G and these really large brands could pay a fortune to Google or take the lead and create their own powerhouse, and in this day and age that powerhouse will become more and more an essential need. Anyone not looking and preparing for this will hand over opinion and choice to Google and watch how that goes; yet consider that some sources gave us a quarter ago: “Google will remain the largest digital ad seller in the world in 2019, accounting for 31.1% of worldwide ad spending, or $103.73 billion“, now consider that they need to grow 20% quarter on quarter and that in two years that metric has changed and as such the ads could cost up to 30% more, now do the math on how YOU will survive in that environment.
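For the "do the math" part, a quick compounding sketch; the 31.1% and $103.73 billion come from the quoted source, while the 20% quarter-on-quarter figure, the two-year horizon and the spend index are the paragraph's own premise, not hard data:

```python
# Compound the stated 20% q/q growth over two years and compare with a flat 30% rise.
spend_index = 100.0
quarterly_growth = 0.20
quarters = 8  # two years

compounded = spend_index * (1 + quarterly_growth) ** quarters
print(f"feeding 20% q/q growth for two years: index {spend_index:.0f} -> {compounded:.0f}")
print(f"a straight 30% price rise:            index {spend_index:.0f} -> {spend_index * 1.30:.0f}")
```

Whichever of the two you believe, the brand pays the difference, which is exactly the argument for building your own awareness engine.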

Samsung, Procter & Gamble, Coca Cola, Nike, Heineken, Sony and Microsoft will all face that premise and that is how it all changes. As we see that the metrics will have reduced reliability, the market research players will need time to adjust and in that lull a player like Heineken can create its own future and set its digital future in another direction to exceed their required expectations. This step seems short now, but as the stage alters it becomes an essential stage. Google may remain in denial and claim that this will never happen, but the data and metrics are already suggesting this path and that is where we are now; the option to be first or pay the invoice, what would you do?

I believe that the visibility starts to get a little focal just before the 2020 games, it is in full view before the 2022 Beijing Winter Olympics, and it is in full swing by the time the 2022 FIFA World Cup in Qatar starts. These two are close together and the people will pay through the nose for that visibility, especially the European parties in all this. I expect a more evolved 5G advertising stage via apps as well; seeing ads to unlock premium views and data is likely to happen, all this is coming to us and our view of advertisement will alter to a larger extent. We will be told that this will never happen, it is not how they work, yet they are deceiving and lying to us. Consider the change in the last 25 years alone: in 1994 advertisement through printed media and TV was at an all-time high, and they all claimed it would remain that way; within 5 years that stage was already changing with online ads to some extent and the slowing of printed media, in addition the international channels would push into national advertisement. A mere 5 years after that (in 2004) it started to take off in earnest and would increase revenue by over 100% in the 4 years that followed. Between 2005 and 2017 that would push from $6 billion to $26 billion, do you really think that their words hold true? To keep that growth and their need for greed, the metrics and approach have to change; there is 0% chance that these players will accept a growth of data-based impact at a mere 10% of what it was in 4G, there is too much riding on this.

For the largest players there is an alternative and it will not take long for them to set the stage to this and start finding their own solution to keep awareness as high as possible. If you have to pay through the nose to keep awareness or create the environment to reward achieved awareness, what path would you choose?

Let’s not forget players like Heineken did not get to the top by merely offering a really good product, they offered a lot more, a view, an awareness that all embraced; Sony learned that lesson the hard way by losing with a superior product against the inferior competition (Betamax versus VHS). 5G will set a similar yet new battle ground and for the most the media is seemingly steering clear for now.

That is with the nice exception of Marketing Interactive, who gives us (at https://www.marketing-interactive.com/going-beyond-the-big-idea-creative-leads-on-5gs-impact-on-advertising/) “There is no denying that the rollout of 5G will change storytelling and the consumer journey“, which is a true and utterly correct view. They also give us: “We, as creatives, need to evolve from old habits, stop hiding behind “The Big Idea” and evolve our creative process and creative structures to be based on this new digital reality, to create content based on this new innovative context“, this is the view from Joao Flores, head of creative, dentsu X Singapore, and he is right. We also get “For agencies, the opportunity calls for unorthodox alliances to make sure our creativity is the beating heart of this quiet revolution“, which is true, but it ignores the alternative path where the largest players start getting this path in house, and in light of the two revelations, we see that during the last decades players like Heineken have been doing just that and that makes them ready to take on the 5G behemoth and push the others into second place or worse. There is a need to have expertise and many do not have it, but in that Heineken has been different for the longest time. It is most likely due to the unique view that people like Freddie Heineken had on their market and consumers. You merely have to realise that they were the first to embrace ‘Geniet, maar drink met mate‘ (enjoy, but drink in moderation), a slogan that came into play around 1990, as well as ‘Drink verantwoord. Geniet meer‘ (drink responsibly, enjoy it more). All pushes to set a better stage, and it is there that we see that a new push could be produced by players like Heineken.

We see so many more paths opening, but in all this the one overwhelming side is not what paths there are, but the stage of metrics that they all rely on; as such, having control over the expenses as well as the foundation to create a reliable stage for their metrics will be a first soon enough. Not merely ‘Who is your population?‘, it is the stage where the passive and active awareness can be differentiated; that too will push advertisements and the applied visibility through 5G apps and 5G advertising and how the funds are spent. That will be the question that impacts players like Google Ads in the next 24 months, because if they do not do that, their quarter-on-quarter growth will suddenly take a very different spin, and they are not the only ones affected.

 


Filed under IT, Media, Science

Deadlock removed

Forbes gave us news in several ways. It merely flared my nostrils for 0.337 seconds (roughly) and after that I saw opportunity knock. In all this Microsoft has been short-sighted for the longest of times and initially that case could be made in this instance too. Yet, I acknowledge that there is a business case to be made. The news on Forbes with the title ‘Why Microsoft ‘Confirmed’ Windows 7 New Monthly Charges‘ (at https://www.forbes.com/sites/gordonkelly/2018/09/15/microsoft-windows-7-monthly-charge-windows-10-free-upgrade-cost-2) gives us a few parts. First there is “Using Windows 7 was meant to be free, but shortly after announcing new monthly charges for Windows 10, Microsoft confirmed it would also be introducing monthly fees for Windows 7 and “the price will increase each year”. Understandably, there has been a lot of anger“. There is also “News of the monthly fees was quietly announced near the bottom of a September 6th Microsoft blog post called “Helping customers shift to a modern desktop”“, so it is done in the hush-hush style, quietly, like thieves in the night so to speak. In addition there is “Jared Spataro, Corporate Vice President for Office and Windows Marketing, explained: “Today we are announcing that we will offer paid Windows 7 Extended Security Updates (ESU) through January 2023. The Windows 7 ESU will be sold on a per-device basis and the price will increase each year.” No pricing details were revealed“. This is not meant for the home users, it is the professional and enterprise editions, which are meant for volumes and large businesses. So they now get a new setting. Leaving pricing in the middle, in the air and unspoken will only add stress to all kinds of places, but not to fret.

It is a good thing (perhaps not for Microsoft). You see, just like the ‘always online’ folly that Microsoft pushed for with the Xbox, we now see that in the home sphere a push for change will be made and that is a good thing. We all still have laptops and we all still have our Windows editions, but we forgot that we had been lulled to sleep for many years and it is time to wake up. This is a time for praise, glory, joy and all kinds of positive parts. You see, Google had the solution well over 5 years ago, and as we are pushed for change, we get to have a new place for it all.

Introducing Google Chromebook

You might have seen it, you might have ignored it, but in the face of it all, why did you not consider it? Now, off the bat, it is clear that if you have a specific program need, you might not have that option. In my case, I have no need for a lot of it on my laptop, yes on the desktop, but that is a different setting altogether.

So with a Chromebook, I get to directly work with Docs (Word), Sheets (Excel) and Slides (PowerPoint) and they read and export to the Microsoft formats (as well as PDF). There is Photos, Gmail, Contacts and Calendar, taking care of the Outlook part, even Keep (Notes), video calling and a host of other parts that Microsoft does not offer within the foundation of their Office range. More important, there is more than just the Google option. Asus has one with a card reader allowing you to keep your files on an SD card, and a battery that offers 7-10 hours, which, in light of the Surface Go that in one test merely gave 5 hours, is a lot better, and the Chromebook is there for $399, a lot cheaper as well. In this it was Engadget that labelled it: ‘It’s not perfect, but it’s very close.’

Asus has several models, some a little more expensive, but they come with added features. In the bare minimum version it does over 90% of whatever a student needs to do under normal conditions. It is a market that Microsoft could lose and in that setting lose a lot more than merely some users. These will be users looking for alternatives in the workplace, the optional setting for loss that Microsoft was unable to cope with; it will now be at the forefront of their settings. In my view the direct consequence of iterative thinking.

And in this it is not merely Asus in the race, HP has a competitive Chromebook at almost the same price; they do have a slightly larger option, 14″ (instead of 11.9″), for a mere $100 more, which also comes with a stronger battery, and there is also Acer. So the market is there. I get it, for many people, those with stronger database needs or accounting software needs, it is not an option and we need to recognise that too. Yet the fact that in a mobile environment I have had no need for anything Microsoft-specific, and that their Surface Go is twice the price of a Chromebook whilst not offering anything I would need, makes me rethink my entire Microsoft needs. In addition, I can get a much better performance out of my old laptop by switching to Linux, which has a whole range of software options. So whilst it has been my view that Microsoft merely pushed a technological arms race for the longest time, I merely ignored them as my Windows 7 did what it needed to do and did it well; getting bullied onto another path was never my thing, hence I am vacating to another user realm, a book with a heart of Chrome. So whilst we look at one vendor, we also see the added ‘Microsoft Office 365 Home 1 Year Subscription‘ at $128, so what happens after that year? Another $128, that whilst Google offers it for free? You do remember that students have really tight budgets, do you not? And after that, students, unless business-related changes happen, prefer a free solution as well. So whilst Microsoft is changing its premise, it seems to have found the setting of ‘free software’ offensive. You see, I get it when we never paid for it, but I bought almost every Office version since Office 95. For the longest times issues were not resolved and the amount of security patches still indicates that Windows NT version 4 was the best they ever got to. I get that security patches are needed, yet the fact that some users have gone through thousands of patches only to get charged extra now, feels more like treason than customer care and that is where they will lose the war and lose a lot.
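Put next to a typical degree, the subscription arithmetic is blunt; a tiny sketch, assuming (optimistically) that the quoted $128 per year does not rise, which Microsoft does not promise:

```python
# Licence cost over a typical degree, at the quoted yearly price versus free apps.
years_of_study = 4
office_365_per_year = 128   # quoted price, assumed flat for the sketch
google_apps_per_year = 0

print(f"Office 365 Home over {years_of_study} years: ${office_365_per_year * years_of_study}")
print(f"Google Docs/Sheets/Slides over {years_of_study} years: ${google_apps_per_year * years_of_study}")
```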

So when you see subscription, you also need to consider the dark side of Microsoft. You partially see that with: “If you choose to let your subscription expire, the Office software applications enter read-only mode, which means that you can view or print documents, but you can’t create new documents or edit existing documents.” Now we agree that they clearly stated ‘subscription’, yet they cannot give any assurances that it will still be $128 next year, it could be $199, or even $249. I do not know and they shall not tell, just like in Forbes, where we saw ‘News of the monthly fees was quietly announced‘.

When we dig deeper and see: ‘Predicting the success of premium Chromebooks‘, LapTopMag treats us to: “The million-dollar question is whether these new, more expensive Chrome OS laptops can find a foothold in a market dominated by Windows 10 and Mac OS devices. Analysts are bullish about Chromebook’s potential to make a dent in the laptop market share“, which was given to us yesterday. Yet in this, the missing element is that Windows will now come with subscriptions for some, and for more down the track, or lose the security of Windows; now that picture takes a larger leap and the more expensive Google Pixelbooks (much higher specs than the others mentioned) will suddenly become a very interesting option. One review stated of the Pixelbook: “the Pixelbook is an insanely overpowered machine. And, lest we forget, overpriced“, which might be true, yet the slightly lower Atlas Chromebook was $439. So yes, the big one might not be for all and let’s face it, a 4K screen is for some overkill. That’s like needing to watch homemade porn in an IMAX theatre. The true need for 4K is gaming and high-end photography/film editing, two elements that were never really for the Chromebook. At that point a powerful MacBook or MacBook Pro will be essential, setting you back $2900-$11400. So, loads of options and variations, at a price mind you. As I see it, the Microsoft market is now close to officially dissolving. There is a whole host of people that cannot live without it, and that is fine. I am officially still happy with my Windows 7, always have been. Yet when I see the future and my non-gaming life, Linux will be a great replacement and when being mobile a Chromebook will allow me to do what I need to do. It is only in spreadsheets that I will miss out a little at times, I acknowledge that too, but in all this there is no comparison with the subscription form and as it comes from my own pocket I see no issue with the full-on and complete switch to Google and its apps in the immediate future. I feel close to certain that my loss will be minimal at most. A path that not all will have, I see that too, but when thinking of the hundreds of thousands of students that are about to start university, they for the most part can make that switch with equal ease and there we see the first crux. It was the setting that Microsoft, in a position of strength, had for the longest time: enabling students so that they are ready for the workplace changes. They will now grow up with the Chromebooks being able to do what they need and they will transfer that to the workplace too. Giving us that the workplace will be scattered with Chromebooks and with all kinds of SaaS solutions that can connect to the Chromebook too. The Chromebook now becomes a terminal to server apps, enabling more and more users to move towards a cloud server software solution. As these solutions are deployed, more and more niche markets will move in, nibbling on the market share that Microsoft had, diminishing that once great company to a history, to being pushed beyond that towards being forgotten and at some point being a myth, one that is no longer in the game. It is also the first step for IBM to bank on that setting and push for the old mainframe settings, yet they will not call it a mainframe, they will call it the Watson cloud: performing, processing and storing, with data available on any Chromebook at the mere completion of a login.
It is not all there yet, but SPSS created their client-server edition a decade ago, so as the client becomes slimmer, the Chromebook could easily deal with it and become even more powerful; that is beside the optional dashboard evolutions in the SaaS market, and the same could be stated for IBM Cloud and databases. That is the one part that should be embraced by third-party designers. As SaaS grows, the need to look into Chromebook, Android and iOS solutions will grow exponentially. All this, with the most beautiful of starting signals ever given: ‘Windows 7 New Monthly Charges‘, the one step that Microsoft did not consider in any other direction, and with 5G growing in 2021-2023 that push will only increase. If only they had not stuffed up their mobile market to the degree they had (my personal view). I see Windows Mobile as a security risk, plain and simple. I could be wrong here, but there is too much chaff on Windows and I cannot see what the wheat is (or if there is any at all), and Microsoft has been in the ‘quietly announcing‘ stage often enough, which is not a good thing either.

Should you doubt my vision (always a valid consideration), consider that Veolia Environnement S.A. is already on this path. Announced less than two weeks ago we see “So we propose a global migration program to Chromebooks and we propose to give [our employees] a collaborative workplace. “We want to enable new, modern ways of working”“, linked to the article ‘Veolia to be ‘data centre-less’ within two years‘ (at https://www.itnews.com.au/news/veolia-to-be-data-centre-less-within-two-years-499453), merely the first of many to follow. As the SaaS for Chromebooks increases, they will end up with a powerful workforce, more secure data and a better management of resources. Add to this the Google ID-Key solution and the range of secure connections will go up by a lot, diminishing a whole host of security issues (or security patches for that matter). All options available now, and they have been for a few years. So when we see the Chromebook market push forward, we should thank Microsoft for enabling exponential growth; it is my personal belief that the absence of a monthly fee would have slowed that process considerably in a whole range of markets.

So thanks Microsoft! You alienated gamers for years, and now we see that you are repeating that same silly path with both starting students and businesses that are trying to grow.

I’ll ask Sundar Pichai to send you a fruit basket, it’s the least I can do (OK, the least I can do is nothing, but that seems so mean).

 


Filed under IT, Media, Science

Who’s Promptly Promoted?

The Guardian is giving us the news that Moody’s is downgrading WPP (at https://www.theguardian.com/business/2018/apr/17/moodys-downgrades-wpp-martin-sorrell-departure-ratings-agency-negative). It is a weird situation! You see, some do not like Sir Martin Sorrell (I personally never knew him), some like the man and some think he was a visionary. I think I would fall in the third category. There is no way that under normal situations the departure of a CEO, even a founder, would have had such a massive impact, and let’s be clear: when a departure sparks not just the downgrade of WPP, but also “WPP has hired a New York-based recruitment firm as it begins the global search to replace founder and chief executive“, his impact has been a hell of a lot larger than anyone is willing to admit. There are however other parts. When I see “In Moody’s view, the high-profile departure of Sir Martin Sorrell raises concerns over the future strategy and shape of the group, increases client-retention risk and could hence hinder WPP’s ability to meet its 2018 guidance“, I feel a strong desire to disagree. When we consider that within WPP are Millward Brown, TNS and IMRB, we need to acknowledge that WPP already had problems. You see, I was a partial witness to the laziness and stupidity, I saw how executives looked at presentations and were unwilling to listen; it was their right to do so, but in the end part of their market got screwed over. You see, SPSS was the big analytics program and it is still the Bentley for analysing data. Yet beyond the program the corporation faltered. It fell to meetings and presented concepts, yet no delivery. I still have the presentations: 1994, parallel processing, it never came to be. Yet the biggest bungle was seen in 1997, when SPSS acquired Danish software company In2itive Technologies Corp. They had close to perfect software. The interface was intuitive and flawless. I was so looking forward to teaching people this software and for a while I did. It was amazing to see dozens of people literally making a running start in their own designs in an hour; by the end of the day they did all kinds of things that most market researchers could not conceive. It was a jackpot acquisition. Yet SPSS had its own data entry solution called Data Entry and, apart from a few flaws it had regarding memory and larger data entry sheets, it worked really well, it was a workhorse, so internally we were so happy to hear that it had become a Windows program. The backlash was Titanic in proportions. It was hard to work with, the initial versions weren’t even stable, there were processing power issues, saving issues and a whole range of issues that were not solved, not even within the first year. It was all about the holy ‘Data Entry‘ and whilst the near-perfect In2itive was set to the side and the internal corporate marketing decided that Data Entry was a ‘Form Design Program‘, the audience was left without quality data entry. So as I (and others) pleaded for In2Form and its suite to be evolved and set towards the users, we were told it was merely a 16-bit program, and SPSS is 32-bit and larger only (mainframes excluded). So there I was, watching the mess evolve for well over 3 years, whilst the redesign of a 32-bit In2itive suite would have been done in 160 days (rough estimate); no, at SPSS they really knew what they were doing. So they decided to up the ante: there was going to be a server edition of Data Entry, the SPSS Data Entry Enterprise Server.
I saw how the confidence of users went down further and further. Yet the corporation did not sit still in all this and we got to see the Dimensions 2000 part; now that blew us away, we saw software on a whole new level and it was amazing. The two programs, mrPaper and mrInterview, were both truly steps forward, with options to format webpages using XML so that the web interview could flawlessly fit in any corporate website. We saw the good days come back and with mrPaper we saw paper interviews with options to link to Readsoft’s scan software, so that data entry was almost a thing of the past: scan the returned interviews and read the data with a scanner. It was not flawless, but it was really good to see a stage where government sites all over Europe could do quality interviews on many levels. Yet the program had issues, as any large program has, and there were more issues and they stacked up. Only then was I introduced to Surveycraft. It was an utter shock. Even as it was old, DOS-based and looking like the old Data Entry, Surveycraft was miles ahead of mrDimensions. It had working quotas, it had all kinds of options that were ahead of the Quancept software in the UK; it was a shock to find the old software a decade ahead and visionary. SPSS had acquired it, and after that the developers managed to get less than 60% of the functionality transferred. Even later, when I worked actively with it, I was finding features that the new software never had, or that it handled really badly. So when I tried to emphasise the need for new software to be made (as I was no longer part of SPSS), the need for better software was essential, especially in market research. They decided not to listen and to believe the SPSS executives that better versions were coming soon; they never came! The entire market research industry was lucky, because other players like Tableau and Q Research Software were just like me; they never trusted the SPSS executives and they now corner the market. In this the market research agencies that had the option to push forward decided to wait, basically cut their own fingers and lost on two fronts. With the 2008 crash the markets changed and they lost loads of customers who had to massively trim down, a mere effect of events. Yet Tableau and Q-Software were still at a small stage, yet their software was for a much larger audience, so not only did the market research industry lose customers, the two software programs allowed mid-sized and larger corporations to do it all themselves and that is what happened. Market research companies still get the larger projects, but they lost the smaller stuff, a group of revenue representing nearly 60% (a personal speculation), and as Tableau and Q-Software grow, the MR market is in more and more peril; that is where WPP, owning Millward Brown, TNS and IMRB, finds itself. It takes a visionary to not merely grow the market, but to spread the options of a market. That ship has now sailed and beyond less than a dozen former SPSS people I worked with, I have merely seen a lack of vision. Some of these market research agencies are now all about ‘telling a story‘, setting a presentation that can in most cases be done with SAP Dashboards and a karaoke system. In this the only part that is still tacky is that when we want to buy the SAP solution (approximately $500) we get to see “Please contact your local SAP account executive for more information on how to buy and implement SAP BusinessObjects Dashboards“; was adding a price that much of a reach?

So as we see the pressures on one branch, we need to see that the overlap is large; even as some are in different territories, we know that they are intertwined. Yet this market is also as incestuous as it gets. Lightspeed Research acquires part of Forrester (Forrester's Ultimate Consumer Panel business), Forrester is growing in different directions and they are all connected to some degree. There is every chance that the higher echelons will have worked in any combination of SPSS, Forrester, Lightspeed, SPSSmr and ConfirmIT; likely they have already worked at three of the five players. Yet the visionary growth has remained absent to a larger degree, and digital media is all about evolution and implementing new technologies and new solutions to drive consumer engagement, because the future here is consumer engagement; that alone will get you the data to work with and to set the needs of the industry.

That is the part SPSS as a company ignored, and now that we see the shifts, especially in WPP, we see that both Tableau and Q-Software have a massive opportunity to grow their market segment even further. The moment they or a third player comes with consumer engagement software, IBM will also feel the pinch. Even as it hides behind Watson and options like IBM Statistics (formerly SPSS) and IBM Miner (formerly Clementine, the SPSS data miner), they get to realise that these two programs also brought new business, as the consultants were able to see the needs of the larger customers. When that diminishes, IBM will feel the loss of a lack of visionaries in a very real way, a loss only accelerated by the impacts on WPP and all its subsidiaries. This last part is speculative, but supported with data. As we saw 'Paul Heath resigns as Ogilvy worldwide chief growth officer and non-executive director of AUNZ', we need to realise that the larger insightful players will be seeing more changes. Ogilvy & Mather might merely be the first one, but these people all realise that things will be different and market shares will change, not all in favour of WPP. We can see "Heath is resigning all his titles at WPP worldwide to return to Brazil to start a new streaming tech venture"; we can read this as a positive, 'he is going to try something new', or negatively, 'he knows who is on his level at WPP' and he has decided that he can grow a nice personal global market share by setting his view on the new player with a promising option for mucho growth. I believe that he is setting his view on becoming the larger player himself. This is good news as it optionally invigorates the market research market, which WPP desperately needs, yet WPP is a lot more than merely market research. It is digital advertising, a field that SPSS (read: IBM) ignored until it was too late. Yet when we see some of the services, 'Branding & identity, Consumer insights, Design, Digital marketing, Market research, Media planning and buying, Public relations, Relationship marketing', all valid groups, there is a lack of options for consumer engagement, and several of the other groups are options that many offer, some in niches, some only to midrange players, but effective due to expertise. That should have been a massive red flag and reason for alarm at WPP, yet not too much was seen there. In all, a situation that does not merely warrant the downgrade by Moody's; the fact that it was averted whilst Sir Martin Sorrell was there as CEO is actually a much larger issue than most identified.

So the problem is not merely who can replace him; who can alter the course of failed objectives will soon become a much larger issue for WPP, one which optionally pushes down the market value by a mere 5%, which, considering the 2017 revenue of £15.265 billion, becomes an interesting amount.
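To give that 'interesting amount' a rough number (a back-of-the-envelope sketch only; strictly the 5% would apply to market capitalisation rather than revenue, the revenue figure is simply the reference given above):

  revenue_2017 = 15.265e9              # £15.265 billion, the 2017 revenue figure quoted above
  five_percent = 0.05 * revenue_2017   # a 5% slice of that reference figure
  print(round(five_percent / 1e9, 3))  # -> 0.763, roughly £763 million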

 


Filed under Uncategorized

Choosing an inability

This all started last night when a link flashed before my eyes. It had the magical word 'NHS' in there, and that word works on me like a red cloth on a bull. I believe that there is a lot wrong there and even more that needs fixing, and it needs to be done. There is no disagreement from anyone. The way to do it, that is where the shoe starts to pinch. There are so many sides to fix, and the side to start with is not always a given. There will be agreement and disagreement, yet overall most paths, when leading to improvement, should be fine. There is however one almighty agreement. You see, the data analysis side of health care is not that high on the list. Most would agree that knowing certain stuff is nice, but when you have a primary shortage (nurses and doctors) the analyst does not rank that high in the equation. Although I am an analyst myself, I agree with that assessment of the NHS; my need is a lot lower than getting an extra nurse (at present). So when I see 'Another NHS crisis looms – an inability to analyse data' (at https://www.theguardian.com/science/political-science/2017/feb/08/another-nhs-crisis-looms-an-inability-to-analyse-data), I start wondering what is actually going on. The first issue that arises is the author. Beth Simone Noveck is, as the Guardian states, "the former United States Deputy Chief Technology Officer and Director, White House Open Government Initiative. A professor at New York University". You see, it is a given that Yanks always have an agenda. Is this about her book 'Smart Citizens, Smarter State: The Technologies of Expertise and the Future of Governing'? Just asking, because the by-line there is: "New tools—what Beth Simone Noveck calls technologies of expertise—are making it possible to match citizen expertise to the demand for it in government. She offers a vision of participatory democracy rooted not in voting or crowdsourcing but in people's knowledge and know-how", which seems to match the article. So, is this her sales pitch? You see, she must have missed the memo where the previous Labour government wasted £11.2 billion on something that never worked, and now, as the NHS has plenty of crisis moments, spending on something that limits the growth towards nurses and doctors is a really bad idea.

Then she sets the focus on the HQIP with: “The Healthcare Quality Improvement Partnership (HQIP) conducts forty annual audits comparing hospital and physician outcomes, and the implementation of National Institute of Clinical Excellence standards across England and Wales. But, as HQIP Director Dr Danny Keenan admits, although they have the expertise to do the analysis, “we are woefully inadequate at translating such analysis into improvements. What’s the takeaway for the hospital or community provider Board or the medical director? They cannot understand what they have to do.”“, from that I get that the existence of the HQIP is under discussion because they cannot communicate. This we see in: ‘They cannot understand what they have to do‘, which means that the hospital or community provider Boards or the medical directors are either incompetent or there is a communication issue. I am willing to ‘auto-set’ to: ‘the inability to communicate’. I admit that I would have to read those reports to get a better view, but it is clear that the HQIP has a few cogs missing, which is on them and not on the NHS as such. So if the NHS needs to cut further, that’s where the cutting can start.

Am I against the HQIP? No, of course not, but the NHS has actual problems, and when a place is running low on gauze and staff, putting more resources into communication gaps makes the priority pretty clear. I also accept that if this path is taken the restoration of the NHS will take longer, I get that, but I hope you can agree with me that once the ability to properly aid patients is restored, we can look at the next stage of fixing the NHS, because aiding patients needs to be the primary concern for all sides of the NHS.

A second element in the given sales pitch comes from Dr Geraldine Strathdee, where we see "National Mental Health Intelligence Network, together with partners, launched the Fingertips Mental Health data dashboard of common mental health conditions in every locality. Strathdee points out there is a tremendous need for such benchmarking data: to design services based on local need, build community assets, and improve NHS services". I have stated at a few conferences (mid-90s) that there is an inherent need to document and create clear paths of internal knowledge retention, which included healthcare, education and government departments. I literally stated "as you grow the knowhow with your own staff members, you will increase their value, they will be better motivated and you create a moment when you become less and less reliant on outside sources, which usually cost a fair amount". I have been proven correct in more than one way, and the gravy train that some people enjoyed by being aligned with consultants is now at an end; those people tend to have no allegiance other than the need to grow their bank account. Creating internal knowledge points has always been a primary need, and as this opportunity was wasted, we now see the plea of 'a tremendous need for such benchmarking data'. They should have listened to some of their IT people a long time ago. The second opposition is seen in "Without it, NHS resourcing is just based on historical allocations, guesswork or the "loudest voice"". This implies that there has been no proper data collection and reporting for well over 5 years, whilst a 10-year gap would sound a little more correct (an assumption from my side). When you look at the Netherlands, there is a long list of reports that psychiatrists and psychoanalysts need to adhere to and deliver to those paying for the services. That has been the case for the longest time. What happens afterwards? Are they not properly collated and reported? In the Netherlands they were, and I think they still are (not verified at present). Yet what happens in the UK? The Yank might not know, but I reckon that if the MPs ask these questions of Dr Geraldine Strathdee we will get proper responses on what is done now, how it is recorded, reported on and considered for continued improvement. If all of that is absent, who should we talk to? Who needs to give an accountable response?

At that point the doctor becomes a little confusing to me; perhaps that is just me, because when I read "The data dictates investment in early intervention psychosis teams, which dramatically improves outcomes. Fifty per cent of patients get back to education, training or employment. However, there is a shortage of people able to draw these insights", I just wonder what is set in the reports. It is confusing because psychosis is only one of many mental health issues in play. When someone is diagnosed as such, a treatment plan comes into focus, and data had no impact on that. The patient is either correctly treated or the patient is not. Data had no influence there; it is the carer's report that is submitted, and on that basis this person will either get the resources needed, or not. A report on how many are treated for psychosis is required, but as the reports are handed upwards, those numbers would be known, and as such the required needs in medications, staff, treatment plans and of course the required funds to pay for all this would be known. If not, the question becomes: is Professor Noveck there to aid in obscuring events, or should we consider that the National Mental Health Intelligence Network has become redundant and is draining funds needlessly? If you think that this is an exaggerated notion, consider that when we look for the 'National Mental Health Intelligence Network', we get the website (at http://mentalhealthpartnerships.com/network/mental-health-intelligence-network/), where the latest thing is a meeting from September 2013; in addition there is something from Professor Chris Cutts on STORM Skills Training, and that is May 2014. So I think that the National Mental Health Intelligence Network did get itself involved in a sales pitch, and a very poorly constructed one I might add. You see, when we go to Public Health England, we see that there are health intelligence networks, but the one they have is called the 'National Mental Health, Dementia and Neurology Intelligence Networks (NMHDNINs)'; perhaps an oversight from the two sales people? You see, the Mental Health, Dementia and Neurology path gives us all kinds of information (shallow information, I admit), but I wonder if that is wrong or just not the proper place to find it. In addition, when I look at 'Severe Mental Illness', I see some 2017 mentions (so it is up to date) with the Psychosis Care Pathway, where I see "The Psychosis Care Pathway provides a high level summary using 16 key indicators to help users assess and benchmark how they manage this important condition. This pathway is consistent with and linked to the Commissioning for Value Psychosis packs to be published by NHS England". This is an interesting part, isn't it? Does this mean that this is happening, not happening, or, more importantly, what on earth does Dr Geraldine Strathdee think she is doing? Perhaps it is an ill-conceived hostile takeover using an outsider who was published and has a name, whilst the minimum needed to be taken seriously is not even there (an up-to-date website, perhaps). This whilst the mention 'based at Public Health England' is an issue, as Public Health England (at https://www.gov.uk/government/organisations/public-health-england) has no mention at all of the 'National Mental Health Intelligence Network'; is that not odd? So what ill-conceived sales pitch are we reading in The Guardian?

Perhaps the quote 'The NHS needs data analytical talent, which comes from a variety of disciplines' gives us that. And as the NHS has no immediate need to hire analysts, lo and behold, the 'National Mental Health Intelligence Network' would come to the rescue and save the moment. Perhaps the first thing they should consider is hiring a web designer and making sure that the latest intel is not 2+ years old (cautious advice from my side). In addition, as it seems that the NHS is likely to be pushed into a 'we need analytics data' conversation (one it can go without at present), not taking the word of a professor and a doctor who dropped the ball might be a first notion to consider. Making a proper inventory of what data the NHS has and having a conversation (a non-invoiced conversation) with someone from Q Research Software is likely to be a hell of a lot more productive than talking to the two 'sales' people that the Guardian article touches on. I will be honest, I had a few issues with that program in the past (for specific reasons), but Q Software has never stopped improving, and it has grown to the extent that it is now chiselling away at the marginal groups IBM Statistics had, and they are now losing those customers to Q Research, which is quite the accomplishment. In that, I think it is Dr Danny Keenan who is likely to get the most out of such a meeting. From what the Guardian tells us, we get the implied understanding that he needs a solution to tell a better story. You see, translating statistical results into actions is done through stories. Not fabrications, mind you, but a story that helps the receiver understand which direction would be the best to take. The listener will get a few options, each with a plus and a minus side, and usually the one with the best traction tends to win. If that path includes successfully suppressing the negative elements even more, so much the better.

My main reason for opening this door is that there is enough low-level talent in the NHS, in several places, that might have the ability to do this on the side, a simple path that allows additional reporting whilst not needing to drain essential resources. I call them 'low-level' not because of anything negative. When working with proper analytics you need to have someone at your beck and call with a degree in applied mathematics; anyone claiming that this is not needed is usually lying to you. In the case of Q, a lot of the calculations have been auto-completed and the numbers reflected in the tables still need some level of statistics, but many with a tertiary business degree would have had exposure to a lot more stats than is needed here, so such a person would be low-level only in that regard. It is for all intents and purposes a reporting tool that goes a lot further than mere tabulation and significance levels. It could be the tool of choice for the NHS. Even when they start getting forward momentum, this tool would still be massively useful to them, and any change might be limited to getting a dedicated person for this goal, which with the current shortages all over the NHS is not that far a stretch anyway.
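To make that 'tabulation and significance levels' part concrete, here is a minimal sketch (assuming pandas and scipy are available; the data and column names are invented) of the kind of cross-tabulation with a significance test such a dedicated person would be producing:

  import pandas as pd
  from scipy.stats import chi2_contingency

  # invented extract: ward type against whether a follow-up visit was needed
  df = pd.DataFrame({
      "ward":      ["A", "A", "B", "B", "A", "B", "A", "B"],
      "follow_up": ["yes", "no", "yes", "yes", "no", "yes", "no", "no"],
  })

  table = pd.crosstab(df["ward"], df["follow_up"])   # the tabulation part
  chi2, p, dof, expected = chi2_contingency(table)   # the significance part
  print(table)
  print(f"chi-square = {chi2:.2f}, p = {p:.3f}")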

So as we realise what one program can do, we see the questionable approach that the sales person named Beth Noveck is making. The mention "the NHS should expand efforts already underway to construct an NHS Data Lab", "Improving public institutions with data also requires strong communications, design and visualisation skills. Digital designers are needed who know how to turn raw data into dashboards and other feedback mechanisms, to support managers’ decisions" and "So the NHS needs to be able to tap into a wide range of data analytic know-how, from computer scientists, statisticians, economists, ethicists and social scientists. It is impractical and expensive to meet all of these needs through more hiring. But there are other ways that the NHS can match its demand for data expertise to the supply of knowledgeable talent both within and outside the organization".

Three distinct statements which are not false, yet the first one is currently not feasible with the shortages that the NHS has; the second one was debunked by me in merely five minutes as I introduced Q Research Software to you, the reader. Anyone stating that this is not the best solution has a case, but in the shortage world the NHS lives in, with the cost of Q-Software against 93% of all other software solutions, it is the best value for money the NHS could ever lay their fingers on. And the third one is even more worrying, because that expensive track of consultants is one of the elements that partially account for the £11.2 billion loss that the NHS already suffered. Should the esteemed professor come up with 'additional considerations', the NHS should become really scared, because there is a growing concern that some people want to get their fingers on the NHS data, the one treasure the bulk of ALL American healthcare insurers and providers want, because that is one data warehouse they have never been able to properly build.

She ends the article with "Whether the NHS wants to know how to spot the most high-risk patients or where to allocate beds during a particularly cold winter, it can use online networks to find the talent hiding in plain sight, inside and outside the health and social care system", so how does that work? Where to allocate a bed in a cold winter? Are beds moved by truck to another location (impeding nurses and doctors as more aid needs to be given at that location), or will it require the patient to move, which is actually simply done by finding out where a bed is available? The article is a worrying one, in the light that it was published at all, and I wonder if it was properly vetted, because there is a difference of many miles between a political science piece and an opinionated sales pitch. So my next step is to take a very critical look at "Smarter Health: Boosting Analytical Capacity at NHS England", because my spidey sense is tingling and I might find more worrying ammunition in that piece.

 


Filed under IT, Media, Politics, Science

The Dangerous Zuckerberg Classification

Even as Microsoft seems to be quiet and in denial about what is uploaded without consent, we have a second issue floating to the surface of our lives. Now, first of all, this link is not what we should consider a news site. What came from Forward.com is also known as The Jewish Daily Forward, published by Samuel Norich, with Jane Eisner as editor. Its origins go back to 1897, so it has been around for a while. They are not some new wannabe-on-the-block. It is an American newspaper published in New York City for a Jewish-American audience, and there are plenty of those around, so this is a valid niche publication. Yet no more than a day ago it did something dangerous, perhaps unintentional and perhaps a sign of the times, but it remains a dangerous path to take.

This path all started when Mark Zuckerberg had an idea. He created this place called Facebook, you might have heard of it. Within there we get to 'like' things. Now, we can do this to compliment the poster, we can do this because the subject interests us, or, when we use the machine correctly, Facebook will send us more stuff on topics that we like. This already shows three different approaches to a 'like', and when Forward starts the article with "Canadian Mosque Shooter Suspect 'Liked' Israel Defense Forces, Marine LePen", it basically shot itself in the foot.

This is part of the problems we are all facing, because the world is changing and it has shifted the values that we have given words over time, shifting them into concepts of what they might be. We see the same shift in the Business Intelligence industry as tools like SPSS (read: IBM Statistics) are no longer used to get the significant statistics needed, and the 'sellers' of the story that the client wants told rely on tools like Q Software to tell the story that matches the need. The problem is that this story reflects what is offered, and there is more than one identifier (weight being one) that the reflection is less accurate and often warped to fit the need of the receiver of these data files. Meaning that the actual meaning is unlikely to be there, making a correct assessment not possible, and any action based upon it, without scrutiny, will come at a hefty price for the decision makers down the track.

So when we see "Canadian Mosque Shooter Suspect 'Liked' Israel Defense Forces, Marine LePen" we need to be cautious at best; at worst we are being told a fair bit of rubbish! Now we also get "Authorities claim that Alexander Bissonnette, a student at the city's Laval University, perpetrated the attack, calling in from a bridge near the mosque to report himself", which could be very true, but it also undermines the first signs we see of a 'lone wolf', because a real lone wolf will, if he or she is lucky, go into the night without a trace and plan his or her next attack. This attacker seems to be seeking the limelight, as I personally see it. For what reason is at present unknown; perhaps it is about fame, perhaps the investigation will find evidence of mental health issues. Time and the proper people will need to assess this. We see this in the picture of a tweet by @Rita_Katz when she states 'making Jihadi ties unlikely', which could be true, however I got there via another route. What is interesting is that when we look at the Toronto Star we see "Rosalie Bussieres, 23, lives across the street. She told the Star her older brother was in school with Bissonnette. He was "very solitary" and "very antisocial," said Bussieres. Bissonnette studied at the Université Laval, according to a statement released by the university late Monday. He was a student in the department of political science and anthropology, according to Jean-Claude Dufour, Dean of the Faculty of Agriculture and Food Sciences".

This is interesting, as those in political science tend to be decently socially minded, so there is a lot more under the water than we think there is, and the fact that Forward only gave us the likes means that there is a part they either ignored or overlooked. You see, what else did his Facebook account have to say?

The Toronto Star gives us a lot more: "He was on both the Sainte-Foy and Université Laval chess club". With Forward we got more on Rita Katz. "Rita Katz is the Executive Director and founder of the SITE Intelligence Group" is one part, and the next part is the one we should consider: "the world's leading non-governmental counterterrorism organization", as well as "Ms. Katz has tracked and analyzed global terrorism and jihadi networks for nearly two decades, and is well-recognized as one of the most knowledgeable and reliable experts in the field". Which makes me wonder why it is the Toronto Star who gives us the part I did not initially show: "with his twin brother, said Université Laval professor Jean Sévigny, who said he knew Bissonnette and his brother through the club". So how come The Forward didn't have the goods on that?

Yet they did give us “François Deschamps, member of Quebec’s Refugee Welcome Committee, told the La Presse newspaper that he recognized Bissonette because the man had often left hateful comments on the group’s page. “I flipped when I saw him,” he said. “We observe much of what the extreme right says and does. He’s made statements of that sort on our Facebook page. He also attacked women’s rights,” Deschamps recalled“. The full story is at http://forward.com/news/361614/canadian-mosque-shooter-suspect-liked-israel-defense-forces-marine-lepen/

So as we are invited to judge on likes, I see a hole in the intelligence. How many friends? How many clubs? Was he linked to chess groups? Was he linked to his twin brother, and was his twin brother on Facebook? There is no one mentioning whether the twin brother was reached and what he had to say (if he had been willing to talk), which he might not be willing to do, and that is perfectly understandable. It is just such a weird experience to see a total lack of effort in that regard (especially by the press).

Forward is telling its readers a story, yet the Toronto Star (at https://www.thestar.com/news/canada/2017/01/30/six-dead-two-arrested-after-shooting-at-quebec-city-mosque.html) seems to offer a lot more. In that view, ABC News in Australia blunders (as I personally see it) even more when we see (at http://www.abc.net.au/news/2017-01-31/quebec-city-mosque-shooting-lone-wolf-attack-student-charged/8225294) 'Police charge 'lone wolf' student suspected of terrorist attack'. So what evidence is there? What is the definition of a lone wolf? Perhaps we need to agree on the shifting sands and make sure it is sand and not quicksand. They both might contain the same four letters, but the experience will be mind-bogglingly different.

So as we now see that the US is using this attack to justify its actions, we need to take heed of the dangers we invite. The first is like the attack in Sydney, Australia, at Martin Place on December 15-16, 2014. We again see a link to extremism that is incorrect and misleading. Yes, the act was extreme, but we have seen for decades how mental health patients are very able to act in extreme ways. You only need to watch the footage from the Paris attacks to see clearly that actions in places like Nairobi and Paris are different from events in places like Martin Place and perhaps the Quebec mosque.

We can argue about how correct the FBI definition is, yet it is an important one! "Terrorism is the unlawful use of force and violence against persons or property to intimidate or coerce a government, the civilian population, or any segment thereof, in furtherance of political or social objectives". So what were the social and political objectives of Alexander Bissonnette?

There is a lot we don't know and won't know. Yet at present Forward is presenting the dangers that social media rely on: quick and classifiable actions, labelled in the most general way possible. The danger that we see in the Zuckerberg classification is that it relies on the quick acceptance of the 'audience', yet in the same way the danger is that the 'like' itself becomes a problem. You see, too many elements are about specifics, and as we see less and less, people in general will start to rely on an aggregation of 'reportable elements', not even on an aggregation of facts.

Heavy.com, another place that is not really a news site, gives us a whole range of additional 'facts'. They refer to Reuters, who reported (at http://www.reuters.com/article/us-canada-mosque-shooting-idUSKBN15E04S) "Initially, the mosque president said five people were killed and a witness said up to three gunmen had fired on about 40 people inside the Quebec City Islamic Cultural Centre. Police said only two people were involved in the attack"; in that part the lone wolf no longer applies and it is either 'lone wolves' or something else. Forward however gave us "Police investigating the shooting at a Quebec mosque that killed six have narrowed down their list of suspects to one man". Yet 5 hours after the initial message Reuters (at http://www.reuters.com/article/us-canada-mosque-shooting-toll-idUSKBN15E0F6) gives us "Police declined to discuss possible motives for the shooting at the Centre Culturel Islamique de Québec. "They consider this a lone wolf situation," a Canadian source familiar with the situation said", which is a statement that should be under some scrutiny to say the least.

All this links to an event one year ago, which was covered in the Tech Times, where we see 'Sheryl Sandberg Sees Facebook Likes As Powerful Weapon Against ISIS, Other Extremists' with the quote "Rather than scream and protest, they got 100,000 people to Like the page, who did not Like the page and put messages of tolerance on the page, so when you got to the page, it changed the content and what was a page filled with hatred and intolerance was then tolerance and messages of hope". This is now a linked issue. You see the part 'they got 100,000 people to Like the page, who did not Like the page'; this implies that data was interfered with, so if that is happening, how reliable was the 'like' part in Forward.com?

Then there is the fact that papers all over the place are trying to cash in on this by adding a page with 'the latest facts' or 'what we know at present', like The Globe and Mail, whilst showing an avalanche of news on the matter. Actually, the page The Globe and Mail brought was pretty good. It is Heavy.com that does something similar, yet at that point they move into the '5 things you need to know' mode and give us a stream of links: links to classmates and how they thought. Yet are these facts correct and complete? Heavy links to the Globe and Mail, and in addition gives us the part we needed to hear: "He also likes U.S. Senator John McCain, a moderate Republican who has opposed Trump on some issues, President George W. Bush, the Canadian New Democratic Party and late Canadian politician Jack Layton, who was a leader of the left-wing NDP, so the likes do not shed much light on Bissonnette's beliefs". Forward.com, and as such the linked SITE Intelligence Group, had nothing on any of that in the article. So anyone relying on Forward is now missing out on essential facts. In equal measure, the fact that many of these items are not voiced by other papers makes the statements of Heavy.com equally an issue until confirmed.

And finally there is the impact of how the like was obtained. Plenty of sources started with a few ‘like to win’ campaigns. How many people have clicked on a like and forgot about doing so? Yet in this light, the ‘like’ is implied to have a much larger impact, much larger than the user considers or even comprehends. The places using those likes for telling a story have left that concept behind, giving us unclean and incorrect data, which now implies that any conclusion based on it is pretty much useless.

Be aware, I am not stating, or accusing these posters of fake news, yet there is the option that some will see it as such. As I stated at the beginning regarding Forward.com, their origin goes back to 1897, which means that they have been around for some time. So why were so many facts missed and why did Forward link this suspect to both the Israel Defense Forces and Marine LePen, especially in light of what others reported?

What is not related to the Facebook side is the news that the initial report of two shooters (up to three) has now been reduced to just the one. When a witness states up to three, there is some clarity in assuming (to some degree) that there was more than one shooter (which is a speculation from my side). So what happened to the second one? Just be aware that there might just have been one shooter, yet the documentation we are seeing implies more than one.

So how is this a Zuckerberg thing?

Well, apart from him inventing Facebook and bringing about the evolution of social media, his 'like' is almost like his 'poke': they are social media tools, yet the value users tend to give them differs, and it is even debatable whether the users at large could ever agree on the usage, making it a transient value. A shifting number, whilst the contemplators cannot agree on how the value is to be used, so the usage of 'like' in the way the press used it becomes a debate as well. Because what we like implies where we stand. That is not a given; more importantly, it is incomplete. You see, you can state your like, but as you cannot state a dislike, we end up having no real comparison. It is the old debate of yes and no dichotomies: if you did not say 'yes', there is no validity in assuming you stated 'no', because it might have been overlooked, or it was the fourth option in a list of three. There is a decent abundance of reasons to take that point of view.
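A tiny illustration of that missing dichotomy (the numbers are invented): two very different audiences can produce exactly the same 'like' count, because the absence of a like carries no information at all.

  # two invented audiences of 1,000 people each
  audience_a = {"liked": 100, "disliked": 50,  "never_saw_the_page": 850}
  audience_b = {"liked": 100, "disliked": 700, "never_saw_the_page": 200}

  # only the first number is ever exposed, so both audiences look identical
  print(audience_a["liked"] == audience_b["liked"])   # True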

[Image: Fox News poll on the refugee ban]

Let me show this in another way, with the Fox poll on the refugee ban (see image). We see the full story at http://insider.foxnews.com/2017/01/29/poll-nearly-half-america-voters-support-trumps-immigration-order, but what we do not see are the specifics of what produced this value. You see, we do not know the number of responses, where it was done and when it was done. It is at https://poll.qu.edu/ that we learn part of the facts: "From January 5 – 9, Quinnipiac University surveyed 899 voters nationwide with a margin of error of +/- 3.3 percentage points". Can anyone explain to me how Fox was so stupid as to use a base of 899 to set a national value? Doesn't the United States have around 320 million people? And as we realise that there are 50 states, how can 18 people be significant for the view in a state? And this is before we consider whether the use of gender was normalised, because men and women tend to feel differently about emotional issues, and if there is one element in abundance on issues concerning refugees, it is emotion.
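For those wondering where these numbers come from, a quick back-of-the-envelope check (a minimal sketch only; it reproduces the raw arithmetic, not the weighting Quinnipiac actually applied):

  import math

  n = 899                                   # respondents in the Quinnipiac survey
  moe = 1.96 * math.sqrt(0.5 * 0.5 / n)     # worst-case 95% margin of error
  print(round(moe * 100, 1))                # -> 3.3 percentage points, as quoted
  print(round(n / 50))                      # -> 18 respondents per state, if spread evenly
  print(round(n / 320e6 * 100, 5))          # -> 0.00028% of roughly 320 million people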

 

So in all this, we see recurring waves of generalisation and trivialisation. Mark Zuckerberg is not to blame, but he is a factor. In addition there is an overwhelming lack of education of the customer base (by both Fox and Facebook), so we need to consider the dangers as well as the irrelevance of these 'revelations'. It is in this scope, and in the application as we saw it used, that classification becomes dangerous and a danger, because how will the people around a person react when they see that this person likes something people find offensive (and that is when we keep it to simple things like actors, actresses and politicians)? This will impact the like, as there will be peer pressure, so how can this Zuckerberg element be undermined? That is the actual question!

Is it as simple as condemning the press for using the fact? Is it as simple as giving out complete information? The Zuckerberg classifications are here to stay, there is nothing against them and the fact that they exist is in no way negative, but the usage leaves a lot to be desired, and as such it is misleading. Beyond 'this person clicked on the like button of this page, for reasons unknown', giving it any more value is as meaningless as setting the national acceptance of a refugee ban based on 899 unquantifiable votes, which represent at best 0.00028% of the United States population. If any vote was incorrectly vetted, the number will go down fast, making the poll even more useless.

 


Filed under Media, Politics, Science

Room for Requirement

I looked at a few issues 3 days ago. I voiced them in my blog ‘The Right Tone‘ (at https://lawlordtobe.com/2016/09/21/the-right-tone/), one day later we see ‘MI6 to recruit hundreds more staff in response to digital technology‘ (at https://www.theguardian.com/uk-news/2016/sep/21/mi6-recruit-digital-internet-social-media), what is interesting here is the quote “The information revolution fundamentally changes our operating environment. In five years’ time there will be two sorts of intelligence services: those that understand this fact and have prospered, and those that don’t and haven’t. And I’m determined that MI6 will be in the former category“, now compare it to the statement I had made one day earlier “The intelligence community needs a new kind of technological solution that is set on a different premise. Not just who is possibly guilty, but the ability of aggregation of data flags, where not to waste resources“, which is just one of many sides needed. Alex Younger also said: “Our opponents, who are unconstrained by conditions of lawfulness or proportionality, can use these capabilities to gain increasing visibility of our activities which means that we have to completely change the way that we do stuff”, I reckon the American expression: ‘He ain’t whistling Dixie‘ applies.

You see, the issue goes deeper than mere approach; the issue at hand is technology. The technology needs to change and the way data is handled requires evolution. I have been in the data field since the late 80s and this field hasn't changed too much. Let's face it, parsing data is not a field that has seen much evolution, for the mere reason that parsing is parsing and that is all about speed. So to put it on a different vehicle: we are entering an age where the intelligence community is about the haulage of data, yet in all this it is the container itself that grows whilst the haulage is en route. So we need to find alternative ways to deal with the container content whilst en route.

Consider the data premise: 'if data that needs processing grows by 500 man-years of work on a daily basis', we have to either process smarter, create more solutions to process, be smarter about what and how to process, or change the premise of time. Now let's take another look. For this, let's take a look at a game, the game 'No Man's Sky'. This is not about gaming, but about the design. For decades games were drawn and loaded: a map, with its data map (quite literally so), usually the largest part of the entire game. Eleven people decided to use a formula to procedurally generate 18 quintillion planets. They created a formula to map the universe with planets, each planet-sized. This has never been done before! This is an important part. They turned it all around and, moreover, they are sitting on a solution that is worth millions, it could even be worth billions. The reason to use this example is because games are usually the first field where the edge of hardware options is surpassed, broken and redesigned (and there is more at the end of this article). These are issues that require addressing in the data field too.
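To illustrate the principle (a minimal sketch only, and emphatically not the actual No Man's Sky algorithm): instead of storing a map, you store a formula, and any coordinate fed into that formula deterministically regenerates the same content every time, so nothing needs to be kept on disk.

  import hashlib

  def planet(seed: int, x: int, y: int, z: int) -> dict:
      # derive a deterministic value from the seed and the coordinates;
      # the same inputs always regenerate exactly the same planet
      digest = hashlib.sha256(f"{seed}:{x}:{y}:{z}".encode()).digest()
      radius_km = 2000 + int.from_bytes(digest[0:2], "big") % 10000
      water_pct = digest[2] % 101
      has_life = digest[3] % 4 == 0
      return {"radius_km": radius_km, "water_pct": water_pct, "has_life": has_life}

  # the 'universe' is never stored; it is recomputed on demand
  print(planet(42, 10, -3, 7))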

Yet what approach would work?

That is pretty much the £1 billion question. Consider the following situation: data is being collected non-stop, minute by minute, set into all kinds of data repositories. Now let's have a fictive case. The chatter indicates that in 72 hours an attack will take place, somewhere in the UK. It gives us the premise:

  1. Who
  2. Where
  3. How

Now consider the data. If we have all the phone records: who has been contacting whom, through what methods and when? You see, it isn't about the data, it is about linking collections from different sources and finding the right needle, whilst the location, shape and size of the haystack are unknown. Now, let's say that the terrorist was really stupid and that number is known. So now we have to get a list of all the numbers that this phone has dialled. Then we get the task of linking the information on these people (when they are not pre-paid or burner phones). Next is the task of getting a profile, contacts, places and other information. The list goes on, and the complexity isn't just the data; actual terrorists are not dumb and usually massively paranoid, so there is a limit to the data available.
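A minimal sketch of that linking step (the tables, field names and numbers are invented for illustration; real call detail records and subscriber extracts look nothing this tidy): join the suspect's call records from one source against subscriber registrations from another and see where the trail goes cold.

  from collections import defaultdict

  # source A: telco call detail records (invented)
  call_records = [
      {"caller": "07700900001", "callee": "07700900002", "when": "2016-09-20T10:15"},
      {"caller": "07700900001", "callee": "07700900003", "when": "2016-09-21T18:40"},
  ]
  # source B: subscriber registrations (invented)
  subscribers = {
      "07700900002": {"name": "J. Doe", "city": "Leeds"},
      # 07700900003 is a pre-paid burner: no registration, the link goes cold here
  }

  suspect = "07700900001"
  contacts = defaultdict(list)
  for rec in call_records:
      if rec["caller"] == suspect:
          # pair every dialled number with whatever the second source knows about it
          contacts[rec["callee"]].append(subscribers.get(rec["callee"]))

  for number, details in contacts.items():
      print(number, details)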

Now what if this was not reactive, but proactive?

What if the data from all the sources could be linked? Social media, e-mail, connections, forums, and that is just the directly stored data. When we add mobile devices, smartphones, tablets and laptops, there is a massive amount of additional data that becomes available, and the amount of data from those sources is growing at an alarming rate. The challenge is to correctly link the data from sources, with added data sources that contain aggregated data. So, how do you connect these different sources? I am not talking about the usage; it is about the impaired data on different foundations with no way to tell whether pairing leads to anything. For this I need to head towards a 2012 article by Hsinchun Chen (attached at the end). Apart from the clarity that we see in the BI&A overview (Evolution, Application and Emerging Research), the interesting part is that even when we just look at it from a BI point of view, we see two paths missing. That is, they seem to be missing now; if we look back to 2010-2011, the fact that Google and Apple grew a market in excess of 100% quarter on quarter was not to be anticipated to that degree. The image on page 1167 has Big Data Analytics and Mobile Analytics, yet Predictive Interactivity and Mobile Predictive Analytics were not part of the map, even though the growth of Predictive Analytics has been part of BI from 2005 onwards. Just in case you were wondering, I did not change subject: the software that this part of the intelligence world needs comes from the business side. A company usually sees a lot more business from 23 million global companies than it gets from 23 intelligence agencies. The BI part is often much easier to see and track whilst both needs are served. We see a shift of it all when we look at the table on page 1169. BI&A 3.0 now gets us the Gartner Hype Cycle with the key characteristics:

  1. Location-aware analysis
  2. Person-centred analysis
  3. Context-relevant analysis
  4. Mobile visualization & HCI

This is where we see the jump when we relate it to places like Palantir, which is now in the weeds prepping for war. TechCrunch (at https://techcrunch.com/2016/06/24/why-a-palantir-ipo-might-not-be-far-off/) mentioned in June that it had taken certain steps and had been preparing for an IPO. I cannot say how deep that part was, yet when we line up a few parts we see an incomplete story. The headline in July was: 'Palantir sues investor Marc Abramowitz for allegedly stealing company secrets'. I think the story goes a little further than that. It is my personal belief that Palantir has figured something out. That part was seen 3 days ago (at http://www.defensenews.com/articles/dcgs-commentary); the two quotes that matter are "The Army's Distributed Common Ground System (DCGS) is proof of this fact. For the better part of the last decade, the Army has struggled to build DCGS from the ground up as the primary intelligence tool for soldiers on the battlefield. As an overarching enterprise, DCGS is a legitimate and worthwhile endeavour, intended to compute and store massive amounts of data and deliver information in real time", which gives us (actually just you, the reader) the background, whilst "What the Army has created, although well-intentioned, is a sluggish system that is difficult to use, layered with complications and unable to sustain the constant demands of intelligence analysts and soldiers in combat. The cost to taxpayers has been approximated at $4 billion" gives us the realistic scope, and that all links back to the intelligence community. I think that someone at Palantir has worked out a few complications, making their product the one winning solution. When I started to look into the matter, some parts did not make sense. Even if we take the third statement (which I was already aware of long before this year), "In legal testimony, an Army official acknowledged giving a reporter a "negative" and "not scientific" document about Palantir's capabilities that was written by a staff member but formatted to appear like a report from the International Security Assistance Force. That same official stated that the document was not based on scientific data", it would not have added up. What does add up (remember, the next part is speculative) is that the data links required at the beginning of this article have to a larger extent been resolved by the Palantir engineers. In its foundation, what the journal refers to as BI&A 3.0 has been resolved by Palantir (to some extent). If true, we will get a massive market shift. To make a comparison, Google Analytics might be regarded as MS-DOS and this new solution makes Palantir the new SE-Linux edition; the difference could be that big. And I can tell you that Google Analytics is big. Palantir got the puzzle piece, making its value go up by billions. They could raise their value from 20 billion to 60-80 billion, because IBM has never worked out that part of analytics (whatever they claim to have is utterly inferior) and Google does have a mobile analytics part, but a limited one, as it is for a very different market. There have always been issues with the DCGS-A system (apart from it being as cumbersome as a 1990 SAS mainframe edition), so it seems to me that Palantir could not make the deeper jump into government contracts until it got the proper references, and showing it was intentionally kept out of the loop is also evidence that could help. That part was recently confirmed by US Defense News.

In addition there is the acceptance of Palantir Gotham, which offered 30% more work with the same staff levels, and Palantir apparently delivered, which addresses a massive point the intelligence groups are dealing with: the lack of resources. The job has allowed New York City to crack down on illegal AirBnB rentals, a task that requires connecting multiple systems and data that were never designed to link together. This now gets us to the part that matters: the implication is that the Gotham core would allow for dealing with the digital data groups like tablet, mobile and streaming data from internet sites.

When we combine the information (still making it highly speculative) with the fact that one Congressman crossed the bridge (Duncan Hunter, R-CA), many could follow. That part matters, as Palantir can only grow the solution if it is seen as the serious solution within the US government. The alleged false statements the army made (as seen in Defense News at http://www.defensenews.com/articles/dcgs-commentary), which I personally believe were made to keep in the shadows that DCGS-A was not the big success some claimed it to be, will impact it all.

And this now links to the mentions I made of the academic paper when we look at page 1174, regarding the Emerging Research for Mobile Analytics. The options:

  1. Mobile Pervasive Apps
  2. Mobile Sensing Apps
  3. Mobile Social Networking
  4. Mobile Visualization/HCI
  5. Personalization and Behavioural Modelling

Parts that are a given, and the big players have some sort of top-line reporting, but if I am correct and it is indeed the case that Palantir has figured a few things out, they are now sitting on the mother lode, because there is currently nothing that can do any of it anywhere close to real time. Should this be true, Palantir would end up being the only player in town in that field, an advantage corporations haven't had to this extent since the late 80s. It is the kind of edge SPSS used to have before it decided to cater to the smallest iteration of 'acceptable'; now, as IBM Statistics, they really haven't moved forward that much.

Now let's face it, these are all consumer solutions, yet Palantir has a finance option which is now interesting, as Intelligence Online reported a little over a week ago: "The joint venture between Palantir and Credit Suisse has hired a number of former interception and financial intelligence officials", meaning that the financial intelligence industry is getting its own hunters to deal with. If any of those greedy jackals have been getting their deals via their iPhone, they will be lighting up like a Christmas tree on those data sets. So in 2017, the finance/business section of the newspapers should be fun to watch!

The fact that those other players are now getting a new threat with actual working solutions should hurt plenty too, especially in the lost revenue section of their spreadsheet.

In the final part, why did I make the No Man's Sky reference? You see, that is part of it all. As stated earlier, it used a formula to create planet-sized planets, which is one side of the equation. Yet the algorithm could be reversed. There is nothing stopping the makers from scanning a map and deriving a formula that recreates that map. For the gaming industry it would be worth a fortune. However, that application could go a lot further. What if the geospatial data is not a fictive map, but an actual one? What if one type of tree is not a tree but a mobile user, and the other type of tree is a networking node? It would be the first move towards setting geospatial data in a framework of personalised behavioural modelling against a predictive framework. Now, there is no way that we know where the person would go, yet this would be a massive first step in answering 'who not to look for' and 'where not to look', diminishing a resource drain to say the least.
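A minimal sketch of that last idea (the grid, the attributes and the scoring weights are all invented for illustration): score each cell of a map with a crude behavioural model and discard the cells that fall below a threshold, so no resources are ever spent looking there.

  # toy 'where not to look' filter; weights and data are invented
  cells = {
      (0, 0): {"mobile_users": 120, "network_nodes": 3},
      (0, 1): {"mobile_users": 2,   "network_nodes": 0},
      (1, 0): {"mobile_users": 45,  "network_nodes": 1},
      (1, 1): {"mobile_users": 0,   "network_nodes": 0},
  }

  def score(cell: dict) -> float:
      # crude stand-in for a personalised behavioural / predictive model
      return cell["mobile_users"] * 0.8 + cell["network_nodes"] * 10

  threshold = 5.0
  not_worth_a_resource = [pos for pos, cell in cells.items() if score(cell) < threshold]
  print("where not to look:", not_worth_a_resource)   # -> [(0, 1), (1, 1)]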

It would be a game changer for non-gamers!

[Attachment: special_issue_business_intelligence_rese (the 2012 Hsinchun Chen BI&A article referenced above)]

 


Filed under Finance, IT, Military, Politics, Science