
Telstra, NATO and the USA

There are three events happening, three events that made the limelight. Only two seem to have a clear connection, yet that is not true; they all link, although not in the way you might think.

Telstra Calling

The Guardian (at https://www.theguardian.com/business/2018/jun/20/telstra-to-cut-8000-jobs-in-major-restructure) starts with ‘Telstra to cut 8,000 jobs in major restructure‘. Larger players will restructure in one way or another at some point, and it seems that Telstra is going through the same phase my old company went through 20 years ago. The reason is simple, and even if it is not stated as such, it boils down to a simple ‘too many captains on one ship‘. So cut the chaff and go on. It also means that Telstra would be able to hire a much stronger customer service and customer support division. Basically, it can cut the overhead and proclaim that it worked on the ‘costing’ side of the corporation. It is one way to think. Yet then we see: “It plans to split its infrastructure assets into a new wholly owned business unit in preparation for a potential demerger, or the entry of a strategic investor, in a post-national broadband network rollout world. The new business unit will be called InfraCo“. That is not a reorganisation that pushes the bad debts and bad mortgages out of the corporation and lets it (optionally) collapse. The congestion of the NBN alone warrants such a move, but in reality the entire NBN mess was delayed for half a decade whilst relying on technology from the previous generation. With 5G coming closer and closer, Telstra needs to make moves and set new goals; it cannot do that without a much better customer service and a decently sized customer support division. From there on, consultants will be highly needed, so the new hiring spree will come at some stage.
The ARNnet quote from last month: “Shares of Australia’s largest telco operator Telstra (ASX:TLS) tumbled to their lowest in nearly seven years on 22 May, after the firm was hit by a second major mobile network service outage in the space of a month“, does not come close to the havoc they face; it is not often that one party pisses off the shareholders, the stakeholders and the advertisers in one go, but Telstra pulled it off!

A mere software fault was blamed. This implies that the testing and QA stage has issues too; if there is going to be a Telstra 5G, that is not a message you want to broadcast. The problem is that even as some say that Telstra is beginning to roll out 5G now, I am afraid that those people are about to be less happy soon thereafter. You see, Telstra did this before with 4G, which was basically 3.5G. Now we see the Business Insider give us ‘Telstra will roll out 2Gbps speeds across Australian CBDs within months‘, but 2Gbps and 10Gbps are not the same; one is merely 20% of the other, so there! Oh, and in case you forgot the previous part: it was news in 2011 when ABC gave us (at http://www.abc.net.au/technology/articles/2011/09/28/3327530.htm) “It’s worth pointing out that that what Telstra is calling 4G isn’t 4G at all. What Telstra has deployed is 1800MHz LTE or 3GPP LTE that at a specification level should cap out at a download speed of 100Mb/s and upload speed of 50Mbps [ed: and the public wonders why we can’t just call it 4G?]. Telstra’s sensibly not even claiming those figures, but a properly-certified solution that can actually lay claim to a 4G label should be capable of downloads at 1 gigabit per second; that’s the official 4G variant known as LTE-A. Telstra’s equipment should be upgradeable to LTE-A at a later date, but for now what it’s actually selling under a ‘4G’ label is more like 3.7-3.8G. “3.7ish G” doesn’t sound anywhere near as impressive on an advertising billboard, though, so Telstra 4G it is“, which reflects the words of Jeremy Irons in Margin Call when he states: “You can be the best, you can be first or you can cheat“. I personally think that Telstra is basically doing what they did as reported in 2011 and they will market it as ‘5G’, giving premise to two of the elements that Jeremy Irons mentioned.
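The arithmetic behind both comparisons is simple enough to sketch; the figures below are the ones quoted in the article and the ABC piece (assumed here as given, not independent measurements):

```python
# Illustrative arithmetic only: advertised peak speed as a fraction of the
# speed the standard (or the next generation) is expected to deliver.
def fraction_of_spec(advertised_gbps: float, spec_gbps: float) -> float:
    """Return the advertised speed as a fraction of the specification."""
    return advertised_gbps / spec_gbps

# The 5G comparison from the text: 2 Gbps offered vs 10 Gbps expected.
print(f"{fraction_of_spec(2, 10):.0%}")   # prints 20%

# The 2011 '4G' comparison: 100 Mb/s LTE cap vs 1 Gb/s for proper LTE-A.
print(f"{fraction_of_spec(0.1, 1):.0%}")  # prints 10%
```

On those numbers, the '4G' label of 2011 was even further from its specification than the '5G' label discussed here would be.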

This now gives a different visibility to the SMH article last week (at https://www.smh.com.au/business/companies/how-a-huawei-5g-ban-is-about-more-than-espionage-20180614-p4zlhf.html), where we see “The expected ban of controversial Chinese equipment maker Huawei from 5G mobile networks in Australia on fears of espionage reads like a plot point from a John le Carre novel. But the decision will have an impact on Australia’s $40 billion a year telecoms market – potentially hurting Telstra’s rivals“, as well as “The Sydney Morning Herald and The Age reported in March that there were serious concerns within the Turnbull government about Huawei’s potential role in 5G – a new wireless standard that could be up to 10 times as powerful as existing mobile services, and used to power internet connections for a range of consumer devices beyond phones“. You see, I do not read it like that. From my point of view I see “There are fears within the inner circle of Telstra friends that Huawei, who is expected to offer actual 5G capability, will hurt Telstra as they are not ready to offer anything near those capabilities. The interconnectivity that 5G offers cannot be done in the currently upgradable Telstra setting of a mere 2Gbps, which is 20% of what is required, leaving the Telstra customers outside of the full range of options in the IoT in the near future, which will cost them loads of bonus and income opportunities“. This gives two parts: apart from Optus getting a much larger slice of the cake, the setting is not merely that the consumers and 5G-oriented businesses are missing out, private firms can only move forward at the speed that Telstra dictates. So who elected Telstra as techno rulers?
As for Huawei being “accused of spying by lawmakers in the US“, that remains unfounded, as up to now no actual evidence has been provided by anyone; whilst at the same time, only a week ago, the Guardian gave us ‘Apple to close iPhone security gap police use to collect evidence‘, giving a clear notion that in the US the police and FBI were in a stage where they were “allowed to obtain personal information from locked iPhones without a password, a change that will thwart law enforcement agencies that have been exploiting the vulnerability to collect evidence in criminal investigations“, which basically states that the US was spying on US citizens and people with an iPhone all along (or at least for the longest of times). It is a smudgy setting of the pot calling the kettle a tea muffler.

The fact that we are faced with this, and that we prefer to be spied on through a phone that is 50% cheaper, is not the worst idea. In the end, data will be collected; it merely runs against the US fear that the collected data is no longer in the US, but in places where the US no longer has access. That seems to be the setting we are confronted with, and it has always been the setting of Malcolm Turnbull to cater to the Americans as much as possible, yet in this case, how exactly does Australia profit? I am not talking about the 37 high and mighty Telstra ‘friends’. I am talking about the 24,132,557 other Australians on this island. What about their needs? If only to allow them to do more than merely get by on paying bills and buying food.

Short term and short sighted

This gets us to something only thinly related, when we see the US situation in ‘Nato chief warns over future of transatlantic relationship‘. The news (at https://www.theguardian.com/world/2018/jun/19/transatlantic-relationship-at-risk-says-nato-chief) actually has two sides, the US side and the side of NATO. NATO is worried about being able to function at all. It is leveraged up to the forehead in debts, and if they come to fruition, and they will, they all drown, and that would require the 27-nation bloc to drastically reduce defence spending. It is already trying to tailor a European defence force, which is a logistical nightmare six ways from Sunday, and that is before many realise that the communication standards tend to be ‘very national’ and not much beyond that point. In that regard the US was clever with some of their ITT solutions in 1978-1983. Their corn flaky phones (a Kellogg joke) worked quite well and they lasted a decent amount of time. In Europe, most nations were bound to the local provider act, and as such there were all kinds of issues, each provider with its own little quirks. So even as we read: “Since the alliance was created almost 70 years ago, the people of Europe and North America have enjoyed an unprecedented period of peace and prosperity. But, at the political level, the ties which bind us are under strain“, yup, that sounds nice, but the alliances are under strain by how Wall Street thinks the funding needs to go, and defence is not their first priority; greed is in charge, plain and simple. Now, to be fair, on the US side, their long-term commitment to defence spending has been over the top, and the decade following 11 September 2001 did not help. The spending went from 10% of GDP up to almost 20% of GDP between 2001 and 2010. It is currently at about 12%, yet this number is dangerous as the economy collapsed in 2008, so it basically went from $60 billion to $150 billion, which hampered the infrastructure to no end.
In addition we get the splashing of funds towards intelligence consultants (former employees, who got 350% more when they turned private), so that expenditure also became an issue; after that we see a whole range of data gathering solutions, from the verbose (and not too user-friendly) MIIDS/IDB onward.

In CONUS (or, as you might understand more clearly, the contiguous 48 United States; without Alaska and Hawaii), the US Army Forces Command (FORSCOM) Automated Intelligence Support Activity (FAISA) at Fort Bragg, NC, supports access to the MIIDS and IDB by tactical users of the ASAS, and it maintains a complete copy of DIA’s MIIDS and IDB and updates file transactions in order to support the tactical user. So there are two systems (actually there are more), and the initial ASAS Block I software does not allow for direct access from ASAS to the FAISA system. So, to accomplish file transfer of MIIDS and IDB files, we are introduced to a whole range of resources to get to the data: the unit will need one or more intermediate hosts on the LAN that will do the job. In most cases, support personnel will accomplish all the file transfers for the unit requesting that intel. Now consider 27 national defence forces, one European one, and none of them has a clue how to get one to the other. I am willing to wager $50 that it will take less than 10 updates for data to mismatch and turn the FAISA system into a FAUDA (Arabic for chaos) storage system, with every update taking more and more time until the update surpasses the operational timeframe. That is ample and to the point, as there is a growing concern to have better ties with both Israel and Saudi Arabia; what a lovely nightmare for the NSA as it receives (optionally on a daily basis) 9 updates all containing partially the same data (Army-Navy, Army-Air Force, Army-Marines, Navy-Air Force, Navy-Marines, Air Force-Marines, DIA, DHS and FAISA HQ). Yes, that is one way to keep loads of people employed; the cleaning and vetting of data could require an additional 350 hours a day in people to get the vetting done between updates and packages.
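The scaling problem behind that wager is easy to see in code: n autonomous copies of a database that must all exchange updates pairwise need n*(n-1)/2 channels, each one a chance for the copies to drift apart. A minimal sketch (names and party counts are illustrative, not drawn from any actual system):

```python
from itertools import combinations

# Pairwise synchronisation channels between n parties that each hold their
# own copy of the data; every channel is a place where updates can mismatch.
def sync_channels(parties: list[str]) -> list[tuple[str, str]]:
    return list(combinations(parties, 2))

services = ["Army", "Navy", "Air Force", "Marines"]
print(len(sync_channels(services)))  # prints 6, the service pairs listed above

# Scale to 29 parties (say, the NATO members) and the channel count explodes.
nations = [f"nation-{i}" for i in range(29)]
print(len(sync_channels(nations)))   # prints 406
```

This is why a hub-and-spoke design (one authoritative copy, n channels) beats full pairwise replication long before you reach 29 members.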
In all this we might see how it is about needing each other, yet the clarity for the US is mostly “Of the 29 Nato members, only eight, including the US and the UK, spend more than 2% of their GDP on defence, a threshold that the alliance agreed should be met by all the countries by 2024. Germany spent €37bn (£32.5bn), or 1.2% of GDP, on defence last year“. It amounts to the US dumping billions in an area where 28 members seem to have lost the ability to agree on standards and talk straight to one another (a France vs Germany pun). In all this there is a larger issue, but we will now see that in part three.

Sometimes a cigar is an opportunity

You see, some saw the “‘Commie cadet’ who wore Che Guevara T-shirt kicked out of US army” as an issue instead of an opportunity. The article (at https://www.theguardian.com/us-news/2018/jun/19/west-point-commie-cadet-us-army-socialist-views-red-flags) gives light to some sides, but not to the option that the US basically threw out of the window. You see, the Bill of Rights, a mere piece of parchment that got doodled in 1789, offers things like ‘freedom to join a political party‘, which matters in the setting at present. The issue as I see it is the overwhelming hatred of Russia that is in play. Instead of sacking the man, the US had an opportunity to use him to see if a dialogue with Cuba could grow into something stronger and better over time. It might work, it might not, but at least there was one person who had the option to be the messenger between Cuba and the US, and that went out of the window in a heartbeat. So when we see: “Spenser Rapone said an investigation found he went online to advocate for a socialist revolution and disparage high-ranking officers and US officials. The army said in a statement only that it conducted a full investigation and “appropriate action was taken”“, was there a full investigation? To set this in a proper light, we need to look at NBC (at https://www.nbcnews.com/news/us-news/sexual-assault-reports-u-s-military-reach-record-high-pentagon-n753566), where we see: “Service members reported 6,172 cases of sexual assault in 2016 compared to 6,082 last year, an annual military report showed. This was a sharp jump from 2012 when 3,604 cases were reported“. We all should realise that the US defence forces have issues, a few of them a hell of a lot bigger than a person with a Che Guevara T-shirt. So when we ask for the full investigation reports of 6,172 cases, how many have really been investigated, or prosecuted?
NBC reported that “58 percent of victims experienced reprisals or retaliation for reporting sexual assault“, so how exactly were issues resolved?

Here we see the three events come together. There is a flawed mindset at work; it is flawed through what some might call deceptive conduct. We seem to cling to labels, and when it backfires we tend to see messages like ‘there were miscommunications hampering the issues at hand‘, standards that cannot be agreed on, or, after there was an agreement, the individual players decide to upgrade their national documents and hinder progress. How is that ever going to resolve issues? In all this, greed and political needs seem to hinder other avenues through players that should not even be allowed to have a choice in the matter. It is the setting where for decades the politicians have painted themselves into a corner and are no longer able to function until a complete overhaul is made, and that is the problem: a solution like that costs a serious amount of funds, funds that are not available, not in the US and not in Europe. The defence spending that cannot happen, the technology that is not what is specified and that marketing will merely label into something it is not, because it is easier to sell that way. A failing on more than one level, and by the time we are all up to speed, the others (read: Huawei) have passed us by because they remained on the ball towards the required goal.

So as we are treated to: “A parliamentary hearing in Sydney got an extra touch of spice yesterday, after the chief executive of NBN Co appeared to finger one group of users supposedly responsible for congestion on NBN’s fixed wireless network: gamers“, whilst the direct setting given is “Online gaming requires hardly any bandwidth ~10+ megabytes per hour. A 720p video file requires ~ 500+ megabytes per hour. One user watching a YouTube video occupies the same bandwidth as ~50 video gamers“, we can argue who is correct, yet we forgot about option 3. As was stated last week, the largest two users among online games were Counterstrike (250MB/hour) and Destiny 2 (300 MB/hour), whilst the smallest TV watcher, ABC iView, used the same as Destiny 2, and the rest a multitude of that, with Netflix 4K using up to 1000% of what gamers used (in addition to the fact that there are now well over 7.5 million Netflix users, whilst the usage implies that to be on par, we would need 75 million gamers, three times the Australian population). Perhaps it is not the gamers, but a system that was badly designed from the start. Political interference in technology has been a detrimental setting in the US, Europe and Australia as well; the fact that politicians decide on ‘what is safe‘ is a larger issue when you put the issues next to one another. If we openly demand that the US reveal the security danger that Huawei is according to them, will they remain silent and let a ‘prominent friend‘ of Telstra speak?
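That back-of-the-envelope comparison can be checked directly; the MB-per-hour figures below are the ones quoted in the paragraph above (treated here as assumptions, not measurements):

```python
# The article's quoted hourly data usage (MB per hour) for heavy gamers and
# video streaming; Netflix 4K is taken at 1000% of a heavy gamer's usage.
usage_mb_per_hour = {
    "Counterstrike": 250,
    "Destiny 2": 300,
    "ABC iView": 300,
    "Netflix 4K": 3000,
}

gamer = usage_mb_per_hour["Destiny 2"]
for name, mb in usage_mb_per_hour.items():
    print(f"{name}: {mb / gamer:.1f}x a Destiny 2 player")

# 7.5 million Netflix 4K streamers would need ten times as many gamers,
# 75 million, to generate the same load, which is the article's point.
print(7_500_000 * usage_mb_per_hour["Netflix 4K"] // gamer)  # prints 75000000
```

On these numbers, one 4K streamer equals ten heavy gamers, which is why blaming gamers for fixed-wireless congestion does not add up.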

When we look one tier deeper into NATO, they themselves become the source (at https://www.nato-pa.int/document/2018-defence-innovation-capitalising-natos-science-and-technology-base-draft-report) with: ‘Capitalising on Nato’s Science and Technology Base‘. Here we see on page 5: “In an Alliance of sovereign states, the primary responsibility to maintain a robust defence S&T base and to discover, develop and adopt cutting-edge defence technologies lies with NATO member states themselves. Part of the answer lies in sufficient defence S&T and R&D budgets“. It is the part where we see: ‘adopt cutting-edge defence technologies lies with NATO member states themselves‘ as well as ‘sufficient defence S&T and R&D budgets‘. You introduce me to a person that shows a clear partnership between the needs of Philips (Netherlands) and Siemens (Germany) and I will introduce you to a person who is knowingly miscommunicating the hell out of the issue. You only need to see the 2016 financial assessment: “After divesting most of its former businesses, Philips today has a unique portfolio around healthy lifestyle and hospital solutions. Unlike competitors like GE Healthcare and Siemens Healthineers, the company covers the entire health continuum” and that is merely one field.

Rubber Duck closing in on small Destroyer.

In that, consider a military equivalent. The 5th best registered CIWS solution is the MK15 Phalanx (US), the 3rd position is held by the Dutch Goalkeeper (Thales Netherlands) and the 2nd best CIWS solution comes from the US with the Raytheon SeaRAM. Now, we would expect every nation to have its own solution, yet we see the SeaRAM was only adopted by Germany; why is it not found in the French, Italian, Spanish and Canadian navies? Belgium has the valid excuse that the system is too large for their RIB and dinghy fleet, but they are alone there. If there is to be true connectivity and shared values, why is this not a much better-set partnership? Now, I get that the Dutch are proud of their solution, yet in that entire top list of CIWS systems, a larger group of NATO members have nothing to that degree at all. So is ‘capitalising’ in the title of the NATO paper actually set to ‘gain advantage from‘, or is it ‘provide (someone) with capital‘? Both are options, and the outcome as well as the viability of the situation depend on which path you take. So are the Australians losing advantage from Telstra over Huawei, or are some people gaining huge lifestyle upgrades as Huawei is directed to no longer be an option?

I will let you decide, but the settings are pushing all boundaries and overall the people tend not to benefit, unless you work for the right part of Palantir Inc., at which point your income could double between now and 2021.

 

2018 – DEFENCE INNOVATION – ALLESLEV DRAFT REPORT – 078 STC 18 E



How to design a death trap

The Grenfell inquiry is still going on and the latest testimony, from Dr Barbara Lane, is not just an eye opener; it shows two elemental parts. The first is that the ‘stay put’ scenario could never have worked, the second is that the cladding itself had the additional issue of being set against combustible materials. That does not make the person who decided on the cladding innocent; it merely proves that the people behind it all failed in spectacular ways. The first part given is “Styrofoam core panels were installed between the new windows and around kitchen vents; ethylene propylene diene terpolymer was used around the new window frames; and polyurethane expanding foam was used to fill joints in the insulation and in gaps between new windows and walls – all combustible materials. She also found combustible polymeric foam above some windows, even though there was no evidence of it being specified, and polyisocyanurate foam that was not in the design”. This states that not only were there more combustible materials, there were additional combustible materials that were not even part of the design. So someone acted, someone approved those additional costs. Then we get the first killer. With “horizontal cavity barriers designed to stop fire spreading through the facade had wrongly been installed vertically. They feature an intumescent strip that is meant to expand and close the gap during a fire, but some of these barriers were installed facing into the existing concrete, rendering them useless. She said some of the required cavity barriers had simply not been installed around windows“, we see not merely a construction error: the very parts that would stop fires, or at least largely decrease their speed, were installed wrong, and now we see that the building had ‘vent columns‘ allowing the fire to reach maximum speed. At this point, we have issues with procurement, with the installation and with construction inspection.
Optionally, the architectural setting was wrong, which gives us a failing on nearly every level, from the council to the person telling the man with the drill what to do and where to do it. I think that this is a first for me, to see failure to this degree. The stay-put advice was basically a death sentence in 30 minutes. It is the additional “more than 100 fire doors inside Grenfell did not meet fire regulations” that shows the corridors would have been as deadly to stay put in as the apartments, in close to 30 minutes. She gives a few more points, but at this stage, what she gives out is that the killing blow would have been close to a given for those who remained inside beyond the first 15 minutes. The article ends with “The same compartmentalisation strategy was essential for firefighting internally, which relied on a working firefighting lift, protected lobbies, ways of getting water up the buildings, a protected space between the firefighting stair and the flats. All of these failed to one degree or another“. Now we see that Grenfell was a death-trap for tenants and firefighters alike; the fact that no firefighter died that day is a small miracle, to say the least.

So in all this, when we consider the Telegraph article a day earlier (a clear reason for a second Leveson), we see a different side. The article is a hatchet job by Hayley Dixon, a person who should not be allowed in journalism (a personal belief of mine due to this one article). So we get back to the title ‘Grenfell survivors question why it took 15 minutes for firefighters to tackle initial blaze‘, which Hayley Dixon published at 21:30 local time the previous day. Was this the result of writer’s block? Was this a mere emotional writing of 104 words to meet a deadline requirement? If so, how irresponsible is the editor? When we put the Telegraph article next to the Independent, the Guardian and the testimony of Dr Barbara Lane, we are confronted with an emotional push of some kind. You see, the setting we see now, the videos that are online and the pictures clearly show that there was nothing normal about the fire and that Grenfell was a constructed death-trap in the shape of a Roman candle. Additional views (from the Independent) gave us “One survivor reported that building’s dry risers – vertical pipes used by firefighters to distribute water to multiple levels of a building – were not working“, so in all this, how was the Telegraph article not merely a waste of space and existence?

This entire fish gets another flavour when we consider an earlier BBC article (at https://www.bbc.com/news/uk-40330789). In this we see “Four ministers – all from the Department for Communities and Local Government – received letters but did not strengthen the regulations. Ronnie King, a former chief fire officer who sits on the group, says the government has ignored repeated warnings about tower block safety. “We have spent four years saying ‘Listen, we have got the evidence, we’ve provided you with the evidence, there is clear public opinion towards this, you ought to move on this’,” said Mr King.” We would expect that at least some move would be made, and even as the cladding and other issues now showing would not have stopped anything, better regulations might at least have delayed the fire enough for people to reconsider getting out. So who gets to be on the front page? Yes, it is Liberal Democrat MP Stephen Williams – who was then a minister in the department – who replied: “I have neither seen nor heard anything that would suggest that consideration of these specific potential changes is urgent and I am not willing to disrupt the work of this department by asking that these matters are brought forward“. This can be countered by the BBC (at https://www.bbc.com/news/uk-40422922), where we see “London Fire Brigade warned all 33 councils about the potential risks of external cladding on tower blocks in May this year, the BBC has learned. It followed tests on panels from a high rise that suffered a fire last August. The insulation panels were made up of polystyrene and plywood, and tests concluded they were the likely cause of the fire spreading up the outside“, so there was clear evidence from May 2017 (after his ‘reign’), yet the issues had been clearly put forward in 2014 when he was there.
He remains in our sights when we realise that this had been going on since 2009, as it was highlighted at the coroner’s inquest into a fire at Lakanal House in Camberwell in 2009, which led to the deaths of six people, including three children. So at that point, the words of Liberal Democrat MP Stephen Williams become a statement of falsehood the moment he spoke them in 2014. When we hear ‘I am not willing to disrupt the work of this department by asking that these matters are brought forward‘, whilst there is a clear coroner’s inquest regarding 6 people, including 3 children, when did ‘disrupt the work of this department‘ become an accepted answer?

I am not sure if we could blame the London Fire Brigade for walking away in the future and letting 100% of London burn down; you know, they would not want to ‘disrupt any department‘ by caring, now would they?

The fact is just slightly too dark when we consider that there was ample evidence up to 9 years before the Grenfell blaze. If there is one positive, we might see a change where councils need the office of Dany Cotton, or the office of her previous post where she was the Director of Safety and Assurance at the London Fire Brigade, to sign off on any refurbishment before allowing it to happen. It would optionally stop every council from seeking a ‘short cut’ to adhere to the wishes of rich investors. I am mentioning this, because it will have to be said again and again that the refurbishment and cladding was added “a low-cost way of improving the front of the building – was chosen in part so that the tower would look better when seen from the conservation areas and luxury flats that surround North Kensington, according to planning documents, as well as to insulate it” (source: The Independent). So as luxury flat owners nearby thought Grenfell was too yucky, it ended up being upgraded from apartment building to Roman candle.

I believe that the testimony of Dr Barbara Lane is one of the most damaging to the council, the constructors and the decision makers in the refurbishment of Grenfell we have ever seen; the question will soon enough turn into: ‘how many death-traps are there in London?’ It is merely my personal view that there is a level of complacency in setting the economic values of London in a way that might be way too dangerous for the people living there. If we see these issues in North Kensington and Chelsea, what would we find if there was an actual serious look at a council like Islington? Islington is overcrowded, and it is growing as a sparkling area for socialites and professionals, so the visibility is high. Even as the London Metropolitan Police is working hard to lower the rising crime numbers, the impact of a Grenfell-like event in Islington would do more than merely burn a building and the people in there. Now, let’s also realise that Islington is nowhere near the worst; the high-rise situation there seems a lot better, yet the overcrowded part seems to give ‘rise’ to other considerations, and whilst we all focus on high-rises, there are other ways for fires to propagate. Another reason to raise Islington is that so far its housing strategy (2014-2019) looks nice (as all brochures do), yet we also see that house prices are close to 50% higher than the London average, so the damage is a lot bigger if things do go pear-shaped. I also raised it as I know it decently well, yet page 29 of the brochure, which gives us all the acts, strategies and legislation, gives no voice to the fire dangers. The Housing Act 2004 does get two mentions, ‘Consultation with fire and rescue authorities in certain cases‘ as well as ‘miscellaneous repeals etc. in relation to fire hazards‘, yet there is more. You see, even as the brochure might look less sexy by mentioning an issue like: “Depending on the type of property and how it is occupied some or all of the following will apply:

  • the Building Regulations 2010 Part B
  • Housing Health & Safety Rating System
  • The Smoke and Carbon Monoxide Alarm (England) Regulations 2015
  • The Regulatory Reform (Fire Safety) Order 2005

The issue we see with Grenfell is the lack of fire prevention focus. The Housing Strategy for Islington 2014-2019 shows a mere reference to the Housing Act 2004, yet housing strategy is geared much more towards tenancy and asset management, and in a place as overcrowded as Islington that could become a problem. Now, we understand that Grenfell is only a year old, yet there is additional evidence on several levels that this is an issue that had been going on since 2009, so even as we ‘brand’ Liberal Democrat MP Stephen Williams by his extremely poorly chosen words, he is not alone in not having a much larger fire safety focus. The question becomes: if the councils had been much stronger on fire prevention, would Grenfell have been prevented? My personal belief is that this is an absolute certainty. The failings that Dr Barbara Lane gave testimony on reflect failings on nearly every level, so if more levels were mandated to look at certain hazards, issues would have been brought to light (a personal belief). In this, London (not just Kensington and Chelsea) has a much larger workload to contend with, and these changes would require reflection on a multitude of levels in the coming year. Even as we accept that voices from Islington stated “Fire safety in Islington. We are the landlord/freeholder for over 35,000 households, and we take our responsibility for your safety very seriously“, we accept that this is a response to Grenfell, yet the housing strategy also shows that there was not enough focus in the past. One additional page in that brochure on certain (read: specific) hazards could have shown that the Islington council had that focus; we now merely see (read: expect) that this is not entirely the case.

London, and mind you a lot more metropolitan areas like it, will have to adjust their current course of actions and considerations when it comes to fire hazards, because we do not want the London population to wake up to the speculative sights shown below from a distance.

[image: Rotterdam 1940]

OR

[image: Hawaii 2012]


Filed under Law, Media, Politics

Ghost in the Deus Ex Machina

James Bridle is treating the readers of the Guardian to a spotlight event. It is a fantastic article that you must read (at https://www.theguardian.com/books/2018/jun/15/rise-of-the-machines-has-technology-evolved-beyond-our-control-?). Even as it starts with “Technology is starting to behave in intelligent and unpredictable ways that even its creators don’t understand. As machines increasingly shape global events, how can we regain control?” I am not certain that it is correct; it is merely a very valid point of view. This setting is being pushed even further by places like Microsoft Azure, Google Cloud and AWS; we are moving into new territories and the experts required have not been schooled yet. It is (as I personally see it) the consequence of next generation programming on the framework of cloud systems that have thousands of additional unused, or un-monitored, parameters (read: some of them mere properties), and the scope of these systems is growing. Each developer is making their own app-box and they are working together, yet in many cases hundreds of properties are ignored, giving us weird results. There is actually (from the description James Bridle gives) an early 90’s example, which is not the same, but it illustrates the event.

A program had window settings and sometimes there would be a ghost window. There was no explanation, and no one could figure out why it happened, because it did not always happen, but it could be replicated. In the end, the programmer had been lazy and had created a global variable with the identical name as a visibility property, and due to a glitch that setting got copied. When the system did a reset on the window, all but very specific properties were reset. You see, those elements should be either ‘true’ or ‘false’, and that was not the case: those elements had the initial value of ‘null’, yet the reset would not allow for that, so once given a reset they would not return to the ‘null’ setting but remain holding the last value they had. It was fixed at some point, but the logic remains: a value could not return to ‘null’ unless specifically programmed. Over time these systems got more intelligent and that issue has not returned; such is the evolution of systems. Now it becomes a larger issue: we have systems that are better, larger and in some cases isolated. Yet, is that always the issue? What happens when an error level surpasses two systems? Is that even possible? Now, most people will state that I do not know what I am talking about. Yet they forget that any system is merely as stupid as the maker allows it to be, so in 2010 Sha Li and Xiaoming Li from the Dept. of Electrical and Computer Engineering at the University of Delaware gave us ‘Soft error propagation in floating-point programs‘, which gives us exactly that. You see, the abstract gives us “Recent studies have tried to address soft errors with error detection and correction techniques such as error correcting codes and redundant execution. However, these techniques come at a cost of additional storage or lower performance. In this paper, we present a different approach to address soft errors.
We start from building a quantitative understanding of the error propagation in software and propose a systematic evaluation of the impact of bit flip caused by soft errors on floating-point operations“. We can translate this into ‘an option to deal with shoddy programming‘, which is not entirely wrong, but the essential truth is that hardware makers, OS designers and application makers all have their own error systems; each of them has a much larger system than any requires, and some overlap whilst some do not. The issue is optionally, speculatively, seen in ‘these techniques come at a cost of additional storage or lower performance‘. Now consider the greed-driven makers that do not want to sacrifice storage and will not hand over performance, not one way, not the other, but want a system that tolerates either way. Yet this still has a level one setting (Cisco joke) in which hardware is ruler, so the settings will remain, and it merely takes one third-party developer using some specific uncontrolled error, hit with automated assumption-driven slicing and dicing to avoid storage as well as performance; once given to the hardware, it will not forget, so now we have some speculative ‘ghost in the machine’, a mere collection of error settings and properties waiting to be interacted with. Don’t think that this is not in existence; the paper sheds light on this in part with: “some soft errors can be tolerated if the error in results is smaller than the intrinsic inaccuracy of floating-point representations or within a predefined range. We focus on analysing error propagation for floating-point arithmetic operations. Our approach is motivated by interval analysis. We model the rounding effect of floating-point numbers, which enable us to simulate and predict the error propagation for single floating-point arithmetic operations for specific soft errors.
In other words, we model and simulate the relation between the bit flip rate, which is determined by soft errors in hardware, and the error of floating-point arithmetic operations“. That I can illustrate with my earliest errors in programming (decades ago). With Borland C++ I got my first taste of programming and I was in assumption mode making my first calculation, which gave in the end: 8/4=2.0000000000000003; at that point (1991) I had no clue about floating-point issues. I did not realise that this was merely the machine and me not giving it the right setting. So now we have all learned that part, but we forget that all these new systems have their own quirks and hidden settings that we basically do not comprehend, as the systems are too new. This now all interacts with an article in the Verge from January (at https://www.theverge.com/2018/1/17/16901126/google-cloud-ai-services-automl); the title ‘Google’s new cloud service lets you train your own AI tools, no coding knowledge required‘ is a bit of a giveaway. Even when we see: “Currently, only a handful of businesses in the world have access to the talent and budgets needed to fully appreciate the advancements of ML and AI. There’s a very limited number of people that can create advanced machine learning models”, it is not merely that part; behind it were the makers of the systems and the apps that allow you to interface, and that is where we see the hidden parts that will not be uncovered for perhaps years or decades. That is not a flaw from Google, or an error in their thinking. It is the mere realisation of ‘a long road ahead if we want to bring AI to everyone‘; in light of that, the better programmers, the clever people and the mere wildcards who turn 180 degrees in a one-way street cannot be predicted, and there always will be one who does so, because they figured out a shortcut.
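Going back to the ghost-window anecdote a few paragraphs up: the tri-state reset bug it describes can be sketched in a few lines. This is purely an illustration in Python with invented names (the original was an early-90s windowing system), not the actual code:

```python
# A minimal sketch of the tri-state reset bug described above:
# a property that may be True, False, or 'null' (None), and a
# reset routine that only knows how to write booleans.

class Window:
    def __init__(self):
        self.visible = None   # tri-state: None means "never set"

def reset(window, last_value):
    # The bug: the reset clamps everything to a boolean, so the
    # 'null' state can never be restored once a reset happens.
    window.visible = bool(last_value)

w = Window()
assert w.visible is None        # freshly created: genuinely unset

reset(w, True)
assert w.visible is True        # fine so far

reset(w, None)
# We asked to return to "unset", but bool(None) is False:
assert w.visible is False       # the ghost: None never comes back
```

The point is exactly the one made above: once the reset runs, the value can never return to ‘null’ unless someone specifically programs that path.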

A small sidestep

When we consider risk-based thinking and development, we tend to think in opposition, because it is not the issue of risk, or the given of opportunity. We start from the flaw that we each see differently what constitutes risk. Even as the makers all think the same, the users do not always behave that way. For this I need to go back to the late 80’s, when I discovered that certain books in the Port of Rotterdam were cooked. No one had figured it out, but I recognised one part through my Merchant Naval education, the one rule no one looked at in those days; programmers just were not given that element. In a port there is one rule that computers could not comprehend in those days: the concept of ‘Idle Time’ cannot ever be linear. Once I saw that, I knew where to look. So when we get back to risk management issues, we see ‘An opportunity is a possible action that can be taken; we need to decide. So this opportunity requires we decide on taking action, and risk is something that actions enable to become an actual event, but it is ultimately outside of your direct control‘. Now consider that risk changes with the tide at a seaport, but we forget that in opposition to a king tide there is also at times a neap tide, and a ‘supermoon’ is an event that makes the low tide even lower. So now we see the risk of getting beached for up to 6 hours, because the element was forgotten. The fact that it can happen once every 18 months makes the risk low and it does not impact everyone everywhere, but that setting shows that the dangers (read: risks) of events are intensified when a clever person takes a shortcut. So when NASA gives us “The farthest point in this ellipse is called the apogee. Its closest point is the perigee. During every 27-day orbit around Earth, the Moon reaches both its apogee and perigee.
Full moons can occur at any point along the Moon’s elliptical path, but when a full moon occurs at or near the perigee, it looks slightly larger and brighter than a typical full moon. That’s what the term “supermoon” refers to“, we see why the programmer needed a space monkey (or tide tables). When we consider the shortcut, he merely needed them once every 18 months; in the life cycle of a program that means the risk appears only 2-3 times during the lifespan of the application. So tell me, how many programmers would have taken the shortcut? This is the setting we now see in optional machine learning: that part accepted, and a pragmatic ‘let’s keep it simple for now‘, which we all could have accepted in this. But the issue comes when we combine error flags with shortcuts.
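The tide shortcut above can be sketched as a toy model. All names and numbers below are invented for illustration; this is a sketch of the pragmatic omission, not a real tide model:

```python
# A hypothetical sketch of the shortcut described above: a tide
# model that simply ignores the rare perigean ("supermoon") low
# tide, because it only happens roughly once every 18 months.

def predicted_low_tide(day, include_supermoon=False):
    base_low = -1.0                      # metres relative to datum
    # roughly once every ~18 months (~547 days) in this sketch
    if include_supermoon and day % 547 == 0:
        return base_low - 0.3            # a noticeably lower low tide
    return base_low

# The shortcut model and the full model agree almost every day...
diffs = [day for day in range(1, 2000)
         if predicted_low_tide(day) != predicted_low_tide(day, True)]
print(diffs)   # → [547, 1094, 1641]: the few days it silently gets wrong
```

Over five-plus years the shortcut is wrong only a handful of times, which is precisely why a clever programmer takes it, and precisely why the ship ends up beached on those days.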

So we get to the Guardian with two parts. The first: “Something deeply weird is occurring within these massively accelerated, opaque markets. On 6 May 2010, the Dow Jones opened lower than the previous day, falling slowly over the next few hours in response to the debt crisis in Greece. But at 2.42pm, the index started to fall rapidly. In less than five minutes, more than 600 points were wiped off the market. At its lowest point, the index was nearly 1,000 points below the previous day’s average“, the second being “In the chaos of those 25 minutes, 2bn shares, worth $56bn, changed hands. Even more worryingly, many orders were executed at what the Securities Exchange Commission called “irrational prices”: as low as a penny, or as high as $100,000. The event became known as the “flash crash”, and it is still being investigated and argued over years later“. In 8 years the algorithms and the systems have advanced and the original settings no longer exist. Yet the entire setting of error flagging and the use of elements and properties is still on the board; even as they evolved and the systems became stronger, new systems interacted with much faster and stronger hardware, changing the calculating events. So when we see “While traders might have played a longer game, the machines, faced with uncertainty, got out as quickly as possible“, they were uncaught elements in a system that was truly clever (read: had more data to work with), and as we are introduced to “Among the various HFT programs, many had hard-coded sell points: prices at which they were programmed to sell their stocks immediately. As prices started to fall, groups of programs were triggered to sell at the same time. As each waypoint was passed, the subsequent price fall triggered another set of algorithms to automatically sell their stocks, producing a feedback effect“, we get the mere realisation that the machine wins every time in a man versus machine contest, but only toward the calculations.
The initial part I mentioned regarding really low tides was ignored; where a person realises that at some point the tide goes back up, no matter what, the machine never learned that part, because the ‘supermoon cycle’ was avoided due to pragmatism, and we see that in the Guardian article with: ‘Flash crashes are now a recognised feature of augmented markets, but are still poorly understood‘. The reason remains speculative, but what if it is not the software? What if there is merely one set of definitions missing, because the human factor auto-corrects for that through insight and common sense? I can relate to that by setting the ‘insight’ that a supermoon happens perhaps once every 18 months against the common sense that it returns to normal within a day. Now, are we missing out on the opportunity of using a neap tide as an opportunity? It is merely an opportunity if another person fails to act on such a neap tide. Yet in finance it is not merely a neap tide; it is an optional artificial wave that can change the waters when one system triggers another, and in nanoseconds we have no way of predicting it, merely, over time, the option to recognise it at best (speculatively speaking).
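The hard-coded sell-point feedback the Guardian describes can be illustrated with a toy cascade. All prices and numbers below are invented; real HFT systems are vastly more complex:

```python
# A toy sketch of the sell-point feedback effect quoted above.
# Each program dumps its stock once the price falls to its
# trigger, and each dump pushes the price lower still, tripping
# the next program's trigger in turn.

def flash_crash(price, triggers, impact_per_sale=5.0):
    fired = []
    for trigger in sorted(triggers, reverse=True):
        if price <= trigger:             # sell point reached
            fired.append(trigger)
            price -= impact_per_sale     # selling pressure moves price
    return price, fired

# A modest 2-point dip below the first trigger...
price, fired = flash_crash(price=98.0, triggers=[99, 97, 93, 88, 80])
print(price, fired)   # → 73.0 [99, 97, 93, 88, 80]: a 25-point cascade
```

None of the individual programs is irrational on its own terms; the crash emerges from their interaction, which is exactly why such events are "recognised but poorly understood".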

We see a variation of this in the Go-game part of the article. When we see “AlphaGo played a move that stunned Sedol, placing one of its stones on the far side of the board. “That’s a very strange move,” said one commentator“, you see it opened us up to something else. So when we see “AlphaGo’s engineers developed its software by feeding a neural network millions of moves by expert Go players, and then getting it to play itself millions of times more, developing strategies that outstripped those of human players. But its own representation of those strategies is illegible: we can see the moves it made, but not how it decided to make them“, that is where I personally see the flaw. You see, it did not decide; it merely played every variation possible, the ones a person will never consider. Because it played millions of games, which at 2 games a day represents over 1,370 years of play, the computer ‘learned’ that the human never countered ‘a weird move’ before; some can be corrected for, but that one offers opportunity, whilst at the same time exposing its opponent to additional risks. Now it is merely a simple calculation and the human loses. And as every human player lacks the ability to play for a millennium, the hardware wins, always, after that. The computer never learned desire, or human time constraints; as long as it has energy it never stops.

The article is amazing and showed me a few things I only partially knew, and one I never knew. It is an eye opener in many ways, because we are at the dawn of advanced machine learning, and as soon as quantum computing is an actual reality we will get systems with the setting that we see in the Upsilon meson (ϒ). Leon Lederman discovered it in 1977, so now we have a particle that is not merely off or on; it can be: null, off, on or both. An essential setting for something that will be close to true AI, a new way for computers to truly surpass their makers and an optional tool to unlock the universe, or perhaps merely a clever way to integrate hardware and software on the same layer?

What I got from the article is the realisation that the entire IT industry is moving faster and faster and most people have no chance to stay up to date with it, even when we look at publications from only 2 years ago. Those systems have already been surpassed by players like Google, reducing storage to a mere cent per gigabyte, and that is not all; the media and entertainment industries are offered great leaps too. When we consider the partnership between Google and Teradici we see another path. When we see “By moving graphics workloads away from traditional workstations, many companies are beginning to realize that the cloud provides the security and flexibility that they’re looking for“, we might not see the scope of all this. So the article (at https://connect.teradici.com/blog/evolution-in-the-media-entertainment-industry-is-underway) gives us “Cloud Access Software allows Media and Entertainment companies to securely visualize and interact with media workloads from anywhere“, which might be the ‘big load’ but it actually is not. This approach gives light to something not seen before. When we consider makers of software like Q Research Software and Tableau (business intelligence and analytics) we see an optional shift: under these conditions, a clever analyst with merely a netbook and a decent connection can set up the framework for producing dashboards and result presentations, allowing that analyst to produce the results and presentations for the bulk of all Fortune 500 companies in a mere day, making 62% of that workforce obsolete. In addition we see: “As demonstrated at the event, the benefits of moving to the cloud for Media & Entertainment companies are endless (enhanced security, superior remote user experience, etc.). And with today’s ever-changing landscape, it’s imperative to keep up.
Google and Teradici are offering solutions that will not only help companies keep up with the evolution, but to excel and reap the benefits that cloud computing has to offer“. I take it one step further: as the presentation to stakeholders and shareholders is about telling ‘a story’, the ability to do so and adjust the story on the go allows for a lot more. The question is no longer the setup of such systems; it is reduced to correctly vetting the data used, and the moment that falls away we will get a machine-driven presentation of settings the machine need no longer comprehend, and as long as the story is accepted and swallowed, we will not question the data. A mere presented grey scale with the extremes filtered out. In the end we all signed up for this, and the status quo of big business remains stable and unchanging no matter what the economy does in the short run.

Cognitive thinking from the AI through the use of data, merely because we can no longer catch up, and in that we lose the reasoning and comprehension of data at the high levels we should have.

I wonder as a technocrat how many victims we will create in this way.

 


Filed under Finance, IT, Media, Science

The Sleeping Watchdog

Patrick Wintour, the Guardian’s diplomatic editor, gave us merely a few hours ago [update: yesterday, 13 minutes before an idiot with a bulldozer went through the fibre optic cable] the news on the OPCW. So when we see “a special two-day session in late June in response to Britain’s call to hand the body new powers to attribute responsibility for chemical weapons attacks“, what does that mean? You see, the setting is not complex, it should be smooth sailing, but is it?

Let’s take a look at the evidence, most of it from the Guardian. I raised issues as early as March 2018 with ‘The Red flags‘ (at https://lawlordtobe.com/2018/03/27/the-red-flags/): we see no evidence of Russian handling, we see no evidence on the delivery, merely a rumour that ‘More than 130 people could have been exposed‘ (‘could’ being the operative word) and in the end, no fatalities; the target survived. Whilst a mere silenced 9mm solution from a person doing a favour for Russian businessman Sergey Yevgenyevich Naryshkin would have done the trick with no fuss at all. And in Russia, you can’t even perceive the length of the line of Russians hoping to be owed a favour by Sergey Yevgenyevich Naryshkin. In addition, all these months later we still have not seen any conclusive evidence of ANY kind that it was a Russian state based event. Mere emotional speculations on ‘could’, ‘might be‘ as well as ‘expected‘. So where do we stand?

A little later in April, we saw in the article ‘Evidence by candlelight‘ (at https://lawlordtobe.com/2018/04/04/evidence-by-candlelight/) the mere conclusion ‘Porton Down experts unable to verify precise source of novichok‘; not only could the experts not determine the source (the delivery device), it also gives weight to the lack of evidence that it was a Russian thing. Now, I am not saying that it was NOT Russia; we merely cannot prove that it was. In addition, I was able to find several references to a Russian case involving Ivan Kivelidi and Leonard Rink in 1995, whilst the so-called humongous expert named Vil Mirzayanov stated ““You need a very high-qualified professional scientist,” he continued. “Because it is dangerous stuff. Extremely dangerous. You can kill yourself. First of all you have to have a very good shield, a very particular container. And after that to weaponize it – weaponize it is impossible without high technical equipment. It’s impossible to imagine.”” I do not oppose that, because it sounds all reasonable and my extended brain cells on chemical weapons have not been downloaded yet (I am still on my first coffee). Yet in all this the OPCW setting in 2013 was: “Regarding new toxic chemicals not listed in the Annex on Chemicals but which may nevertheless pose a risk to the Convention, the SAB makes reference to “Novichoks”. The name “Novichok” is used in a publication of a former Soviet scientist who reported investigating a new class of nerve agents suitable for use as binary chemical weapons. The SAB states that it has insufficient information to comment on the existence or properties of “Novichoks”“. I can accept that the OPCW is not fully up to speed, yet the information from 1995, 18 years earlier, gave the setting: ““In 1995, a Russian banking magnate called Ivan Kivelidi and his secretary died from organ failure after being poisoned with a military grade toxin found on an office telephone. A closed trial found that his business partner had obtained the substance via intermediaries from an employee of a state chemical research institute known as GosNIIOKhT, which was involved in the development of Novichoks“, which we got from the Independent (at https://www.independent.co.uk/news/uk/crime/uk-russia-nerve-agent-attack-spy-poisoning-sergei-skripal-salisbury-accusations-evidence-explanation-a8258911.html). So when you realise these settings, we need to realise that the OPCW is flawed on a few levels. It is not the statement “the OPCW has found its methods under attack from Russia and other supporters of the Syrian regime“; the mere fact of what we see regarding Novichoks implies that the OPCW is a little out of its depth, and their own documentation implies this clearly (as seen in previous blog articles). I attached one of those documents in the article ‘Something for the Silver Screen?‘ (at https://lawlordtobe.com/2018/03/17/something-for-the-silver-screen/) a mere three months ago; there have been several documents, all out in the open, that give light to a flawed OPCW. So even as we accept ‘chemist says non-state actor couldn’t carry out attack‘, the fact that it did not result in fatalities gives us that it actually might be a non-state action; it might not be an action by any ‘friend’ of Sergey Yevgenyevich Naryshkin or Igor Valentinovich Korobov. These people cannot smile, not even in their official photos. No sense of humour at all, and they tend to be the people who have a very non-complementary view on failure. So we are confronted not merely with the danger of Novichoks, but also with the fact that they are very likely in non-state hands.
The fact that there is no defence, not the issue of the non-fatalities, but the fact that the source could not be determined, is the dangerous setting, and even as we hold nothing against Porton Down, the 18-year gap shown by the OPCW implies that the experts relied on by places like Porton Down are not available, which changes the landscape by a lot, and whilst many will wonder how that matters: that evidence could be seen as important when we reconsider the chemical attacks in Syria (Ghouta, 21st August 2013). Not only did the US sit on their hands, it is now not entirely impossible that they did not have the skills at their disposal to get anything done. Even as a compound like Sarin is no longer really a mystery, the setting we saw then gives us the other part. The Associated Press giving us at the time “anonymous US intelligence officials as saying that the evidence presented in the report linking Assad to the attack was “not a slam dunk.”” is one part; the fact that, with all the satellites looking there, there is no way to identify the actual culprit is an important part. You see, we could accept that the Syrian government was behind this, but there is no evidence; no irrefutable fact was ever given. That implies that when it comes to delivery systems, there is a clear gap, not merely for Novichoks, making the entire setting a lot less useful. In this, the website of the OPCW (at https://www.opcw.org/special-sections/syria-and-the-opcw/) is partial evidence. When we see “A total of 14 companies submitted bids to undertake this work and, following technical and commercial evaluation of the bids, the preferred bidders were announced on 14th February 2014. Contracts were signed with two companies – Ekokem Oy Ab from Finland, and Veolia Environmental Services Technical Solutions in the USA” in light of the timeline, it implies that there was no real setting and one was implemented only after Ghouta; I find that part debatable and not reassuring.
In addition, the fact-finding mission was not set up until 2014. This is an issue, because one should have been set up the day after Ghouta; even if nothing had been available and the status had remained idle (for very valid reasons), the fact that the mission was not set up until 2014 gives light to even longer delays. In addition, we see a part that carries no blame for the OPCW, the agreement “Decides further that the Secretariat shall: inspect not later than 30 days after the adoption of this decision, all facilities contained in the list referred to in paragraph 1(a) above;“, perfect legal (read: diplomacy driven) talk giving the user of those facilities 30 days to get rid of the evidence. Now, there is no blame on the OPCW in any way, yet were these places not monitored by satellites? Would the visibility of increased traffic and activities not have given light to the possible culprit in all this? And when we look at the paragraph 1(a) part and we see: “the location of all of its chemical weapons, chemical weapons storage facilities, chemical weapons production facilities, including mixing and filling facilities, and chemical weapons research and development facilities, providing specific geographic coordinates;“, is there not a decent chance (if the Syrian government was involved) that ‘all locations‘ would be seen as ‘N-1‘, with the actual fabrication location conveniently missing from the list? #JustSaying

It seems to me that if this setting is to be more (professional is the wrong word) capable of being effective, a very different setting is required. You see, that setting becomes very astute when we realise that non-state actors are currently on the table; the danger of a lone wolf getting creative is every bit as important to the equation. The OPCW seems to be in an ‘after the fact‘ setting, whilst the intelligence community needs an expert who supports their own experts in a pro-active setting, not merely the data mining part, but the option to see flagged chemicals that could be part of a binary toxic setting. That requires a different data scope, and here we see the dangers when we realise that an ‘after the fact‘ setting with an 18-year gap missing the danger is expensive and, equally, ‘useless’ would be the wrong word, but ‘effective’ it is not; too much evidence points at that. For that we need to see that their mission statement is to ‘implement the provisions of the Chemical Weapons Convention (CWC) in order to achieve the OPCW’s vision of a world that is free of chemical weapons and of the threat of their use‘, yet when we look at the CWC charter we see: ‘The Convention aims to eliminate an entire category of weapons of mass destruction by prohibiting the development, production, acquisition, stockpiling, retention, transfer or use of chemical weapons by States Parties. States Parties, in turn, must take the steps necessary to enforce that prohibition in respect of persons (natural or legal) within their jurisdiction‘, which requires a pro-active setting, and that is definitely lacking from the OPCW, raising the issue of whether their mandate is one of failure. That requires a very different scope, different budgets and above all a very different set of resources available to the OPCW, or whoever replaces the OPCW, because that part of the discussion is definitely not off the table for now.
The Salisbury event and all the available data seems to point in that direction.

 


Filed under Media, Politics, Science

Bang Bang Common Sense

Jason Wilson brought to light an article (at https://www.theguardian.com/world/2018/jun/03/us-senate-hopeful-washington-joey-gibson) that made me think. You see, I am pragmatic and pro-guns; I never hid that. Yet in equal measure I have an issue with people bringing their guns to a night club, especially when they are not members of organised crime. So, when you do a dancing backflip and accidentally shoot a person as you pick up your gun, FBI agent or not, it raises questions.

This is not me having a go at that officer; there might be a very valid reason for him to have had his piece on him, but doing backflips (impressive as they may be) was not the brightest thought to be having. Yet that is not what this will be about. You see, Joey Gibson, the far-right Republican Senate candidate, is advocating what I call a scenario too dangerous for words. With: “That’s why we’re doing it, there’s people dying. Gun-free zones disgust me because we’re not protecting the kids on the campus. People look at it backwards“, a dangerous precedent is set. Those who do not know, or lack the proper skill to counter an armed attack, end up dead, handing additional weapons and ammunition to the attackers. I think we all realise that having an armed response team in any university might not be the worst idea. In that, we need to realise that there are trained professionals from the Army, Marines, Navy and police, now retired, who might be more than willing to be there, making a few dollars and being on hand when there is real trouble. In the first hour it could lower or even prevent fatalities. Removing the university’s gun-free status and letting anyone have a go is not just stupid; it is very dangerous, and that approach will increase casualties by a lot. The moment these extreme thinkers or mental health cases realise that the university has additional guns and ammunition up for grabs, they might just take the leap with one gun and one clip, which is a realistic and serious danger. Until you have shot a person, or are a second away from shooting someone, you do not know whether you have what it takes, and the group that does not will be arming the attackers. The second consideration is weapon skill. You might have shot at nice targets on the range, or at puppets standing still, but once they are moving, being accurate becomes far too unpredictable.
So here I am, as a virtual supporter of the NRA stating that this setting is way too dangerous to consider. I never had any kids, but I realise the need to protect the next generation and letting everyone armed on the university makes the danger worse, not safer.

Yet the issue is larger. You see, Joey Gibson is not some right-wing extremist. As a Japanese American (or is that American Japanese?) we see that he denounces white supremacists, advocates peaceful actions and is outspokenly anti-antifa (the anti-fascist movement). Most of this was seen last year (at https://www.washingtontimes.com/news/2017/sep/3/patriot-prayer-free-speech-group-urges-supporters-/). It was Valerie Richardson who gave the goods in the Washington Times. The issue becomes murkier when we see “So many people were so disgusted about how they treated us. The liberals were literally standing around with peace signs and love signs while antifa is just yelling and cussing and beating the crap out of us and pepper-spraying us“, which gets us to the question: why would anyone pepper-spray a person advocating peace? Even as the article gives us a lot, I think we are missing out; a better, in-depth piece by a writer (Valerie or someone else) who would actually do an in-depth profile of Joey Gibson is warranted, especially if that person is running for the Senate. It seems that the one person giving a decent and perhaps the most valid view was Daveed Walzer Panadero, who gave us “urging antifa to stop trying to silence Mr. Gibson and “get that man a podium and a mike.”“ That makes sense, because if we do not know what he stands for, you cannot make up your registered voting mind.

Yet as we go back to the article, where exactly is he plotting? So far he seems to be out in the open. Yet I also acknowledge the setting we see with: “Speakers with handguns or rifles addressed a small crowd in McGraw Square, at the heart of a busy shopping district. At the other side of the square, around 10 members of an armed leftist group, the Puget Sound John Brown Gun Club, stood watching for what their spokesman called a “known white supremacist element”. They carried AR-15s and side arms“, it is a dangerous setting! You see, it only takes one person to lose his/her cool and we end up in a setting where 20 rifles will be used and there is actually zero chance of innocent bystanders not getting hurt. As a pro gun person, I recognise that danger and I see levels of irresponsibility that are way too high, because the trial that follows will be all about ‘the blame game’ and there will be no one around able to tell who fired the first shot; in all likelihood that person would be deceased, along with optionally dozens of others.

The two-sided knife is that gun banning will not work, not ever (those who say it will in America are plain nuts). The open gun policy is equally dangerous, and until we recognise the fact that guns do not kill people, people kill people, this situation will not get better. As I wrote before, until the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) gets a real injection of resources and funds, this situation will never ever improve. In that regard, Joey Gibson can preach and pray all he likes, yet the setting of no gun-free zones is just too dangerous; that alone might defeat his bid for the Senate or Congress. You see, as I discussed last February with ‘United they grow‘ (at https://lawlordtobe.com/2018/02/22/united-they-grow/), as well as ‘In continuation of views‘ (at https://lawlordtobe.com/2018/02/23/in-continuation-of-views/), the issue was never the NRA; in a much larger setting the issue is with the ATF and the media, as well as the woolly people proclaiming that the NRA is killing their children. The massive issue is that the ATF cannot get anything done due to a lack of funds and resources. The largest agency that can do something is not allowed to do anything, and the people remain ignorant, deaf and blind to that part of the equation, which implies that not only are things not changing for the better, the view that Joey Gibson is giving us is that no actual progress will be possible, adding to the no gun-free zones debacle; it is just too dangerous. Recognising that one element solves a lot of issues and could make changes for the better, yet the ATF is bound by a budget that is 10 years old, resources closer to 15 years outdated and an absence of clear leadership that goes back to before the Obama administration, so why would progress ever be made?

So by the time we get to the explosives directive of the ATF, we might wonder how many buildings in New York and Los Angeles are still standing at present. Is it not interesting that we are kept in the dark on that setting?

Yet, when we get back to Joey Gibson, there is one side that most were not aware of, and it is awesome that Jason Wilson gives us that view. With “Washington is seen as a Democratic state, but that impression conceals a deep divide between urban and rural, west and east, characteristic of west coast states. Money, power and population are centred on Seattle, which is often resented by rural conservatives in the state’s eastern half. Gibson’s rhetoric has always been stridently critical of the liberal cities. In Seattle, he said the city “despises patriots” and “will spit in your face for loving the constitution”“, which most (including me) would not have been aware of. So when we consider that King and Pierce county represent a third of the entire state, we see another picture entirely; oh, and by the way, these two are overwhelmingly Democratic. Even as we might accept Sightline on ‘follow the money‘ (at http://www.sightline.org/2016/10/11/following-the-money-in-washington-state-elections-part-1/), as it shows us issues on campaign funding, it does not give us the influence that the wealthy have in some districts in the east; the results say that this is not the case, yet there is an issue when we look at the map (at https://www.nytimes.com/elections/results/washington). The speculated issue is that rural Washington State is left to fend for itself. We can understand that logic requires the funds to be set on the coastal area where the cities are, but when we see the Yakima Herald (at http://www.yakimaherald.com/news/local/with-percent-in-program-food-stamp-cuts-could-hit-yakima/article_c3fe8d18-429e-11e7-9396-67c7dd7bbd33.html), we see that the cuts are rougher and still in place. That sets the stage for people like Joey Gibson to take the stage, and his view does not imply that he is extreme in his thinking, yet the setting of inequality is a much larger issue and it does set a stage that tends to lean toward extreme right thinking.
Anti-government thinking in a setting where places like Seattle, Vancouver and Bellingham are taken care of, whilst the rest is largely ignored, is not a healthy way to move forward. The Sightline view on corporate sponsoring merely increases the perception of inequality. That is where (as I personally see it) the right wing foundation comes from, even as it implies that Joey Gibson has no real chance. He is up against Maria Cantwell, who has shown herself to be pro-business, a successful job creator, and who stopped Arctic drilling, which makes her the additional sweetheart of the green parties. As a resident of Snohomish county and being pro-business, she has funding from King, Thurston and Clark County on her side, which is almost a third of her state. The pro-business part should also give her Bellingham and, if done correctly with the right agreements, should deliver Spokane to her, and at that point it is pretty much game over for Joey Gibson. So even as we see ‘Joey Gibson and plots’, the setting in Washington State is not ideal for him. Apart from the mere common sense that his idea is not one that will work, there will be decreased safety from his gunpoint of view and that will cost him votes as well, especially when one piece of evidence is shown that children would be endangered by his viewpoint, an issue that will come up with a certainty of close to 100%.

I like the approach he took. Not from the pro-gun point, but from the mere common sense that the installation of no gun-free zones is more than likely to be the start of more casualties. You see, the firearms death rate is low in Washington State, in the lowest tier of 3.4-9 per 100,000. Washington State is exactly on the 9 border with 686 casualties. It only takes one event to put them in the 9.1-11.0 per 100,000 range, which takes the entire state to a higher tier, so one event and it is game over for Joey Gibson (source: CDC). In addition, the Washington State health services also give us that the 2008-2010 data shows 585 firearms casualties, of which only 119 were homicide, 9 were unintentional and the largest group was suicide with 455. In that regard gun banning would not bring any significant change, because when there is no gun, there will still be the opportunity of razors, sleeping tablets and a bathtub, or the three in combination with a nice soothing filled bathtub. So that will still happen one way or the other; considering that it is on par with motor vehicle crashes (both 8.6 per 100,000) gives additional weight to gun banning not making a difference in the state. Yet the Joey Gibson change is very likely to impact that in a very negative way, where he ends up defeating himself. The direct solution is also seen here: if the ATF had done their job (with proper resources and funding available) there is every chance that the suicide rate would have been positively influenced, and as that side is 77% of the firearms fatalities, a chunk of it could have been prevented as assistance to overcome mental hardship was given. Is that not an interesting overlooked fact? And it is not the only one, there are plenty more where that came from, fatalities all preventable by giving the ATF the right tools, resources and staff members.
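For those who want to verify those numbers, the arithmetic is straightforward; a quick sketch (the state population is back-derived from the CDC rate itself, so it is an approximation, not a census figure):

```python
# Rough check of the firearms figures cited above (CDC tiers, WA state).
# The state population is back-derived from the 9-per-100,000 rate,
# so it is an approximation, not an official census figure.

casualties = 686            # firearms deaths putting WA on the 9.0 tier border
rate_per_100k = 9.0
population = casualties / rate_per_100k * 100_000
print(f"implied population: {population:,.0f}")

# 2008-2010 Washington State health services breakdown
total, homicide, unintentional, suicide = 585, 119, 9, 455
print(f"suicide share: {suicide / total:.0%}")   # prints "suicide share: 78%"
```

The suicide share rounds to 78%, which is where the "77% of the firearms fatalities" in the text comes from.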



Filed under Finance, Media, Politics

Data illusions

Yesterday was an interesting day for a few reasons; one of the primary reasons was an opinion piece in the Guardian by Jay Watts (@Shrink_at_Large). Like many articles, I considered it to be in opposition, yet when I reread it, this piece had all kinds of hidden gems and I had to ponder a few items for an hour or so. I love that! Any piece, article or opinion that makes me rethink my position is a piece well worth reading. So this piece called ‘Supermarkets spy on them now‘ (at https://www.theguardian.com/commentisfree/2018/may/31/benefits-claimants-fear-supermarkets-spy-poor-disabled) has several sides that require us to think and rethink issues. As we see a quote like “some are happy to brush this off as no big deal” we identify with too many parts; to me and to many it is just that, no big deal, but behind the issues are secondary issues that are ignored by the masses (en masse as we might giggle), yet the truth is far from nice.

So what do we see in the first as primary and what is behind it as secondary? In the first we see the premise “if a patient with a diagnosis of paranoid schizophrenia told you that they were being watched by the Department for Work and Pensions (DWP), most mental health practitioners would presume this to be a sign of illness. This is not the case today.” It is not whether this is true or not; it is not a case of watching, being a watcher or even watching the watcher. It is what happens behind it all. So, when we recollect that dead dropped donkey called Cambridge Analytica, which was all based on interacting and engaging on fear, consider what IBM and Google are able to do now through machine learning. This we see in an addition to a book from O’Reilly called ‘The Evolution of Analytics‘ by Patrick Hall, Wen Phan, and Katie Whitson. Here we see the direct impact of programs like SAS (Statistical Analysis System) in the application of machine learning; we see this on page 3 of Machine Learning in the Analytic Landscape (not a page 3 of the Sun, by the way). Here we see for the government “Pattern recognition in images and videos enhance security and threat detection while the examination of transactions can spot healthcare fraud“. You might think it is no big deal, yet you are forgetting that it is more than the so-called implied ‘healthcare fraud‘. It is the abused setting of fraud in general and the eagerly awaited setting for ‘miscommunication’, whilst the people en masse are now set in a wrongly categorised world, a world where assumption takes control and scores of people are pushed into defending their actions, an optional change towards ‘guilty until proven innocent’, whilst those making assumptions are clueless on many occasions, yet now believe that they know exactly what they are doing. We have seen these kinds of bungles impact thousands of people in the UK and Australia.
It seems that Canada has a better system, where every letter with the content ‘I am sorry to inform you, but it seems that your system made an error‘ tends to overthrow such assumptions (Yay for Canada today). So when we are confronted with: “The level of scrutiny all benefits claimants feel under is so brutal that it is no surprise that supermarket giant Sainsbury’s has a policy to share CCTV “where we are asked to do so by a public or regulatory authority such as the police or the Department for Work and Pensions”“, it is not merely the policy of Sainsbury’s, it is what places like the Department for Work and Pensions are going to do with machine learning and their version of classifications, whilst the foundation of true fraud is often not clear to them. So you want to set up a system without clarity and hope that the machine will constitute learning through machine learning? It can never work; the evidence is that the initial classification of any person in a fluidic setting alters under the best of conditions. Such systems are not able to deal with the chaotic life of any person not in a clear lifestyle cycle: people on pensions (trying to merely get by), as well as those who are physically or mentally unhealthy. These are merely three categories where all kinds of cycles of chaos tend to intervene with daily life. Those are now shown to be optionally targeted with not just a flawed system, but with a system where the transient workforce using those methods is unclear on what needs to be done, as the need changes with every political administration. A system under such levels of basic change is too dangerous to get linked to any kind of machine learning. I believe that Jay Watts is not misinforming us; I feel that even the writer here has not yet touched on many unspoken dangers.
There is no fault here by the one who gave us the opinion piece. I personally believe that the quote “they become imprisoned in their homes or in a mental state wherein they feel they are constantly being accused of being fraudulent or worthless” is incomplete, yet the setting I refer to is mentioned at the very end. You see, I believe that such systems will push suicide rates to an all-time high. I do not agree with “be too kind a phrase to describe what the Tories have done and are doing to claimants. It is worse than that: it is the post-apocalyptic bleakness of poverty combined with the persecution and terror of constantly feeling watched and accused“. I believe it to be wrong because this is a flaw on both sides of the political aisle. Their state of inaction for decades forced the issue out, and as the NHS is out of money and is not getting any, the current administration is trying to find cash in any way that it can, because the coffers are empty, which now gets us to a BBC article from last year.

At http://www.bbc.com/news/election-2017-39980793, we saw “A survey in 2013 by Ipsos Mori suggested people believed that £24 out of every £100 spent on benefits was fraudulently claimed. What do you think – too high, too low?
Want to know the real answer? It is £1.10 for every £100
“. That is the dangerous political setting as we should see it: the assumption and belief that 24% is fraud, when it is more realistic that 1% might be the actual figure. Let’s not be coy about it, because out of £172.3bn a 1% amount still remains a serious amount of cash, yet when you set it against the UK population the amount becomes a mere £25 per person; it merely takes one prescription to get to that amount, one missed on the government side and one wrongly entered on the patient’s side, and we are there. Yet in all that, how many prescriptions did you, the reader, require in the last year alone? When we get to that nitty-gritty level we are confronted with the task where machine learning will not offer anything but additional resources to double check every claimant and offence. Now, we should all agree that machine learning and analyses will help in many ways, yet when it comes to ‘Claimants often feel unable to go out, attempt voluntary work or enjoy time with family for fear this will be used against them‘ we are confronted with a new level of data, and when we merely look at the fear of voluntary work or being with family, we need to consider what we have become. So in all this we see a rightful investment into a system that in the long run will help automate all kinds of things and help us see where governments failed their social systems; we also see a system that costs hundreds of millions, to look into an optional 1% loss, which at 10% of the losses might make perfect sense. Yet these systems are flawed from the very moment they are implemented, because the setting is not rational, not realistic, and in the end will bring more costs than anyone has considered from day one.
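The arithmetic behind those figures is easy to check; a quick sketch (the UK population of roughly 66 million is my assumption, which is why the per-person figure lands near, rather than exactly on, £25):

```python
# Back-of-envelope check of the benefits fraud figures from the BBC piece.
# Assumption: a UK population of roughly 66 million, which is why the
# per-person figure lands near, not exactly on, the GBP 25 in the text.

total_benefits = 172.3e9      # total benefits spend, in GBP
perceived = 24 / 100          # what the Ipsos Mori respondents believed
actual = 1.10 / 100           # the BBC figure: GBP 1.10 per GBP 100
population = 66e6

fraud_at_1pct = total_benefits * 0.01
print(f"1% of the budget: GBP {fraud_at_1pct / 1e9:.2f}bn")
print(f"per person: GBP {fraud_at_1pct / population:.0f}")
print(f"perception vs reality: {perceived / actual:.0f}x overestimate")
```

Even a 1% loss is still well over £1.7bn in absolute terms, yet the public perception overstates the actual rate by a factor of roughly twenty.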
So in the setting of finding ways to justify the 2015 ‘The Tories’ £12bn of welfare cuts could come back to haunt them‘, it will not merely fail, it will add £1 billion in costs of hardware, software and resources, whilst not getting the £12 billion in workable cutbacks. Where exactly was the logic in that?

So when we are looking at the George Orwell edition of ‘Twenty Eighteen‘, we all laugh and think it is no great deal, but the danger is actually twofold. The first I used and taught to students; it gets us the loss of choice.

The setting is that a supermarket needs to satisfy the needs of its customers, and based on the survey they have, they will keep items in a category (lollies for example) that are rated ‘fantastic value for money‘ and ‘great value for money‘, or the top 25th percentile of the products, whichever is the largest. So in the setting with 5,000 responses, the issue was that the 25th percentile now also included ‘decent value for money‘. So we get a setting where an additional 35 articles were kept in stock for the lollies category. This was the setting where I showed the value of what is known as User Missing Values. There were 423 people who had no opinion on lollies, who for whatever reason never bought those articles. This led to removing them from consideration, a choice merely based on actual responses; now the same situation, based on the 4,577 actual responses, gave us that the top 25th percentile only had ‘fantastic value for money‘ and ‘great value for money‘, and within that setting 35 articles were removed from that supermarket. Here we see the danger! What about those people who really loved one of those 35 articles, yet were not interviewed? The average supermarket does not have 5,000 visitors; it has, depending on the location, up to a thousand a day. More importantly, what happens when we add a few elements and it is no longer about supermarkets, but government institutions, and in addition it is not about lollies but fraud classification? What when we are set in a category of ‘Most likely to commit Fraud‘ and ‘Very likely to commit Fraud‘, whilst those people with a job and bankers are not included in the equation? We get a diminished setting of fraud from the very beginning.
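The lollies example can be sketched in a few lines. The counts per rating band below are hypothetical, chosen only to mirror the story: declaring the 423 no-opinion responses user-missing shrinks the denominator, shifts the quartile cutoff, and an entire rating band falls out of the "top 25th percentile".

```python
# A sketch of the user-missing-values effect described above.
# The counts per rating band are hypothetical, chosen to mirror the story.

responses = {
    "fantastic value": 700,
    "great value": 460,
    "decent value": 900,
    "poor value": 2517,
    "no opinion": 423,       # user missing: never bought the articles
}

def bands_in_top_quartile(counts, exclude_missing):
    """Which rating bands fall inside the top 25% of responses?"""
    valid = {k: v for k, v in counts.items()
             if not (exclude_missing and k == "no opinion")}
    threshold = 0.25 * sum(valid.values())
    kept, running = [], 0
    for band, n in valid.items():      # bands are ordered best-first
        if band == "no opinion":
            continue
        kept.append(band)
        running += n
        if running >= threshold:
            break
    return kept

print(bands_in_top_quartile(responses, exclude_missing=False))
# ['fantastic value', 'great value', 'decent value']
print(bands_in_top_quartile(responses, exclude_missing=True))
# ['fantastic value', 'great value']
```

Same data, one declaration changed, and 35 stocked articles appear or vanish; that is the loss-of-choice danger the text describes.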

Hold Stop!

What did I just say? Well, there is method to my madness. Two sources: the first, called Slashdot.org (no idea who they were), gave us a reference to a 2009 book called ‘Insidious: How Trusted Employees Steal Millions and Why It’s So Hard for Banks to Stop Them‘ by B. C. Krishna and Shirley Inscoe (ISBN-13: 978-0982527207). Here we see “The financial crisis appears to be exacerbating fraud by bank employees: a new survey found that 72 percent of financial institutions say that in the last 12 months they have experienced a case of data theft by one of their workers“. Now, it is important to realise that I have no idea how reliable these numbers are, yet the book was published, so there will be a political player using this at some stage. This already tumbles the academic reliability of fraud figures in general. Now, for an actually reliable source we see KPMG, who gave us last year “KPMG survey reveals surge in fraud in Australia“, with “For the period April 2016 to September 2016, the total value of frauds rose by 16 percent to a total of $442m, from $381m in the previous six month period“. We see numbers, yet they are based on a survey, and how reliable were those giving their view? How much was assumption, unrecognised numbers and based on ‘forecasted increases‘ that were not met? That issue was clearly brought to light by the Sydney Morning Herald in 2011 (at https://www.smh.com.au/technology/piracy-are-we-being-conned-20110322-1c4cs.html), where we see: “the Australian Content Industry Group (ACIG), released new statistics to The Age, which claimed piracy was costing Australian content industries $900 million a year and 8000 jobs“, yet the issue is not merely the numbers given, the larger issue is “the report, which is just 12 pages long, is fundamentally flawed.
It takes a model provided by an earlier European piracy study (which itself has been thoroughly debunked) and attempts to shoe-horn in extrapolated Australian figures that are at best highly questionable and at worst just made up“, so the claim “4.7 million Australian internet users engaged in illegal downloading and this was set to increase to 8 million by 2016. By that time, the claimed losses to piracy would jump to $5.2 billion a year and 40,000 jobs” was a joke, to say the least. There we see the issue of fraud in another light, based on a different setting: the same model was used, and that whilst I am more and more convinced that the European model was likely to be flawed as well (a small reference to the Dutch Buma/Stemra setting of 2007-2010). So not only are the models wrong, the entire exercise gives us something that was never going to be reliable in any way, shape or form (personal speculation). So in this we now have the entire machine learning setting, the political setting of fraud, as well as the speculated numbers involved, and what is ‘disregarded’ as fraud. We will end up with a scenario where we get 70% false positives (a pure rough assumption on my side) in a collective where checking those numbers will never be realistic, and the moment the parameters are ‘leaked’, the actual fraudulent people will change their settings, making detection of fraud less and less likely.
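That 70% guess is less wild than it sounds. With a base rate of roughly 1%, even a classifier that looks accurate on paper produces mostly false alarms; a minimal sketch of the base-rate effect (the sensitivity and specificity figures are my assumptions, not real DWP numbers):

```python
# Base-rate effect: why fraud detection over a ~1% phenomenon drowns in
# false positives. The sensitivity and specificity are illustrative
# assumptions, not real DWP numbers.

base_rate = 0.011        # roughly GBP 1.10 per GBP 100, per the BBC figure
sensitivity = 0.90       # chance a truly fraudulent claim is flagged (assumed)
specificity = 0.95       # chance an honest claimant is NOT flagged (assumed)

flagged_fraud = base_rate * sensitivity
flagged_honest = (1 - base_rate) * (1 - specificity)
false_positive_share = flagged_honest / (flagged_fraud + flagged_honest)
print(f"{false_positive_share:.0%} of flagged claimants are honest")
```

Under these assumptions more than four out of five flagged claimants are honest people, which is exactly the fear the Jay Watts piece describes.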

How will this fix anything other than the revenue need of those selling machine learning? So when we look back at the chapter on Modern Applications of Machine Learning we see “Deploying machine learning models in real-time opens up opportunities to tackle safety issues, security threats, and financial risk immediately. Making these decisions usually involves embedding trained machine learning models into a streaming engine“, which is actually true. Yet when we also consider “review some of the key organizational, data, infrastructure, modelling, and operational and production challenges that organizations must address to successfully incorporate machine learning into their analytic strategy“, the element of data and data quality is overlooked on several levels, making the entire setting, especially in light of the piece by Jay Watts, a very dangerous one. So the full title, which I intentionally did not use in the beginning, ‘No wonder people on benefits live in fear. Supermarkets spy on them now‘, rests wholly on the known and almost guaranteed premise of poor data quality, and on knowing that the players in this field are slightly too happy to generalise and trivialise the issue of data quality. The moment that comes to light and the implementers are held accountable for data quality is when all those now hyping machine learning will change their tune instantly and give us all kinds of ‘party line‘ issues that they are not responsible for. Issues that I personally expect they did not really highlight when they were all about selling that system.

Until data cleaning and data vetting get a much higher position on the analysis ladder, we are confronted with aggregated, weighted and ‘expected likelihood‘ generalisations, and those who are ‘flagged’ via such systems will live in constant fear that their shallow way of life stops because a too highly paid analyst stuffed up a weighting factor, condemning a few thousand people to be tagged for all kinds of reasons, not merely because they could optionally be part of a 1% that the government is trying to clamp down on. Or was that 24%? We can believe the BBC, but can we believe their sources?

And if there is even a partial doubt on the BBC data, how unreliable are the aggregated government numbers?

Did I oversimplify the issue a little?



Filed under Finance, IT, Media, Politics, Science

Be not stupid

There is an article in the Guardian. Now, we all agree that anyone is entitled to their own views, that has been a given for the longest of times, and those reading my blog know that I have a different view at times, yet for the most part I remain neutral and non-attacking towards those with a different view; that’s how I roll.

Today is different. The article “‘Easy trap to fall into’: why video-game loot boxes need regulation” by Mattha Busby (@MatthaBusby) got to me. It is time for people to realise that when you are over 18, you are responsible for your actions. So I have pretty much no patience with any American, Reddit user or not, who gives us “a Reddit user who claims to have spent $10,000“. If you are that stupid, you should not be allowed to play video games.

The Setting

To comprehend my anger, you need to realise the setting we see here. You see, loot boxes are not new. This goes all the way back to 1991, when Richard Garfield created Magic: The Gathering. I was not really on board in the beginning, but I played the game. The issues connect when you realise how the product was sold. There was a starter kit (which we call the basic game); it would have enough cards to start playing the game, as well as the essential cards you need to play it. To get ahead in the game you need to get boosters. Here is where it gets interesting. Dozens of games work on the principle that Richard Garfield founded. A booster would have 9-13 cards (depending on the game): it would have 1 (read: one) rare card (or better), 3 uncommon cards and the rest would be common cards. I had several of these games and in the end (after 20 boosters) it was merely about collecting the rare cards if you wanted a complete set. Some would not care about that and they could still play the game. So this is not a new thing, and if you truly spent $10,000 you should not complain. If you have the money it is not an issue; if you did not, you are too stupid for words. In games it is not new either. Mass Effect 3, the best multiplayer game ever (my personal view), had loot boxes as well; I am pretty sure that they were the first. Yes, you could buy them, with money, or with Microsoft credit points. The third option was that you could gather points whilst playing (at the cost of $0) and use these gained points to buy loot boxes, the solution most people used. Over time you would end up with sensational goods to truly slice and dice the opponents, all gained through play time, no extra cash required.

So when I see places like VentureBeat (and the Guardian of course) state issues like: “some people, policymakers, and regulators — including the gaming authorities in Belgium and Netherlands — [believe] that those card packs are gambling“, I see these statements as moronic and I regard them as statements of false presentation. You see, that is not what it is about! When you see the attached picture, you see that these cards are sold EVERYWHERE. The issue is that the CCG card games are sold in the shops, which means that revenue is TAXED. The online sales are not, and now policymakers are all up in arms because they lost out on a non-taxable ‘$1.25 billion during its last quarter even without releasing a major new game‘; that is the real issue and they are now all acting in falsehood. So, when I see “I am currently $15,800 in debt. My wife no longer trusts me. My kids, who ask me why I am playing Final Fantasy all the time, will never understand how I selfishly spent money I should have been using for their activities“, as well as “he became addicted to buying in-game perks, which he later described as ‘digital garbage’“, I merely see people without discipline, without proper control. So without any regard for diplomacy I will call them junkies, plain and simple. Junkies who have no idea just how stupid they are. And since when do we adjust policy for junkies? Since when are the 99% who hold themselves plenty accountable, who have the proper discipline not to overspend, and some (like me) who never considered loot boxes in a game like Shadow of War, now being held to account and subjected to lessened gaming impact because of junkies? Can anyone answer me this?

Now, we need to take one or two things into consideration. Are the FIFA18 loot boxes set in a similar light? That is the one place where (seemingly) FIFA is in the wrong. You see, I have been searching for any info on what is in a FIFA loot box, but no information is given. I believe that this lack is actually an issue, yet it could be resolved in 24 hours if Electronic Arts would dedicate one page (considering it brings them $1.25 billion a quarter) to what is to be found in a loot box (Rare, Uncommon, Common). The second part that I cannot answer (because I am not a soccer fan) is whether the game allows loot boxes to be earned through playing and, finally, whether the game can be played without loot boxes. It seems like such a small alteration to make, especially when we see the fuss that is being made now. Some additional facts can be seen in Rolling Stone magazine of all places (at https://www.rollingstone.com/glixel/features/loot-boxes-never-ending-games-and-always-paying-players-w511655). So now we get a fuss from several nations, nations that have been all open and accepting of games like the Decipher CCGs Star Trek and Star Wars, Magic the Gathering, The Lord of the Rings, My Little Pony, Harry Potter, Pokémon, and that list goes on for some time. In that regard, they are all gambling, and in my view, I feel certain that these so-called politicians and limelight seekers will do absolutely NOTHING to get anything done, because the cards are subject to VAT and the online stuff is lost taxable revenue. That is what I personally see as the foundation of a corrupt administration.

You see, the fact is that it is not gambling. You buy something that is in 3 categories: Rare, Uncommon and Common. You ALWAYS get this in a setting of 1 rare, 3 uncommon and 5 common; which cards you get is not a given, it is random, but you will always get that composition. Let’s for example state that the loot box is $7: you get one $3 card, three $1 cards and five $0.20 cards, so how is that gambling? Electronic Arts, until they update the website to give a precise definition, might be in waters that are a little warmer, but that can be fixed by the end of the day. Perhaps they do have such a page, but Google did not find it.
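That composition argument can be put in numbers; a tiny sketch using the example prices above (the per-card values are the ones given in the text, not anything EA has published):

```python
# The fixed-composition loot box described above: which cards you pull
# varies, but the value of every box is constant, which is the point.

rare, uncommon, common = 3.00, 1.00, 0.20   # per-card values from the text
box_price = 7.00

box_value = 1 * rare + 3 * uncommon + 5 * common
print(box_value)   # every box is worth exactly its asking price
```

Because the expected value of every box equals its price, there is no wager with a variable payout, which is the core of the "it is not gambling" argument.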

In addition, VentureBeat gave us (at https://venturebeat.com/2018/05/08/ea-ceo-were-pushing-forward-with-loot-boxes-in-face-of-regulation/) “EA will have to convince policymakers around the world that it is doing enough and that its mechanics are not the same as the kinds of games you’d find in a casino“, which is easy, as these policymakers did absolutely nothing to stop CCGs like Pokémon and My Little Pony (truly games for minors), so we can state that this was never about the loot box; it was about missed taxable revenue, a side that all the articles seem to have left in the dark.

The Guardian has one additional gem. With: “A bill introduced in Minnesota last month would prohibit the sale of video games with loot boxes to under-18s and require a severe warning: “This game contains a gambling-like mechanism that may promote the development of a gaming disorder that increases the risk of harmful mental or physical health effects, and may expose the user to significant financial risk.”” Here I am in the middle. I think that Americans are not that bright at times, a point of view supported by the image of paper cups with the text ‘Caution Hot’ to avoid liability if some idiot burns their mouth; we know that sanity is out of the window. Yet the idea that there should be a loot box warning is perhaps not the worst idea. I think that EA could get ahead of the curve by clearly stating in a readable font size that ‘no loot boxes are needed to play the game‘, which is actually a more apt statement (and a true one) for Shadow of War; with FIFA18, I do not know. You see, this is a changed venue: when you can add a world-class player to your team, the equation changes. Yet, does it make it more or less enjoyable? If I play NHL with my Capitals team and I get to add Mario Lemieux and Wayne Gretzky, my chances to get the Stanley Cup go up, yet is that a real win or is that cheating? That is of course the other side, the side that the game maker Ubisoft enabled in their Assassin’s Creed series. You could unlock weapons and gear for a mere $4; they clearly stated that the player would be able to unlock the options during the game, yet some people are not really gamers, merely players with a short attention span, and they want the hardware upfront. Enter the Civil War with an Uzi and a Remington, to merely coin a setting. Are they gamers, or are they cheaters? It is a fair question and there is no real answer. Some say that the game allowed them to do this, which is fair, and some say you need to earn the kills you make.
We can go at it from any direction, yet when we are confronted with mere junkies going on to spend $15,800 on top of a $69 game, we are confronted with people so stupid, it makes me wonder how he got his wife pregnant in the first place. If the given debt of $15,800 is true then there should be a paper trail. In that regard I am all for a spending limit of perhaps $500 a month, a random number, but the fact that there is a limit on spending is not the worst idea. In the end, you have to pay for the stuff, so a barrier at that point could have imposed a limit on the spending. In addition, we can point at the quote “how I selfishly spent money I should have been using for their activities” and how that is the response any junkie makes, ‘Oh! I am so sorry‘, especially after the junkie got his/her fix.
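A hard spending barrier at the payment step, as suggested above, would be trivial to implement. A minimal sketch in Python (the $500 cap, the user IDs and the ledger structure are my own invented illustration, not anything a publisher has announced):

```python
from collections import defaultdict

MONTHLY_CAP = 500.00  # the article's "random number": an assumed cap in dollars


class SpendLimiter:
    """Tracks per-user loot box spending and blocks purchases over a monthly cap."""

    def __init__(self, cap=MONTHLY_CAP):
        self.cap = cap
        self.spent = defaultdict(float)  # (user, month) -> total spent so far

    def try_purchase(self, user, month, amount):
        # The barrier sits at the payment step: reject before money changes hands.
        if self.spent[(user, month)] + amount > self.cap:
            return False  # purchase blocked
        self.spent[(user, month)] += amount
        return True


limiter = SpendLimiter()
print(limiter.try_purchase("player1", "2018-06", 450.0))  # True: under the cap
print(limiter.try_purchase("player1", "2018-06", 100.0))  # False: would exceed $500
```

A real payment provider would of course need persistent storage and identity checks, but the point stands: the $15,800 figure implies no barrier of any kind existed.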

The Guardian in addition gives an actually interesting side: Hawaiian congressman Chris Lee said that loot boxes “are specifically designed to exploit and manipulate the addictive nature of human psychology”; it is a fair point to make. Are ‘game completionists’ OCD people? Can the loot box be a vessel of wrongdoing? It might be, yet that still does not make it gambling or illegal, which gets us to the Minnesota setting of a warning on the box. It is an interesting option and I think that most game makers would not oppose it, because you are basically not keeping loot boxes a secret, and that might be a fair call to make, as long as we do not go overboard with messages like “This game is a digital product, it requires a working computer to install and operate“, because at that point we have gone overboard again. This stands in nice contrast to “In the Netherlands, meanwhile, lawmakers have said that at least four popular games contravene its gambling laws because items gleaned from loot boxes can be assigned value when they are traded in marketplaces“, which is another issue. You see, when you realise that “you can’t sell any digital content that you aren’t authorized to sell“, and as we also saw in Venture Beat, “While we forbid the transfer of items and in-game currency outside of the games, we also actively seek to eliminate that where it’s going on in an illegal environment,” we see a first part where we can leave it to the Dutch to cater to criminals on any average working day, making the lawmakers (from my personal point of view) slightly short-sighted.

So, in the end Mattha had a decent article, yet the foundation (the CCG games), the creators of the founding concept, was left outside the basket of consideration, which is a large booboo, especially when we realise that they are still for sale in all these complaining countries and that in that very same regard these games are not considered gambling. That sets the stage: this was never about gambling, but about several desperate EU nations, as well as the US mind you, all realising that loot boxes represent billions in close to non-taxable revenue. That is where the issue holds, and even as I do not disagree with the honourable men from both Hawaii and Minnesota, the larger group of policy players are all about the money (and the linked limelight), an issue equally left in the dark. There is one issue against Electronic Arts, yet they can fix that before the virtual ink on the web page has dried, so that issue will be non-existent as well soon enough.

It’s all in the game and this discussion will definitely be part of E3 2018; it has reached too many governments not to be. I reckon that on E3 Day Zero, EA and Ubisoft need to sit down in a quiet room with cold drinks and talk loot box tactics, and in that regard they should invite Richard Garfield into their meeting as an executive consultant. He might give them a few pointers to up the profit whilst remaining totally fair to the gamers, a win-win for all, I say! Well, not for the politicians and policy makers, but who cares about them? For those who do care about those people, I have a bridge for sale with a lovely view of Balmain, Sydney, going cheap today only!

 


Filed under Finance, Gaming, IT, Law, Media, Politics

It’s a kind of Euro

In Italy things are off the walls; now we see ‘New elections loom in Italy‘ (at https://www.theguardian.com/world/2018/may/27/italys-pm-designate-giuseppe-conte-fails-to-form-populist-government), where it again is about currency; this time it is Italy that has an issue with its ‘country’s Eurozone future‘. In this the escalation is “the shock resignation of the country’s populist prime minister-in-waiting, Giuseppe Conte, after Italy’s president refused to accept Conte’s controversial choice for finance minister“. There is a setting that is given; I have written about the folly of the EU, or better stated, the folly it became. I have been in favour of Brexit for a few reasons, yet here, in Italy, the setting is not the same. “Sergio Mattarella, the Italian president who was installed by a previous pro-EU government, refused to accept the nomination for finance minister of Paolo Savona, an 81-year-old former industry minister who has called Italy’s entry into the euro a “historic mistake”“. Now, besides the fact that an 81-year-old has no business getting elected into office for a number of reasons, the anti-Euro views of Paolo Savona have been known for a long time. So as pro-EU Sergio Mattarella decides to refuse anyone who is anti-EU in office, we need to think critically. Is he allowed to do that? There is of course a situation where that could backfire, yet we all need to realise that Sergio Mattarella is an expert on parliamentary procedure, highly educated and highly intelligent with decades of government experience, so if he sets his mind to it, it will not happen. Basically he can delay anti-EU waves for 8 months, until after the next presidential elections. If he is not re-elected, the game changes. The EU has 8 months to satisfy the hearts and minds of the Italian people, because at present those options do not look great. The fact that the populist choices are all steering towards non-EU settings is a nightmare for Brussels.
They were able to calm the storm in France, but Italy was at the tail end of all the elections; we always knew that, and I even pointed out 2 years ago that this was an option. I did mention that it was an unlikely one. The escalating part is not merely the fact that this populist setting is anti-EU; it is actually much more strongly anti-Germany, which is a bigger issue. Whether there is an EU or not, the European nations need to find a way to work together. Having just 2 of the 4 large players aligned is not really a setting that works for Europe. Even if most people tend to set Italy in a stage of Pizza, Pasta and Piffle, Italy has shown itself to be a global player and a large one. It has its social issues and the bank and loan debts of Italy don’t help any, but Italy has had its moments throughout the ages and I feel certain that Italy is not done yet, so in that respect finding common ground with Italy is the better play to make.

In all this President Sergio Mattarella is not nearly done; we now know that Carlo Cottarelli has been asked to set the stage to become the next Prime Minister of Italy. The Italian elections will not allow for an anti-EU government to proceed to leave the Euro. Sergio’s response was that “he had rejected the candidate, 81-year-old Eurosceptic economist Paolo Savona, because he had threatened to pull Italy from the single currency. “The uncertainty over our position has alarmed investors and savers both in Italy and abroad,” he said, adding: “Membership of the euro is a fundamental choice. If we want to discuss it, then we should do so in a serious fashion.”” (at http://news.trust.org//item/20180527234047-96z65/). So here we all are: the next one that wants to leave the Euro, and now there is suddenly an upheaval, just like in France. Here the setting is different, because the Italian President is pro-EU and he is doing what is legally allowed. We can go in many directions, but this was always going to be an unsettling situation. I knew that for 2 years, although at that stage the chance of Italy leaving the EU was really small. Europe has not been able to grow its economy; it merely pumped 3 trillion euro into a situation that was never going to work, and now that 750 million Europeans realise that they all need to pay 4,000 Euro just to stay where they are right now, that is angering more and more Europeans. The French were warned ahead of time, yet they decided to have faith in an investment banker over a member of the Front National; Italy was not waiting and is now in a stage of something close to civil unrest, which will not help anyone either. Yet the economic setting for Italy could take a much deeper dive, and not in a good way. The bigger issue is not just that Carlo Cottarelli is a former International Monetary Fund director.
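The per-head figure above is simple division over the article's own numbers (3 trillion euro of QE spread across 750 million Europeans); a quick sanity check of the arithmetic:

```python
# The article's own figures, used as-is:
qe_total = 3_000_000_000_000  # roughly 3 trillion euro of ECB quantitative easing
europeans = 750_000_000       # the article's population figure

per_head = qe_total / europeans
print(f"€{per_head:,.0f} per person")  # €4,000 per person, matching the text
```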
It is that more and more signs show that the dangers are rising, not stabilising or subsiding, and that is where someone optionally told President Sergio Mattarella to stop this at all costs. Part of this was seen in April (at https://www.agoravox.fr/actualites/economie/article/a-quand-l-eclatement-de-la-203577). Now the article is in French, so there is that, but it comes down to: “Bridgewater, the largest hedge fund (investment fund – manages $160 billion of assets) in the world, has put $22 billion against the euro area: the short (“seller”) positions of the fund prove it bet against many European (Airbus), German (Siemens, Deutsche Bank), French (Total, BNP Paribas) and Italian (Intesa Sanpaolo, Enel and Eni) companies, among others. The company is not known to tackle particular companies, but rather to bet on the health of the economy in general“. So there is a partial setting where the EU is now facing its own version of what we saw in the cinema in 2015 with The Big Short. Now, after we read the intro, we need to see the real deal. It is seen with “Since 2011, €4 billion has been injected into the euro zone (that is to say into commercial banks) by the European Central Bank (ECB), which represents more than a third of the region’s GDP. The majority of this currency is mainly in Germany and Luxembourg, which, you will agree, are not the most difficult of the area. More seriously, much of this liquidity has not financed the real economy through credit to individuals and businesses. Instead, the commercial banks have saved €2,000bn of this fresh money on their account at the ECB until the end of 2017 (against €300bn at the beginning of 2011) to “respect their liquidity ratio” (to have enough deposit in liquid currency in case of crisis). As in the United States, quantitative easing allowed the central bank to bail out private banks by buying back their debts.
In other words, the debts of the private sector are paid by the taxpayer without any return on investment. At the same time, François Villeroy de Galhau, governor of the Banque de France, called for less regulation and more bank mergers and acquisitions in the EU, using the US banking sector as a model.” Here we see in the article by Géopolitique Profonde that a dangerous situation is escalating, because we aren’t in it for a mere 4 billion, the Eurozone is in it for €3,000 billion, an amount that surpasses the economic value of several Euro block nations. That is almost impossible to sustain with the UK moving away; if Italy does the same thing, the party ends rather quickly with no options and no way to keep the Euro stable or at its levels. It becomes a currency at a value that is merely half the value of the Yen, wiping out retirement funds, loan balances and credit scores overnight. The final part is seen with “The ECB also warns that the Eurozone risks squarely bursting into the next crisis if it is not strengthened. In other words, Member States have to reform their economies by then, create budget margins and integrate markets and services at the zone level to better absorb potential losses without using taxpayers. A fiscal instrument such as a euro zone budget controlled by a European finance minister, as defended by President Emmanuel Macron, would also help cope with a major economic shock that seems inevitable. Suffice to say that this is problematic given the lack of consensus on the subject and in particular a German reluctance. The European Central Bank issued the idea in late 2017, long planned by serious economists, to abolish the limit of €100,000 guaranteed in case of a bank rescue operation or bankruptcy (Facts & Document No. 443, 15/11/17-15/12/17, p. 8 and 9)“ (the original article has a lot more, so please read it!)

It now also shows (read: implies) a second part not seen before. With ‘The European Central Bank has issued the idea late 2017, long planned by serious economists, to abolish the limit of €100,000 guaranteed in case of rescue operation or bankruptcy bank‘, it implies that Emmanuel Macron must have been prepped on a much higher level and did not merely come in at the 11th hour; ‘the idea issued late 2017’ means that it was already in motion for consideration no later than 2016, so when Marine Le Pen was gaining and ended up as a finalist, the ECB must have really panicked. It implies that Emmanuel Macron was a contingency plan in case the entire mess went tits up, and it basically did. Now they need to do it again under the eyes of scrutiny from anti-EU groups, whilst Italy is in a mess that could double down on the dangers and risks that the EU is facing. That part is also a consideration when we see the quote by Hans-Werner Sinn, currently the President of the Ifo Institute for Economic Research, who gives us “I do not know if the euro will last in the long run, but its operating system is doomed“; that must give the EU people in Brussels the strength they need to actually fix their system (no, they won’t). The question becomes: how far will the ECB go to keep the Eurozone ‘enabled’ whilst taking away the options from national political parties? That is the question that matters, because that is at play. Even as Germany is now opposing reforms, mainly because Germany ended up in a good place after it enforced austerity when it could still work, and that worked (the Germans have Angela Merkel to thank for that), the other nations (like 24 of them) ignored all the signs and decided to listen to economic forecast people pretending to be Native American shamans, telling them that they can make it rain on command, a concept that did not really pan out, did it?
Now the reforms are pushed because there were stupid people ignoring the signs and not acting preventively when they could; now the Eurozone is willing to cater to two dozen demented economists, whilst pissing off the one economy that tightened its belt many years ago to avoid what is happening right now. You see, when the reform goes through, Berlin gets confronted with a risk-sharing plan and ends up shouldering the largest proportion of such a machine; that mechanism will avoid the embarrassment of those two dozen Dumbos (aka numnuts, or more academically stated ‘someone who regularly botches a job, event, or situation’), whilst those people are reselling their idea as ‘I have a way where you need not pay any taxes at all‘ to large corporations, getting an annual seven-figure income for another 3-7 years. How is that acceptable or fair?

So we are about to see a different Euro, one losing value due to QE, due to Italian unrest and against banks that have pushed their margins in the way US banks have, meaning that in the next 2 years we will most likely see off-the-wall bonus levels for bankers, surpassing those from Wall Street likely for the first time in history; at the end of that rainbow, those having money in Europe might not have that much left. I admit that this is pure speculation on my part, yet when you see the elements and the settings of the banks, how wrong do you think I will be in 2019-2020?

So when we go back to the Guardian article at the beginning and take a look at two quotes, the first being “As the European commission unveiled its economic advice to member states last week, the body’s finance commissioner, Pierre Moscovici, said he was hoping for “cooperation on the basis of dialogue, respect and mutual trust”“, I go with ‘What trust?‘ and in addition with ‘cooperation on the basis of dialogue merely implies that Pierre Moscovici is more likely not to answer questions and bullshit his way around the issue‘, and as former French Minister of Economy he could do it; he saw Mark Zuckerberg get through a European meeting never answering any questions, and he reckons he is at least as intelligent as Mark Zuckerberg. When we see “Cecilia Malmström, said “there are some things there that are worrying” about Italy’s incoming government“, she sees right: the current Italy is actually a lot less Euro-minded than the setting was in 2016-2017, so there is a setting of decreased trust that was never properly dealt with; the EU commissions left that untended for too long and now they have an even larger issue to face. So that bright Svenska Flicka is seeing the issues rise on a nearly hourly basis, and even as the play goes nicely for now, that will change. I think that in this Matteo Salvini played the game wrong; instead of offering an alternative to Paolo Savona and bringing him back after Sergio Mattarella is not re-elected, the game could have continued. Now they are butting heads, where Matteo is nowhere near as experienced as Sergio, so that is a fight he is unlikely to win, unless he drops Italy into a stage of civil unrest, which is not a good setting for either player.

We cannot tell what will happen next, but for the near future (June-September) it is unlikely to be a pretty setting; we will need to take another look at the Italian economic setting when the dust settles.

 


Filed under Finance, Media, Politics

Grand Determination to Public Relation

It was given yesterday, but it started earlier; it has been going on for a little while now and some people are just not happy about it all. We see this (at https://www.theguardian.com/technology/2018/may/25/facebook-google-gdpr-complaints-eu-consumer-rights), with the setting ‘Facebook and Google targeted as first GDPR complaints filed‘; they would be the initial companies. It is a surprise that Microsoft didn’t make the first two in all this, so they will likely get a legal awakening coming Monday. When we see “Users have been forced into agreeing new terms of service, says EU consumer rights body”, under such a setting it is even more surprising that Microsoft did not make the cut (for now). So when we see: “the companies have forced users into agreeing to new terms of service; in breach of the requirement in the law that such consent should be freely given. Max Schrems, the chair of Noyb, said: “Facebook has even blocked accounts of users who have not given consent. In the end users only had the choice to delete the account or hit the agree button – that’s not a free choice, it more reminds of a North Korean election process.”“, which is one way of putting it. The GDPR is a monster comprising well over 55,000 words, roughly 90 pages. The New York Times (at https://www.nytimes.com/2018/05/15/opinion/gdpr-europe-data-protection.html) stated it best almost two weeks ago when they gave us “The G.D.P.R. will give Europeans the right to data portability (allowing people, for example, to take their data from one social network to another) and the right not to be subject to decisions based on automated data processing (prohibiting, for example, the use of an algorithm to reject applicants for jobs or loans). Advocates seem to believe that the new law could replace a corporate-controlled internet with a digital democracy. There’s just one problem: No one understands the G.D.P.R.”

That is not a good setting, it tends to allow for ambiguity on a much higher level and in light of privacy that has never been a good thing. So when we see “I learned that many scientists and data managers who will be subject to the law find it incomprehensible. They doubted that absolute compliance was even possible” we are introduced to the notion that our goose is truly cooked. The info is at https://www.eugdpr.org/key-changes.html, and when we dig deeper we get small issues like “GDPR makes its applicability very clear – it will apply to the processing of personal data by controllers and processors in the EU, regardless of whether the processing takes place in the EU or not“, and when we see “Consent must be clear and distinguishable from other matters and provided in an intelligible and easily accessible form, using clear and plain language. It must be as easy to withdraw consent as it is to give it” we tend to expect progress and a positive wave, so when we consider Article 21 paragraph 6, where we see: “Where personal data are processed for scientific or historical research purposes or statistical purposes pursuant to Article 89(1), the data subject, on grounds relating to his or her particular situation, shall have the right to object to processing of personal data concerning him or her, unless the processing is necessary for the performance of a task carried out for reasons of public interest“, it reflects on Article 89 paragraph 1, now we have ourselves a ballgame. 
You see, there is plenty of media that falls in that category; there is plenty of ‘Public Interest‘. Yet when we take a look at that Article 89, we see: “Processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes, shall be subject to appropriate safeguards, in accordance with this Regulation, for the rights and freedoms of the data subject.“ So what exactly are ‘appropriate safeguards‘, who monitors them, and who decides what an appropriate safeguard is? We also see “those safeguards shall ensure that technical and organisational measures are in place in particular in order to ensure respect for the principle of data minimisation“; you merely have to look at market research and data manipulation to see that not happening any day soon. Merely setting out demographics and their statistics makes minimisation an issue often enough. We get a partial answer in the final setting “Those measures may include pseudonymisation provided that those purposes can be fulfilled in that manner. Where those purposes can be fulfilled by further processing which does not permit or no longer permits the identification of data subjects, those purposes shall be fulfilled in that manner.” Yet pseudonymisation is not all it is cracked up to be. When we consider the image (at http://theconversation.com/gdpr-ground-zero-for-a-more-trusted-secure-internet-95951), consider the simple example of the NHS: as a patient is admitted to more than one hospital over a time period, that research is no longer reliable, as the same person would end up with multiple pseudonym numbers, making the process a lot less accurate. OK, I admit ‘a lot less‘ is overstated in this case, yet is that still the case when it is on another subject, like office-home travel analyses? What happens when we see loyalty cards, membership cards and student card issues?
At that point their anonymity is a lot less guaranteed. More importantly, we can accept that those firms will bend over backwards to do the right thing, yet at what stage is anonymisation expected and what is the minimum degree here? Certainly not before the final reports are done; at that point, what happens when the computer gets hacked? What exactly was an adequate safeguard at that point?
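The multi-hospital problem described above can be made concrete. If each data controller derives pseudonyms from the real identifier with its own secret salt (a common pseudonymisation pattern), the same patient can no longer be linked across datasets; share the salt and linkage returns, but the safeguard weakens. A minimal sketch, with an invented patient identifier and invented salts:

```python
import hashlib


def pseudonymise(identifier: str, salt: str) -> str:
    """Derive a short pseudonym from an identifier using a per-controller salt."""
    return hashlib.sha256((salt + identifier).encode()).hexdigest()[:12]


patient = "NHS-1234567"  # invented identifier, for illustration only

# Two hospitals, each using its own secret salt:
p_a = pseudonymise(patient, salt="hospital-A-secret")
p_b = pseudonymise(patient, salt="hospital-B-secret")
print(p_a == p_b)  # False: the same patient gets two unlinkable pseudonyms

# A shared salt restores linkage across datasets, but also weakens the safeguard,
# since anyone holding the salt can re-identify by brute-forcing known identifiers:
print(pseudonymise(patient, "shared") == pseudonymise(patient, "shared"))  # True
```

This is exactly the tension in Article 89: the stronger the pseudonymisation, the less reliable the cross-institution research becomes.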

Article 22 is even more fun to consider in light of banks. So when we see: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her“: when a person applies for a bank loan, a person interacts and enters the data; when that banker gets the results and we no longer see an approved/denied, but a scale, and the banker states ‘Under these conditions I do not see a loan to be a viable option for you, I am so sorry to give you this bad news‘, at what point was it a solely automated decision? Telling the story, or giving the story based on a credit score: where is it automated, and can that be proven?
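The grey zone in that bank loan scenario can be sketched in a few lines. Both variants below use an invented toy scoring rule and threshold, purely to illustrate where 'solely automated' begins and ends; no real bank works this simply:

```python
def credit_score(income: float, debt: float) -> int:
    # Toy scoring rule, purely illustrative; not any real bank's model.
    return max(0, min(100, int(60 + income / 2000 - debt / 1000)))


def fully_automated(income: float, debt: float) -> str:
    """Solely automated: the machine's threshold IS the decision (Article 22 territory)."""
    return "approved" if credit_score(income, debt) >= 70 else "denied"


def human_in_the_loop(income: float, debt: float) -> str:
    """The machine only advises; a banker phrases and owns the final decision."""
    score = credit_score(income, debt)
    return f"score {score}/100 - banker decides and explains"


print(fully_automated(50000, 20000))     # denied (score 65, below the threshold)
print(human_in_the_loop(50000, 20000))   # the banker tells the story instead
```

The outcome for the applicant is identical; only the narration changes, which is exactly why proving a decision was 'solely automated' is so hard.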

But fear not, paragraph 2 gives us “is necessary for entering into, or performance of, a contract between the data subject and a data controller;”, like applying for a bank loan, for example. So when is it an issue? When you are being profiled for a job? When exactly can it be proven that this is being done to you? And at what point will we see all companies reverting to the Apple approach? You no longer get a rejection, no! You merely are not the best fit at the present time.

Paragraph 2c of that article is even funnier. So when I see the exception “is based on the data subject’s explicit consent“, I see the HR version: ‘We cannot offer you the job until you pass certain requirements that force us to make a few checks; to proceed in the job application, you will have to give your explicit consent. Are you willing to do that at this time?‘ When it is about a job, how many people will say no? I reckon the one extreme case is Dopey the dwarf not explicitly consenting to drug testing, for all the imaginable reasons.

And in all this, the NY Times is on my side, as we see “the regulation is intentionally ambiguous, representing a series of compromises. It promises to ease restrictions on data flows while allowing citizens to control their personal data, and to spur European economic growth while protecting the right to privacy. It skirts over possible differences between current and future technologies by using broad principles“. I do see a positive point: when this collapses (read: falls over might be a better term), when we see the EU having more and more issues trying to achieve global growth, the data restrictions could potentially set a level of discrimination between those inside and outside the EU, making it no longer an issue. What do you think happens when EU people get a massive boost of options under LinkedIn and this setting is not allowed on a global scale? How long until we see another channel that remains open and non-ambiguous? I do not know the answer; I am merely posing the question. I don’t think that the GDPR is a bad thing; I merely think that clarity should have been at the core of it all, and that is the part that is missing. In the end the NY Times gives us a golden setting, with “we need more research that looks carefully at how personal data is collected and by whom, and how those people make decisions about data protection. Policymakers should use such studies as a basis for developing empirically grounded, practical rules“; that makes perfect sense and in that we could see the start. There is every chance that we will see a GDPRv2 no later than early 2019, before 5G hits the ground; at that point the GDPR could end up being a charter that is globally accepted, which makes up for all the flaws we see, or the flaws we think we see, at present.

The final part we see in Fortune (at http://fortune.com/2018/05/25/ai-machine-learning-privacy-gdpr/). You see, even as we think we have cornered it with ‘AI Has a Big Privacy Problem and Europe’s New Data Protection Law Is About to Expose It‘, we need to take one step back: it is not about the AI, it is about machine learning, which is not the same thing. With machine learning it is about big data. You see, when we realise that “Big data challenges purpose limitation, data minimization and data retention–most people never get rid of it with big data,” said Edwards. “It challenges transparency and the notion of consent, since you can’t consent lawfully without knowing to what purposes you’re consenting… Algorithmic transparency means you can see how the decision is reached, but you can’t with [machine-learning] systems because it’s not rule-based software“, we get the first whiff of “When they collect personal data, companies have to say what it will be used for, and not use it for anything else“. So the criminal will not allow us to keep their personal data, so the system cannot act to create a profile to trap the fraud-driven individual, as there is no data to learn from when fraud is being committed; a real win for organised crime, even if I say so myself. In addition, the statement “If personal data is used to make automated decisions about people, companies must be able to explain the logic behind the decision-making process“ comes close to a near impossibility. In the age of AI development through machine learning, the EU just pushed itself out of the race, as they will not have any data to progress with. How is that for a Monday morning wakeup call?
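The quoted distinction, that you can see how a rule-based decision is reached but not a machine-learned one, can be illustrated without any ML library. Both the fraud rules and the 'trained' weights below are invented toys, purely to show the shape of the transparency problem:

```python
# Rule-based: every decision can cite the human-written rule that fired.
RULES = [
    ("amount over 10000", lambda tx: tx["amount"] > 10000),
    ("card/IP country mismatch", lambda tx: tx["card_country"] != tx["ip_country"]),
]


def rule_based_flag(tx):
    for name, rule in RULES:
        if rule(tx):
            return True, f"flagged by rule: {name}"  # a transparent explanation
    return False, "no rule fired"


# Learned model: a weighted sum whose weights came from (pretend) training data;
# no single human-readable rule explains why the score crossed the line.
WEIGHTS = {"amount": 0.00004, "hour": -0.01, "mismatch": 0.7}  # invented values


def learned_flag(tx):
    score = (WEIGHTS["amount"] * tx["amount"]
             + WEIGHTS["hour"] * tx["hour"]
             + WEIGHTS["mismatch"] * (tx["card_country"] != tx["ip_country"]))
    return score > 0.5, f"score {score:.2f} (why? the weights say so)"


tx = {"amount": 12000, "hour": 3, "card_country": "NL", "ip_country": "RU"}
print(rule_based_flag(tx))  # flagged, with the rule named
print(learned_flag(tx))     # flagged, with only a number as 'explanation'
```

Both systems flag the same transaction, yet only the first can produce the kind of explanation the GDPR asks for; scale the weighted sum up to millions of learned parameters and the explanation gap is the one Edwards describes.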

 


Filed under IT, Law, Media, Politics, Science

Interaction

Today is partly about what happened, what we see now and something from the past. It started yesterday when the Guardian (at https://www.theguardian.com/politics/2018/may/17/vote-leave-strategist-dominic-cummings-refuses-to-appear-before-mps) gave us “The chief strategist of the Vote Leave campaign has refused to appear in front of MPs, risking possible censure from the House of Commons but also raising questions about what more can be done when a witness ignores the will of parliament“. Apart from the folly of his action, there are other questions beneath the surface and they must be answered. Now, for the record, I have been in favour of Brexit! I have my reasons and I will introduce you to some of them. When I see “Dominic Cummings, who has been credited as the brains behind the successful Brexit campaign, told the select committee investigating fake news that he would not be willing to answer questions in public before the Electoral Commission finishes its ongoing investigation into his campaign“, I do see a valid concern, and even as I called it folly, which it partially remains, there is the setting that these MPs need to come in front of the camera as well. I have serious questions for these MPs, and if they cannot answer them to MY satisfaction, they should be removed from office, it is THAT simple.

When I see that the leave groups have connections to Cambridge Analytica, I have questions as well. Even as we see “questions about the use of Facebook data during the EU referendum campaign“, we need to make certain that we are not caught on the rings of misinformation, and that is happening on both sides of the aisle in this case.

You see, to get to the core of it we need to look at the entire mess. Some are still willing to blame it all on Nigel Farage, but it goes deeper. He brought something to light; the issue is that we had a massive amount of question marks before it started, and that remains in the dark. The corrupt and the exploitative never want the limelight. The fact that Nigel brought to light issues on a larger scale needs to be commended. For the longest time, there had been an issue. Even as there was such a large level of positivity in 1975, by 2016 there was not much positivity left; the numbers show a degradation of the interest in being part of Europe. We see all those messages and newscasts on how good things are, yet were they? Apart from the large corporations having benefits which did not go beyond the board of directors, and senior sales staff having ‘training’ sessions in sunny places, the wheels of the system were kept turning by the workers, by the support systems and the logistics staff, who never saw anything in return beyond the optional getting-wasted evening at a Christmas party; that was the extent of the appreciation given. When we look at the issues from 2004 onwards, we saw stagnation, and until 2017 we saw no improved quality of life, whilst bills went up and incomes froze. In all this we see not an improvement in living standards and prospects, merely a setting of getting by at best. That was never a good setting. So we consider the EU costs the UK had. Some state “But the UK actually paid around £275 million a week in 2014 and paid around £250 million a week in 2016“; we also see (at https://fullfact.org/europe/our-eu-membership-fee-55-million/) a few additional numbers. The numbers look nice, but they leave us with all kinds of questions and the mistrust grows as we are not offered any clarity. It is largely seen with “the EU spent nearly £5 billion on the public sector“: would that not have happened if the UK was not part of the EU?
We also see “Extra money not counted here, goes directly to the private sector“; is that perhaps merely commerce? When we see the ‘gravy trains‘ running in Europe, with some ‘elected’ officials making 10 times the average income, questions come to the surface, and the EU has never given a proper response; that is one part that has been setting people off. It becomes even worse with ‘Different figures from different sources‘ and the part “The Treasury and ONS both publish figures on the subject, but they’re slightly different. The ONS also publishes other figures on contributions to EU institutions which don’t include all our payments or receipts, which complicates matters“. It is not the ‘complications’, it is the lack of clarity and transparency; transparency has been an issue in the EU for the longest time and the people have had enough. The UK has seen close to no benefit from the EU; only the large corporations have benefited, those who need to work internationally anyway, so 1,500 corporations have a benefit and 150,000 do not, and that is the visible setting that the UK faced. Even with ‘open borders‘, the well over 60% who have not been able to afford vacations for many years see no benefit; the setting had become too surreal. In all this we also need to realise the setting that the ECB has given all involved, whilst everyone keeps quiet that the taxpayer gets the bill. Everyone is seeing this fabric of illusion called quantitative easing. Mario Draghi, as head of the ECB, instigated this setting TWICE, spending a trillion the first time and almost double that the second time around. So when you spend €3,000,000,000,000, do you think there will not be any invoice? Do you think that this money is printed and forgotten? No, it impacts everyone within the Euro; as money loses value you must pay more, you must pay longer, and there is nothing you can do about it. Non-elected officials spend that much money and they are not held accountable to any extent.
In what I personally call a setting of corruption, this Mario Draghi sat in an exclusive group of bankers (the G30 bankers) and there was a question on it ONCE! There was no response and the media merely let it go; the media that is all up in arms on the freedom of speech did NOTHING! They let it slip away, so how can we ever agree to be part of such a setting?

We have given away our quality of life and we are letting this go; in that regard Nigel Farage was perfectly correct, we are better off outside of the EU. The moment we heard this we got a lot more than a few ruffled feathers. Banks started threatening to move away, the same screwed-up individuals who bolstered massive profits in bonuses as our lives faded in 2009; they are all about the gravy train. Why should anyone support this?

Now we get a new setting: with Cambridge Analytica, people woke up! I warned many people for well over four years, but they were all about ‘the government should not spy on us, we have a right to privacy‘; those same individuals got played on Facebook, pressed on fear, pressed on choices, and like lambs they went to the slaughter and no one baa-ed like the sheep they were. Yet there is a setting that is now in the open. When we act on fake news, is that fraud? The news was not asking us to jump, the people at large merely did, and now they are crying fowl (pun intended); the turkeys got the sauce and only now realised that they were going to dinner, yet they were the meal for the ones getting fed.

So now we go back to the first setting. We have two issues; the first is the investigation by the Electoral Commission. That investigation is still ongoing, so why exactly is the Digital, Culture, Media and Sport Committee rolling over that event? When we see the quote “lawyers had told him to “keep my trap shut” until the Electoral Commission completes its investigation into Vote Leave this summer“, I tend to fall in behind Dominic Cummings in all this. When we look at parliament, and specifically the ‘Digital, Culture, Media and Sport Committee‘, I personally come with a blunt and direct question (and as politically incorrect as possible) for the Conservative members Damian Collins (Chair), Simon Hart, Julian Knight, Rebecca Pow and Giles Watling, and in addition for the Labour members Julie Elliott, Paul Farrelly, Ian C. Lucas, Christian Matheson and Jo Stevens, as well as Brendan O’Hara from the SNP. My question would be: ‘Who the fuck do you think you are, interfering with an investigation by the Electoral Commission?‘ I might get shut down with the answer that they have a perfect right, but in all this, with the overlap, it does not add up well. This is about interfering, creating opportunity perhaps? We can all agree that there are issues, that there are coincidences, yet with the exception of the Scottish and Welsh members, they are all from Brexit constituencies. I think that this bad news is going to their heads, and serious questions need to be asked by the media regarding a committee that is, as I call it, clearly interfering with an electoral investigation. Is that not a valid question? Oh, and for the numbers, you can check them at http://www.bbc.com/news/politics/eu_referendum/results.

The other quote we need to consider is “It is the second time this week that a potential witness has turned down a formal summons to answer questions from MPs, after Facebook’s Mark Zuckerberg turned down a request from the same committee“. So why are they trying to get Mark Zuckerberg in the ‘dock’? Do they need the limelight? What questions could they ask that the US Senate could not come up with? Another quote from Dominic Cummings was “He said he had been willing to give evidence to the committee after this date, but the MPs’ decision to issue a formal summons via the media showed their priority was “grandstanding PR, not truth-seeking”” and I tend to agree with that.

When I look at two publications, the first being “The potential impact of Brexit on the creative industries, tourism and the digital single market“, I see issues; I see them as personal issues, based merely on what I have personally witnessed over the years that I have visited England. The first is “There is a phrase people like to use, “Locals selling to locals”. It does not matter whether it is the box office or the Royal Opera House or whether it is the distribution department of a television company selling finished programmes or formats, you need multilingual, multicultural teams to sell great British content around the world or to sell great British culture to tourists who come“, which might be true as a setting, yet in practicality? This is about local selling skills; how many grocers hire foreigners to sell a great cabbage? I also have an issue with Deirdre Wells, Chief Executive of UKinbound. She tells us that she employs “70% EU nationals in their London office so they can communicate with the outbound operators in Germany, France and Italy and create those sorts of business deals in their own languages—that is still primarily how business is done. They need those language skills with skilled operations staff who can work with their clients overseas to be able to put these packages together“, which is interesting as most metropolitan Europeans speak English; in the Netherlands, Sweden, Denmark and Norway that language skill is way above average.
Now, we can accept that language skills are important, yet when I see footnote 16 and look there, we see: “16 Q63“, and I wonder what Q63 actually was. It goes a little further when we consider the issue given with item 31, where we see “Visit Britain emphasised the dearth of language skills available to tourism and hospitality businesses and compared the lack of skills affecting tourism with the IT skills required by the wider business community: In a 2013 survey of businesses by the Confederation of British Industry only 36% were satisfied with their employees’ language skills, compared with 93% who were satisfied or very satisfied with school and college leavers’ skills in the use of IT.“ Here we see a reference to ‘IOB 027 p6‘ (at http://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/culture-media-and-sport-committee/impact-of-brexit/written/42076.pdf). The paper gives a good view, yet it lacks a view of the total EU compared to the rest of the world. When we see mention of “70% of respondents agreed that ‘the weak pound makes it a good time to visit Britain. This was highest in China (85%) and the US (78%)“, if that is important, how large a slice of the cake do they represent? In light of that connection we need to see how important the EU slice is; if we are looking at a margin compared to the US and China, why are we bothering over the crumbs? At present we cannot tell, because it is missing, which tends to imply that the impact is not as large as expected, because I am (roughly) 89.4335% certain that if it was massive (compared to China and the US) it would have been mentioned clearly and shown in some kind of Pecan Pie setting. [42076]

The second setting is seen in the ‘Facebook written evidence‘ as published 26th April 2018 [attached]. Here we see, in regard to This Is Your Digital Life: “When an advertiser runs an ad campaign on Facebook one way they can target their ads is to use a list of email addresses (such as customers who signed up to their mailing list). AIQ used this method for many of their advertising campaigns during the Referendum. The data gathered through the TIYDL app did not include the email addresses of app installers or their friends“, which makes the plot thicken. In addition we see “We also conducted an analysis of the audiences targeted by AIQ in its Referendum-related ads, on the one hand, and UK user data potentially collected by TIYDL, on the other hand, and found very little overlap (fewer than 4% of people were common to both data sets, which is the same overlap we would find with random chance)“, so at this point I see no actual need to invite Dominic Cummings at all, or better stated, no need to invite him before the Electoral Commission finishes its report; it seems that certain members like the limelight a little too much. In addition we are treated to: “Our records show that AIQ spent approximately $2M USD on ads from pages that appear to be associated with the 2016 Referendum. We have provided details on the specific campaigns and related spending to the ICO and Electoral Commission. In the course of our ongoing review, we also found certain billing and administration connections between SCL/Cambridge Analytica and AIQ. We have shared that information with ICO for the purposes of their investigation“, which merely makes me wonder more about things being done twice at the same time; if there is validity to this, I cannot see it at present, at least not until the Electoral Commission report is published.
It makes perfect sense to scrutinise the findings to some degree, but to give two summaries at the same time, overlapping one another, is merely a way to diminish factuality and muddy transparency as I see it.
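As an aside, Facebook's claim that a sub-4% overlap is "the same overlap we would find with random chance" can be sanity-checked with a quick simulation. The sketch below is hypothetical; the population and audience sizes are invented for illustration (the actual figures were not disclosed in the evidence), but it shows why two independently drawn audiences are expected to overlap at roughly `size_b / population`:

```python
import random

def expected_overlap_fraction(population, size_a, size_b, trials=20):
    """Estimate what fraction of audience A also appears in audience B
    when both are drawn uniformly at random from the same population."""
    people = range(population)
    total = 0.0
    for _ in range(trials):
        a = set(random.sample(people, size_a))
        b = set(random.sample(people, size_b))
        total += len(a & b) / size_a
    return total / trials

# Hypothetical numbers: a user base of 1,000,000, two audiences of
# 40,000 each. Under pure chance the expected overlap fraction is
# size_b / population = 40,000 / 1,000,000 = 4%.
frac = expected_overlap_fraction(1_000_000, 40_000, 40_000)
print(f"simulated overlap: {frac:.2%}")  # close to 4.00%
```

In other words, if each audience happened to cover about 4% of the same user base, a roughly 4% overlap carries no evidential weight by itself; only an overlap well above that chance baseline would suggest the two data sets were actually connected.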

In this, Yahoo had an interesting article last year (at https://uk.finance.yahoo.com/news/brexit-remain-campaign-struggled-grasp-145100601.html); here we see M&C Saatchi give us: “The downfall of the “Remain” campaign during Brexit was due to its inability to understand the electorate, according to the advertising chief enlisted to run the campaign. M&C Saatchi’s worldwide chief executive, Moray MacLennan told CNBC in the latest episode of Life Hacks Live, how M&C Saatchi’s unsuccessful Remain campaign struggled to grasp what the British people were really thinking about. “Everyone thought it was about leaving the European Union. I’m not sure it was. It wasn’t about that. It was about something else.”“ This is important as committee chair Damian Collins used to work for M&C Saatchi, so for the chair to take notice of his friends (if he has any) might not have been the worst idea. In that light, we see that there are issues that plague the British mind, yet the Remain group never figured out what they were, which now sheds light on how all but two (Wales and Scotland) ended up with ‘Leave’ constituencies. It seems a mere example of a flaming frying pan with no lid to stop the flames. In that, in light of the fact that M&C Saatchi tends to be terribly expensive, I wonder who funded that part of the deal; is that not a fair question too?

As I see it, Hannah White of the Institute for Government states it best when we see “Every time everyone observes the emperor has no clothes, in that parliament can’t force people to come, they lose a little bit of their authority“, which is an awesome revelation. So as we witness these levels of interaction, whilst realising that the players should have known a lot better, other matters come to the surface. Why these matters are larger than you think remains speculation to some degree, and we will all have our own ideas on that. Yet without clear and accurate data it is merely speculation, and we should not depend on speculation too much, should we?

Or perhaps, when we consider ‘Dominic Cummings, who has been credited as the brains behind the successful Brexit campaign‘, we might, in light of the Moray MacLennan disclosure, consider that Dominic Cummings comprehended the voters and Will Straw (the opposing team leader) did not. We need to realise that wars have been lost with a smaller disadvantage than that, so the Remain group might merely have themselves to blame for all this. If interaction is about communicating, we can deduce that not properly communicating was the cause, and in this the grandstanding by the Digital, Culture, Media and Sport Committee will not help any, will it?



Filed under Finance, IT, Media, Politics