Tag Archives: Quantum Computing

Ignoring the centre of the pie

That is the setting I saw when I took notice of ‘Will quantum be bigger than AI?’ (at https://www.bbc.com/news/articles/c04gvx7egw5o). Now, there is no real blame to hand out here, and none on Zoe Kleinman (she is an editor). As I personally see it, we have no AI. What we have is DML and LLM (and combinations of the two); they are great tools and they can get a whole lot done, but it is not AI. Why do I feel this way? The only real version of AI was the one Alan Turing introduced us to, and we are not there yet. Three components are missing. The first is quantum processing. We have that, but it is still in its infancy. The few true quantum systems there are sit in the hands of Google, IBM and, I reckon, Microsoft. I have no idea who leads this field, but these are the players. Still, they need a few things. In the first setting, shallow circuits need to evolve. As far as I know (which is not much), that work is still ongoing. So what is a shallow circuit? In layman’s terms, it is a circuit whose depth (the number of sequential steps) stays small even as the problem grows; the process doesn’t grow with the input, it is kept shallow. 

To put this in perspective, let’s take another look. In the 90’s we had B-tree indexes. In that setting, let’s say we have a register with a million entries. A B-tree lookup goes to the 50% marker and asks: is the record we need above or below that point? Then it takes half of that and does the same query. So where one system (like dBase III+) goes from start to finish, the halving approach goes 0 to 500,000 to 750,000 to 625,000, skipping past more than 600,000 records in a handful of steps and pinning down the one it needs in roughly 20 steps instead of scanning up to a million. This is one of the speediest settings and it is not foolproof, that index is a monster to maintain, but it had benefits. Shallow circuits have roughly the same kind of benefit (if you want to read up on this, there is something at https://qutech.nl/wp-content/uploads/2018/02/m1-koenig.pdf); it was a collaboration of Robert König with Sergey Bravyi and David Gosset in 2018. The gist of it is given through “Many locality constraints on 2D HLF-solving circuits”, where “A classical circuit which solves the 2D HLF must satisfy all such cycle relations” and the stage becomes “We show that constant-depth locality is incompatible with these constraints”. And now you get the first setting that these AI’s we see out there aren’t real AI’s, and that will be the start of several class actions in 2026 (as I personally see it). As far as I can tell, large law firms are suiting up for this, as these are potentially trillion dollar money makers (see this as 5 times $200B), so law firms are on board, for defence and for prosecution. You see, there is another step missing, two steps actually. The first is that this requires a new operating system, one that enables the use of the Epsilon Particle. You see, it will be the end of binary computation and the beginning of trinary computation, which is essential to True AI (I am adopting this phrase to stop confusion). You see, the world is not really Yes/No (or True/False); that is not how True AI or nature works. We merely adopted this setting decades ago, because that was what there was and IBM got us there. There is one step missing and it is seen in the setting NULL, TRUE, FALSE, BOTH. NULL means there is no interaction; otherwise the answer is FALSE, TRUE or BOTH, and BOTH is a valid setting, and the people who claim bravely (might be stupidly) that they can do this are the first to fall into these losing class actions. The quantum chip can deal with the premise, but the OS it runs on needs a trinary setting to deal with the BOTH option and that is where the horse is currently absent. As I see it, that stage is likely a decade away (but I could be wrong, and I have no idea where IBM is in that setting as the paper is almost a decade old). 
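To make the halving idea above concrete, here is a minimal sketch of a binary search over a sorted register; the register contents and the million-entry size are placeholders for illustration, not anything from the BBC piece or the König/Bravyi/Gosset paper.

```python
def binary_search(sorted_keys, target):
    """Return the index of target in sorted_keys, or -1 if absent.

    Each step halves the remaining range, so a register of one
    million entries is resolved in roughly 20 comparisons at worst
    instead of a full scan.
    """
    lo, hi = 0, len(sorted_keys) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2          # the 50% marker of what is left
        if sorted_keys[mid] == target:
            print(f"found after {steps} steps")
            return mid
        elif sorted_keys[mid] < target:
            lo = mid + 1              # the record lies above the marker
        else:
            hi = mid - 1              # the record lies below the marker
    return -1


# Example: a register with a million entries
register = list(range(1_000_000))
binary_search(register, 624_999)      # lucky case: found after 3 steps; worst case is about 20
```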

But that is the setting I see, so when we go back to the BBC with “AI’s value is forecast in the trillions. But they both live under the shadow of hype and the bursting of bubbles. “I used to believe that quantum computing was the most-hyped technology until the AI craze emerged,” jokes Mr Hopkins.” Fair view, but as I see it the AI bubble is a real bubble with all the dangers it holds, as AI isn’t real (at present). Quantum is the real deal and only a few can afford it (hence IBM, Google, Microsoft), and the people who can afford such a system (apart from these companies) are Mark Zuckerberg, Elon Musk, Sergey Brin and Larry Ellison (as far as I know), because a real quantum computer takes a truckload of energy, and the processor and storage are massively expensive. How expensive? Well, I don’t think Aramco could afford it, not without dropping a few projects along the way. So you need to be THAT rich, to say the least. To give another frame of reference: “Google unveiled a new quantum chip called Willow, which it claimed could take five minutes to solve a problem that would currently take the world’s fastest super computers 10 septillion years – or 10,000,000,000,000,000,000,000,000 years – to complete.” And that is the setting for True AI, but in this the programming isn’t even close to ready, because this is all problem by problem, whilst a True AI (like V.I.K.I. in I, Robot) can juggle all these problems in an instant. As I personally see it, that setting is decades away, and that is if the previous steps are dealt with. Even so, I oppose the thought “Analysts warned some key quantum stocks could fall by up to 62%”, as there is nothing wrong with quantum computing; as I see it, it is the expectations of the shareholders that are likely wrong. Quantum is solid, but it is a niche without a paddock. Still, whoever holds the quantum reins will be the first one to hold a true AI, and that is worth the worries and the profits that follow. 

So whilst I see this article as an eye opener, I don’t really see eye to eye with it on this side; the writer did nothing wrong. So whilst we might see that Elon Musk was right, stating “This week Elon Musk suggested on X that quantum computing would run best on the “permanently shadowed craters of the moon”.” That might work with super magnet drives, quantum locking and a few other settings on the edge of the dark side of the moon. I see some ‘play’ on this, but I have no idea how far this is set and what the data storage systems are (at present), and that is the larger equation here. Because as I see it, trinary data cannot be stored on binary data carriers, no matter how cool it is with liquid nitrogen. And that is at the centre of the pie: how to store it all. Because, like the energy constraints and the processing constraints, the tech firms did not really elaborate on this, did they? So how far along that is, is anyone’s guess, but I personally would consider (at present, and uneducated) IBM to be the ruling king of the storage systems. But that might be wrong.

So have a great day and consider where your money is, because when these class actions hit, someone wins and it is most likely the lawyer that collects the fees, the rest will lose just like any other player in that town. So how do you like your coffee at present and do you want a normal cup or a quantum thermal?


Filed under Finance, IT, Law, Media, Politics, Science

Just like Soap

Perhaps you remember the ’80s series Soap. Someone took a sitcom of the most hilarious settings and took it up a notch; the series was called Soap and people loved it. It did nearly everything right, but over time this bubble went, just like all the other soap bubbles tend to go, and that is OK, they made their mark and we felt fine. There is another bubble. It is not as good. There is the mortgage bubble, the housing bubble (they were not the same), the economy bubble, and all these bubbles come with an aftermath. Now we see the AI bubble, and I predicted this as early as January 29th of this year in ‘And the bubble said ‘Bang’’ (at https://lawlordtobe.com/2025/01/29/and-the-bubble-said-bang/), and my setting is that AI does not yet exist. As I saw it, for the most part it is the construct of lazy salespeople who couldn’t be bothered to do their work, so they created the AI ‘fad’ and hauled it over to fit their needs. Let’s be clear. There is no AI, and when I use the term I know that ‘the best’ I am doing is avoiding a long discussion about how great DML and LLM are, because they are, and it is amazing. And as these settings are correctly used, they will create millions if not billions in revenue. I got the idea to overhaul the Amazon system and let them optionally create online panels that could bank them billions, which I did in ‘Under Conceptual Construction’ (at https://lawlordtobe.com/2025/10/10/under-conceptual-construction/) and ‘Prolonging the idea’ (at https://lawlordtobe.com/2025/10/12/prolonging-the-idea/), which I wrote yesterday (almost 16 hours ago). I also gave light to an amazing lost and found idea which would cater to the needs of airports and bus terminals. I saw that presentation and it was an amazing setting in what I still call NIP (Near Intelligent Parsing), in ‘That one idea’ (at https://lawlordtobe.com/2025/09/26/that-one-idea/). These are mere settings and they could be market changers. This is the proper use of IT, the next setting of automation. But the underlying bubble still exists, I merely don’t feed that beast. So when the BBC gave us ‘‘It’s going to be really bad’: Fears over AI bubble bursting grow in Silicon Valley’ almost 2 days ago (at https://www.bbc.com/news/articles/cz69qy760weo), I saw the sparkly setting of soap bubbles erupt and I thought ‘That did not take long’. My setting was that AI (the real AI as Alan Turing saw it) is not ready yet. The small setting is that at least three parts in IT do not yet exist. There is the true power of quantum computing, and as I see it quantum computers are real, but they are in the early stages of development and are not yet as powerful as future versions should be. Still, as IBM rolls out its second system on the IBM Heron platform, the 156-qubit IBM Quantum Heron, we are getting there; just don’t get your hopes up, not too many can afford that platform. IBM keeps it modest and gives us “The computer, called Starling, is set to launch by 2029. The quantum computer will reside in IBM’s new quantum data center in upstate New York and is expected to perform 20,000 more operations than today’s quantum computers”. I am not holding my credit card to account for that beauty. If at all possible, the only two people on the planet who can afford that setting are Elon Musk and Larry Ellison, and Larry might buy it to see Oracle power at actual quantum speed, and he will do it to see quantum speed come to him in his lifetime. 
The man is 81 after all (so, he is no longer a teenager). If I had that kind of money (250,000 million) I would do it too, just to see what this world has achieved. But the article (the BBC one) gives us ““I know it’s tempting to write the bubble story,” Mr Altman told me as he sat flanked by his top lieutenants. “In fact, there are many parts of AI that I think are kind of bubbly right now.”

In Silicon Valley, the debate over whether AI companies are overvalued has taken on a new urgency. Skeptics are privately – and some now publicly – asking whether the rapid rise in the value of AI tech companies may be, at least in part, the result of what they call “financial engineering”.” And the BBC is not wrong; we had a write-off in January of a trillion dollars and a few days ago another one of 1.5 trillion dollars. I would be willing to call that ‘financial engineering’. And that rapid rise? Call it the greedy need of salespeople getting their audience in a frenzy.

I merely gave a few examples of what DML and LLM could achieve and getting a lost and found department set from weeks into minutes is quite the achievement and I reckon that places like JFK, Heathrow and Dubai Airport would jump at the chance to arrange a better lost and found department and they are not alone but one has to wonder how the market can write off trillions in merely two events. So when we get to

He is not wrong. Consider the next one amounting to a speculated two trillion (or $2,000,000,000,000); when it hits, it could wipe out the retirement savings of nearly everyone for years. So how do you feel about your retirement being written off for decades? When you are 80+ and you have millions upon millions, you are just fine, but that is merely 2-5 people. The other 8,200,000,000 people? The young will be fine, and over 4 billion will be too young to care about their retirement, but the rest? Good luck, I say.

So what will happen to Stargate ($500B) when that bubble goes? I already see it as a failure, as the required power settings will not be able to fuel this, apart from the need for hundreds of validators whose systems require power too. Then we see Microsoft thinking (and telling us) it is the next big thing, all whilst the basic settings aren’t there yet. Did anyone see the need for shallow circuits? Or the applied versions of Leon Lederman? No one realises that he held the foundational setting of AI in quantum computing. You see (as I personally see it), AI cannot really work in binary technology; it requires a trinary setting, a simple stage of True, False and Both. It would allow for trinary settings, because it isn’t always True or False; we learn that the hard way, but in IT we accept it. That setting will come to a head when we get to the real AI part of it, and that is why I (in part) decline the AI coffee being served in all places. And I like my sarcasm really hot (with two raw sugar and full cream milk).
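For readers who want to see what that extra truth value does in practice, here is a minimal sketch of the NULL/TRUE/FALSE/BOTH set mentioned in the first post, using Belnap’s four-valued logic as a stand-in. This is an established classical construction, not the trinary operating system or the particle physics speculated about here, and all names in it are illustrative.

```python
# Belnap-style four-valued logic: each value is a pair
# (has_evidence_for, has_evidence_against).
NULL  = (False, False)   # no interaction / no information
TRUE  = (True,  False)
FALSE = (False, True)
BOTH  = (True,  True)    # the option a plain binary system cannot express

NAMES = {NULL: "NULL", TRUE: "TRUE", FALSE: "FALSE", BOTH: "BOTH"}

def f_and(a, b):
    """Evidence for the conjunction needs both sides; evidence against needs either."""
    return (a[0] and b[0], a[1] or b[1])

def f_or(a, b):
    """Evidence for the disjunction needs either side; evidence against needs both."""
    return (a[0] or b[0], a[1] and b[1])

def f_not(a):
    """Negation swaps evidence for and against; NULL and BOTH are unchanged."""
    return (a[1], a[0])

# A binary system must collapse BOTH into True or False; here it survives:
print(NAMES[f_and(BOTH, TRUE)])   # BOTH
print(NAMES[f_or(BOTH, FALSE)])   # BOTH
print(NAMES[f_and(NULL, BOTH)])   # FALSE (evidence against, none for)
print(NAMES[f_not(BOTH)])         # BOTH
```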

That is the setting we face, and whilst some will call the BBC article ‘doom speak’, I see it for what it is: a reminder that the AI frenzy is sales driven. And whilst people are eager to forget the simplest setting, the real deal of Microsoft and Builder.ai is simply the setting that, at present, we are confronted with IT engineers making the decisions for us. Then there is the number of class actions coming to the world in 2027 and 2028 (optionally as early as 2026), and as some cases were being drawn out even yesterday (see https://authorsguild.org/news/ai-class-action-lawsuits/ for details), you need to realise that this bubble was orchestrated, and as such I like the term ‘financial engineering’. So be good and use the NIP setting properly and feel free to be creative; I was, and gave Amazon an idea that could bank it billions. But not all ideas are golden and I am willing to accept that I am not the carrier of golden ideas; the fact that someone saw the Lost and Found setting is proof of that.

Have a great day, I am 30 minutes from breakfast now, so off I go to brekkyville.


Filed under Finance, IT, Media, Science

To all these sales losers

Yes, it sounds a little vindictive, and that is where I am. So to get to this, we need to assess a few things, and as always I do assess where I am. To set that stage, we need to see the elements. As early as February 8th 2021 I stated “AI does not exist”. I did so in ‘Setting sun of reality’ (at https://lawlordtobe.com/2021/02/08/setting-sun-of-reality/).

I have done so several times since, and as always I got the ‘feedback’ that I was stupid and that I didn’t understand things. I let it slide over and over again, and today the BBC handed me my early Christmas present. They did so in ‘Powerful quantum computers in years not decades, says Microsoft’ (at https://www.bbc.com/news/articles/cj3e3252gj8o), where we find “But experts have told the BBC more data is needed before the significance of the new research – and its effect on quantum computing – can be fully assessed. Jensen Huang – boss of the leading chip firm, Nvidia – said in January he believed “very useful” quantum computing would come in 20 years.” In 20 years? I can happily report I will be dead by then. Yet the underlying setting is also true. If actual AI depends on a quantum chip and fully explored shallow circuit technology, we can therefore presume that true AI is at least 20 years away. I believe that another setting is needed, but that is neither here nor there at this point. 

Don’t get me wrong. What we have now is great, even of a phenomenal nature, but it is not AI. Deeper Machine Learning is becoming more and more groundbreaking, and the setting together with LLM is amazing; it just isn’t AI. The Microsoft framing of ‘in years’ fits nicely with that. In an age where hype settings are required, the need for an annual redefinition of something it isn’t will upset a massive number of sales cycles. They will suddenly need to rely on whatever PR is running, with marketing setting the tone of what comes next. A new setting for sales, I reckon.

I have some questions on the quote “Microsoft says this timetable can now be sped up because of the “transformative” progress it has made in developing the new chip involving a “topological conductor”, based on a new material it has produced.” My question comes from the presumption that this is untested and unverified. I am not debating that this is possible, but if it were, the quote would include something along the lines of “the data we have now confirms the forward strides we are making”; as such the statement is to some degree ‘wishful thinking’, it isn’t set in verifiable results yet. It seems that Travis Humble agrees with me, as we also get “Travis Humble, director of the Quantum Science Center of Oak Ridge National Laboratory in the US, said he agreed Microsoft would now be able to deliver prototypes faster – but warned there remained work to do.” But the undercurrent of this is set to a timeline that casts doubt on the setting of Stargate and its $500 billion investment. Consider that the investment is coming over the next 4 years, all whilst ‘interesting’ quantum technology is 20 years away. So what will they do? Invest it again? Seems like a waste of 500 billion. In that case, can I have 15 million of that pie? I need my pension investment in Toronto (apartment included). That is the larger setting of wasteful investment. Does Elon Musk know that there is 500 billion in funds being nearly wasted? 

And the simplest setting (for me) is also overlooked. It is seen in the quote “meaningful, industrial-scale problems in years, not decades”, which implies that there is no real AI at present. And my ego personally sees this as “Game, set and match for Lawrence”, as such all these sales dodos with their “You do not know what you are talking about” will suddenly avoid gazes and avoid me whilst they plan their next snappy comeback. In the meantime I will leisurely relax whilst I contemplate this victory. It is the second step in my blog; the timeline shows what I wrote and when I wrote it. It could have gone the other way, but my degrees on the technology matter were clearly on my side.

And “Microsoft is approaching the problem differently to most of its rivals.”? Well, that is the benefit of taking another, optionally innovative, step in any technology. Microsoft cannot be wrong all the time, and here they seemingly have a winner and that’s fair, they optionally get to win this time. 

In the setting of ego I start the day (at 04:30) decently happy. Time I had a good day too. As such there is nothing to do but to wait another 240 minutes to have breakfast. Better have a walk before then. Have a good or even better, a great day today.


Filed under Finance, IT, Media, Politics

Is it a public service

There is a saying (that some adhere to). How often can you slap a big-tech company around for it to be regarded as personal pleasure instead of a public service? There is an answer, but I am not the proper source of that (and I partially disagree). Slapping Microsoft around tends to be a public service no matter how you slice it. Perhaps some people at 92, NE 36th St, Redmond, WA 98052 might start seeing this as their moment to clean up that soiled behemoth. Anyway this all started actually yesterday. I saw an article and I put it next to me. I had other ideas (like actual new IP ideas), but the article was still there this morning and I gave it another look.

The article (at https://www.computerweekly.com/news/366615892/Microsoft-UAE-power-deal-at-centre-of-US-plan-for-AI-supremacy) gives us ‘Microsoft UAE power deal at centre of US plan for AI supremacy’ and it was hilarious for two reasons. The first is one that academics can agree on: there is not (yet) such a thing as AI (Artificial Intelligence), and personally I am smirking at the idea that Microsoft can actually spell the word correctly (howls of derisive laughter by silly old me). And the start of the article gives us “Microsoft has struck an artificial intelligence (AI) energy deal with United Arab Emirates (UAE) oil giant ADNOC after a year of extraordinary diplomacy in which it was the vehicle for a US strategy to prevent a Chinese military tech grab in the Gulf region.” In this I am having the grinning setting that this is one way to give oil supremacy to Aramco, and that is merely the beginning of it. The second was the line “a US strategy to prevent a Chinese military tech grab in the Gulf region”, and it is my insight that this is a ticking clock. One tick, one tock leading to one mishap, and Microsoft pretty much gives the store to China. And with that, Aramco laughingly watches from the sidelines. There is no ‘if’ in question. This becomes a mere shifting timeline, and with every day that timeline becomes a lot more worrying. Now, the first question you should ask is “Could he be wrong?” And the answer is yes, I could be wrong. However, the past settings of Microsoft show me to be correct. And in all this, the funny part to see is that with the absence of AI, the line “a plan to become an AI superpower” becomes folly (at the very least). There are all kinds of spins out there and most are ludicrous. But several sources state “There are several reasons why General AI is not yet a reality. However, there are various theories as to what why: The required processing power doesn’t exist yet. As soon as we have more powerful machines (or quantum computing), our current algorithms will help us create a General AI”, or words to that extent. Marketing the spin of AI does not make it so. And quantum computing is merely the start. Then we get the shallow circuit setting and, as I personally call it, the trinary operating system. You see, all computing is binary and the start of trinary is there. Some Dutch scientist was able to prove the trinary particle (the Ypsilon particle). You see, that, set in a real computing environment, is the goal (for some). The trinary system creates the setting of an achievable real AI. The trinary system has four phases: NULL, TRUE, FALSE and BOTH. It is the BOTH part that binary systems cannot do yet, as such any deeper machine learning system is flawed by human interference (aka programming and data errors because of it). This is the timeline moment where we see the folly of Microsoft (et al). 

So then we get to “It also entrenches Microsoft’s place at the crux of the environmental crisis, pledging to help one of the world’s largest oil firms use AI to become a net-zero producer of carbon emissions, while getting help in return in building renewable energy sources to feed the unprecedented demand that the data-centres powering its AI services have for electricity.” OK, not much to say against that. This is a business opportunity nicely worded by Microsoft. These are realistic goals that Deeper Machine Learning could reach, but that pesky setting takes the novel approach where people (programmers) need to make calls, and a call made in the name of AI still doesn’t make it AI. As such, when that data error is found, the learning algorithms will need to be retrained. How much time lag does that give? And make no mistake, ADNOC will not tolerate that level of errors. It amounts to billions a day and the oil business is cut-throat. So when I state that Aramco is sitting on the sidelines howling, I was not kidding. That is how I see this develop. Then we get “The same paradox was played out at the COP 28 climate conference in Dubai last December, while Microsoft prepared to ink a $1.5bn investment in UAE state-owned AI and data-centre conglomerate G42, where Sultan Ahmed Al Jaber, ADNOC oil chief, chaired a global agreement to ditch fossil fuels.” This is harder to oppose. It is pretty much an agreement between two parties. However, I wonder how the responsibilities of Microsoft are voiced, because it will hang on that, and perhaps Microsoft slipped one by ADNOC, but that is neither here nor there. You don’t become chief of ADNOC without protecting that company, so without the papers I cannot state this will get Microsoft in hot water. However, I am certain that any appeal to ‘miscommunication’ will hand the stables, the farm and the livestock (aka oil) right into the hands of China. You see, people will focus on the $1.5 billion investment by Microsoft, yet I wonder how large (or how long) the errors go unspotted. That would be an error that could result in billions a day lost, and that is something that Microsoft is unlikely to survive. Then there is the third player. You see, America angered China with the steps it has taken in the past. And I have no doubt that China will be keeping an eye on all this, and whilst some might want to ‘hide’ mishaps, China will be at the forefront of illuminating these mistakes. And these mistakes will rear their ugly heads. They always do, and the track record of Microsoft is not that great (especially when millions scrutinise your acts). As such, this is like standing on a hill where the sand is kept stable on a blob of oil; until someone walks on it, it merely seems stable, and the person walking there becomes the instability of it all. Not the most eloquent expression, but I think it works, and Microsoft has been treading too heavily already, and now China feels aggrieved (not sure it is a valid feeling), but for China it matters and getting Microsoft to fail will be its only target. Well, that is it all from me, and looking at how this will go, I have a nice amount of popcorn ready to watch two players slug it out. In the meantime, Sultan Ahmed Al Jaber has merely one thought: “Did I deserve what is about to unfold?” And I can’t answer that, because it depends on the papers he co-signed and I never saw these papers, so I cannot give an honest response to that.

Let’s see how this fight unfolds on the media, enjoy your day wherever you are (it is still Friday west of Ireland).


Filed under Finance, IT, Politics, Science

It was never rocket science

Yup, that is the gist of it. And it seems that people are starting to wake up. You see, the biggest issue I have had with any mention of AI is that it doesn’t (yet) exist. People can shout AI on every corner, but soon the realisation that they were wrong all along will come in, and it will hurt them, it will hurt them badly. And this is merely a sideline to the issue. The issue is Microsoft, so let’s get through some articles.

1. Microsoft says cyber-attack triggered latest outage
The first one is (at https://www.bbc.com/news/articles/c903e793w74o) where we see “It comes less than two weeks after a major global outage left around 8.5 million computers using Microsoft systems inaccessible, impacting healthcare and travel, after a flawed software update by cybersecurity firm CrowdStrike. While the initial trigger event was a Distributed Denial-of-Service (DDoS) attack… initial investigations suggest that an error in the implementation of our defences amplified the impact of the attack rather than mitigating it,” said an update on the website of the Microsoft Azure cloud computing platform.” The easiest way of explaining this is to compare Azure to a ball. A football has (usually) 12 regular pentagons and 20 regular hexagons. They are stitched together. Now, under normal conditions this is fine. However, software is not any given shape, implying that a lot more stitches are required. Now consider that Microsoft 365 is used by over a million corporations, and that a lot of them do not use the same configuration. This implies that we have thousands of differently stitched balls, and the stitches are where it can go wrong. This is where we see the proverbial “the implementation of our defences amplified the impact of the attack rather than mitigating it”. Microsoft has been so driven by using it all that they merely advance the risk. And it doesn’t end here. CrowdStrike is another example. We see the news and the fake lone person claiming responsibility for it. Yet the reality is that there is a lot more wrong than anyone is considering. These two events pretty much prove that Microsoft has policy and procedure flaws. It is easy to blame Microsoft, but the reality is that we see spin and the trust in Microsoft is pretty much gone. People say “Microsoft’s cloud revenue was 39.3% higher”; yes, this is the case, and considering that Amazon was originally a ‘bookshop’ that went up against the larger techies like IBM and Microsoft and still got 31% of the global market share, not bad for a bookshop. And the equation gets worse for Microsoft: these two events could cost them up to 10% market share. In which direction that 10% goes is another matter. AWS is not alone here. 

I was serious about not letting Microsoft near my IP. I had hoped that Amazon would take it (they have Amazon Luna), but it seems that Andy Jassy is not hungry for an additional 5 billion annually (in the first stage). 

And as Microsoft adds more and more to their arsenal, these problems will become more frequent and inflict damage on more of their customers. Do I have evidence? No, but it wasn’t hard, and my example might give you the consideration to ponder where you could/should go next. 

2. Microsoft Earnings: Stock Tanks As AI Business Growth Worse Than Expected
In the second story we see (at https://www.forbes.com/sites/dereksaul/2024/07/30/microsoft-earnings-stock-tanks-as-ai-business-growth-worse-than-expected/) that Forbes is giving us “shares of Microsoft cratered about 7% following the earnings announcement, already nursing a more than 8% decline over the last three weeks” with the added “Microsoft’s crucial AI businesses was worse than expected, as its 29% growth in its Azure cloud computing unit fell short of projections of 31%, and sales in its AI-heavy intelligent cloud division was $28.5 billion, below estimates of $28.7 billion”. As stated by me (as well as plenty of others), there is no AI. You see, AI would give the program thinking skills, and they do not have any. They kind of speculate, and they have lots of scenarios to give you the conditional feeling that they are talking “in your street”, but that is not the case. For this simple illustration we get Wired (at https://www.wired.com/story/microsoft-ai-copilot-chatbot-election-conspiracy/) giving us ‘Microsoft’s AI Chatbot Replies to Election Questions With Conspiracies, Fake Scandals, and Lies’. So how does this work? You see, the program (LLM) looks at what ‘we’ search for, yet in this the setting is smudged by conspiracy theorists, troll farms and influencers. The first two push the models out of sync. Wired gives us “Research shared exclusively with WIRED shows that Copilot, Microsoft’s AI chatbot, often responds to questions about elections with lies and conspiracy theories.” Now consider that this is pushed onto all the other systems. Then we are treated to “Microsoft’s AI chatbot is responding with out-of-date or incorrect information”, so not only is the data wrong, it is out of date; as I see it, what they call ‘training data’ is incorrect, out of date and unverified. How AI is that? An actual real AI is set on a quantum computer (IBM has that, although in its infancy), a more robust version of shallow circuits (not sure if we are there yet), and is driven not by binary systems but framed on an Ypsilon particle system, which was proven by a Dutch physicist around 2020 (I forgot the name). This particle has another option. We currently have NULL, Zero and One. The Ypsilon particle has NULL, Zero, One and BOTH. A setting that changes everything.

But the implementation into servers is to be expected around 2037 (a speculation by me); then we get to the thinking programs and an actual AI. So when we see ‘AI’, we need to see that it is a program that can course through data and give you the most likely outcome. I will admit that for a lot of people it will fit, but not for all, and there we get the problem. You see, Microsoft will blame all sources and all kinds of people, but in the end it will be up to the programmer to show their algorithm is correct, and I am telling you now that it comes down to unverified data. How does that come over to you? 

When you consider that Wired also gave us “it listed numerous GOP candidates who have already pulled out of the race.”, the issue of how out of date the data is becomes clear. We see all these clever options that others give us, but when some LLM (labelled AI) is not updated and unreliable, how secure does your position remain when you base decision-making streams on the wrong data? And that is merely a sales track. 

The last teaspoon is given to us by The Guardian (at https://www.theguardian.com/technology/2024/mar/06/microsoft-ai-explicit-image-safety), which gave us on March 7th 2024:

3. Microsoft ignored safety problems with AI image generator, engineer complains
So when you consider the previous parts (especially CrowdStrike) with “Shane Jones said he warned management about the lack of safeguards several times, but it didn’t result in any action”, Microsoft will state that this is another issue. But I spoke about wrong data, out-of-date data and unverified data. And now we see that the lack of safeguards and inaction would make things worse, and a lot faster than you think. You see, as long as there is no real AI, all data needs to be verified, and that does not seem to be the case in too many settings. I spoke about policy issues and procedural issues. Well, here we get the gist, “it didn’t result in any action”, and we keep on seeing issues with Microsoft. So how many times will you face this? And that is before people realise that their IP is on Azure servers. So how many procedural flaws will your research be driven into until it is all on a Russian or Chinese or North Korean enabled server (most likely by Russia or China, which is a speculation by me)?

As such, it was never rocket science, look at any corporation and in their divisions there will always be one person who thinks of number one (himself) and in that setting how safe are you? 

There is a reason that I do not want Microsoft near my IP. I can only hope that someone wakes up and gives me a nice retirement present ($30M post taxation would be nice).

Enjoy the day.


Filed under IT, Science

Altering Image

This happens; sometimes it is within oneself that change is pushed, in other cases it is outside information or interference. In my case it is outside information. Now, let’s be clear. This is based on personal feelings; apart from the article, not a lot is set in papers. But it is also in part my experience with data, and there is a hidden flaw. There is a lot of media that I do not trust and I have always been clear about that. So you might have issues with this article.

It all started when I saw yesterday’s article called ‘‘Risks posed by AI are real’: EU moves to beat the algorithms that ruin lives’ (at https://www.theguardian.com/technology/2022/aug/07/ai-eu-moves-to-beat-the-algorithms-that-ruin-lives). There we see: “David Heinemeier Hansson, a high-profile tech entrepreneur, lashed out at Apple’s newly launched credit card, calling it “sexist” for offering his wife a credit limit 20 times lower than his own.” In this, my first question becomes ‘Based on what data?’ You see, Apple is (in part) greed driven; as such, if she has a credit history and a good credit score, she would get the same credit. But the article gives us nothing of that, it goes quickly towards “artificial intelligence – now widely used to make lending decisions – was to blame. “It does not matter what the intent of individual Apple reps are, it matters what THE ALGORITHM they’ve placed their complete faith in does. And what it does is discriminate. This is fucked up.”” You see, the very first issue is that AI does not (yet) exist. We might see all the people scream AI, but there is no such thing as AI, not yet. There is machine learning, there is deeper machine learning and they are AWESOME! But the algorithm is not AI, it is a human equation, made by people, supported by predictive analytics (another program in place), and that too is made by people. Let’s be clear, this predictive analytics can be as good as it is, but it relies on the data it has access to. To give a simple example: in that same setting, in a place like Saudi Arabia, Scandinavians would be discriminated against as well, no matter what gender. The reason? The Saudi system will not have the data on Scandinavians compared to Saudis requesting the same options. It all requires data and that too is under scrutiny; especially in the era 1998-2015, too much data was missing on gender, race, religion and a few other matters. You might state that this is unfair, but remember, it comes from programs made by people addressing the needs of bosses in Fintech. So a lot will not add up, and whilst everyone screams AI, these bosses laugh, because there is no AI. And the sentence “While Apple and its underwriters Goldman Sachs were ultimately cleared by US regulators of violating fair lending rules last year, it rekindled a wider debate around AI use across public and private industries” does not help. What legal setting was in play? What was submitted to the court? What decided on “violating fair lending rules last year”? No one has any clear answers and they are not addressed in this article either. So when we get to “Part of the problem is that most AI models can only learn from historical data they have been fed, meaning they will learn which kind of customer has previously been lent to and which customers have been marked as unreliable. “There is a danger that they will be biased in terms of what a ‘good’ borrower looks like,” Kocianski said. “Notably, gender and ethnicity are often found to play a part in the AI’s decision-making processes based on the data it has been taught on: factors that are in no way relevant to a person’s ability to repay a loan.”” we have two defining problems. In the first, there is no AI. In the second, “AI models can only learn from historical data they have been fed”, I believe there is a much bigger problem. There is a stage of predictive analytics, and there is a setting of (deeper) machine learning, and they both need data; that part is correct, no data, no predictions. But how did I get there?
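To make the ‘no data, no predictions’ point concrete, here is a minimal, purely illustrative sketch of a lending rule learned only from historical approvals. The applicant groups, counts and threshold are invented for the example and are not taken from the Apple/Goldman case.

```python
from collections import defaultdict

# Invented historical lending records: (applicant_group, was_approved)
history = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

# "Learn" an approval rate per group from the history alone.
approvals = defaultdict(lambda: [0, 0])     # group -> [approved, total]
for group, approved in history:
    approvals[group][0] += int(approved)
    approvals[group][1] += 1

def decide(group, threshold=0.5):
    """Approve only if the historical approval rate clears the threshold.

    A group that never appears in the history (no data) is silently
    declined, which is the bias-by-omission problem described above.
    """
    approved, total = approvals[group]
    if total == 0:
        return False                        # no data, no prediction, no loan
    return approved / total >= threshold

print(decide("group_a"))        # True  (2 of 3 approved historically)
print(decide("group_b"))        # False (1 of 3 approved historically)
print(decide("unseen_group"))   # False, purely because the data is missing
```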

That is seen in the image above. I did not make it, I found it, and it shows a lot more clearly what is in play. In most Fintech cases it is all about the Sage (funny moment): predictive inference, explanatory inference, and decision making. A lot of it is covered in machine learning, but it goes deeper. The black elements as well as control and manipulation (blue) are connected. You see, an actual AI can combine predictive analytics and extrapolation, and do that for each category (race, gender, religion), all elements that make the setting, but data is still a part of that trajectory, at least until shallow circuits are more perfect than they are now (due to the Ypsilon particle, I believe). You see, a Dutch physicist found the Ypsilon particle (if I word this correctly); it changes our binary system into something more. These particles can be nought, zero, one or both, and that setting is not ready. It allows the interactions a much better process that will lead to an actual AI; when the IBM quantum systems get these two parts in order they become true quantum behemoths, and they are on track, but it is a decade away. It does not hurt to set a larger AI setting sooner rather than too late, but at present it is founded on a lot of faulty assumptions. And it might be me, but look around at all these people throwing AI around. What is actual AI? And perhaps it is also me; the image I showed you is optionally inaccurate and lacks certain parts, I accept that, but it drives me insane when we see more and more AI talk whilst it does not exist. I saw one decent example: “For example, to master a relatively simple computer game, which could take an average person 15 minutes to learn, AI systems need up to 924 hours. As for adaptability, if just one rule is altered, the AI system has to learn the entire game from scratch”. This time is not learning, it is basically staging EVERY MOVE in that game. Like learning chess: we learn the rules, the so-called AI will learn all 10^111 to 10^123 positions (including illegal moves) in chess. A computer can remember them all, but if one move was incorrectly programmed (like the knight), the program needs to relearn all the moves from the start. When the Ypsilon particle and shallow circuits are added, the equation changes a lot. But that time is not now, not for at least a decade (speculated time). So in all this the AI gets blamed for predictive analytics and machine learning, and that is where the problem starts; the equation was never correct or fair and the human element in all this is ‘ignored’ because we see the label AI, but the programmer is part of the problem and that is a larger setting than we realise. 

Merely my view on the setting.

 


Filed under Finance, IT, Media, Science

Two weird moments

This just happened; the second weird moment came onto me. I got woken up from it as someone called me, but it is still shaking me. It started the night earlier; I do not know what set it off and I did not realise why it happened, so I pushed it away, yet with what just happened, the previous event also plays and now I need to find the words.

Day 1
In day one, I faced some initiation. It was all about a mine-cart (like in Indiana Jones 2) and I would be taken through a tunnel, pulled on one side, past a corridor which was made with the use of pallets and smeared with clay and dirt, covered in some writing, to the other side; it was about trusting the boss. Yet the boss was setting the stage so the person in the cart would be fed to a massively large snake (not an anaconda), as he believed that the snake was related to a snake god (yes, people are that crazy, just look at anti-vaxxers if you doubt me). So, unaware, I went into the cart. The journey would be around 300 metres and there was a bend, but no track change was possible. As my journey started I saw the writing, the symbolism, and I also seemingly saw the imagery change, and as the journey took me past the bend, the massive python-like snake attacked and took its non-venomous teeth deep into my left shoulder. The pain was sharp, but the fear of seeing the snake just over the left shoulder shook me to my core. I woke up and had to change the sheets, they were drenched in sweat. It was only 6 degrees, but I was sweating like it was a 40 degree sunny day. I shrugged it away, but oddly enough my shoulder was still hurting this morning, so I actually had to take a painkiller. 

Day 2
Only hours away, it was time for another team building exercise. This time it was against 3 fellow employees, on a track which we had to do wearing our Virtual Reality goggles. The rules were simple: never take off the goggles, that would be an automatic fail, and the winners, the two highest, would be in line for management promotion. So as we started at the bottom we had to run up, we had to follow the path, and the tunnels and stairs were where the normal stairs would have been, and over the track we were filmed. We saw the tracks change from down to up, to up to down, and as we followed the course the land changed to meadow, fog-filled meadow with lights. We were on a track that would take almost 30 minutes, and there I was, exhausted, in position two. The person in front of me was on her knees and it was the last part. I looked over the ridge and the building was below me, close to 2,000 feet below me; the note was clear, “fall from here, but do NOT jump”. I had given up, I would rather be dead than lose, and I rolled over the ledge falling to my death. I no longer cared, and that is when I felt a rush and a slowing fall. It was the virtual reality; I fell into a net from 4 storeys high, not thousands of feet. I saw the boss walking up to me and then the phone rang. That was it, or was it?

It is a little later and my mind is working things out. You see, Augmented Reality and Virtual Reality can dupe the mind; as long as it can acclimatise to the new settings it can be fooled, and it can be fooled a lot more easily than when you are alert in the normal world. So what happens when this becomes an interrogation and torture device? You see, we tend to fear the extremists and their suicide approach. But in Virtual Reality they are a lot more easily pliable. Their conservative values can fall under VR faster than in the normal world, a lot faster, and I think that my mind is telling me that this could optionally make for a nice movie. Consider movies like Truth or Dare, and Nerve. We have similar settings where we entice the audience to accept what is there, yet in VR it is all fake and the mind cannot completely deal with it, and as long as no real boundaries are broken, the mind adjusts. So what happens when that becomes a case? It is seemingly small, but it is in the core of us, and there the small change flips an entirely new track, one we have never seen before, and the brain changes from decider to spectator and there the intelligence required is up for the taking. Now, 2-3 years ago it would be some sloppy wannabe kiss, yet with the evolutions in VR, quantum computers (IBM) and deeper learning it becomes a new ball game. We can get the suicide bomber to a stage where he feels he has pressed the button, but it is an augmented VR button, and after that whatever he sees is fake, but in that stage he will divulge EVERYTHING, he accomplished his goals. And now we get the rundown on what we needed to know, and this has the option to be one hell of a rollercoaster movie. Even in my sleep my creativity continues, and now that this is written, I can look at some information that ABC has for us all; it is all about doubters, but that does not matter.


Filed under IT, Science

Insomnia non habet legem

Yup, could not sleep and it is 03:00. So what happens? My mind thinks up a new game, actually I came up with two games. One came to me via Ryan Reynolds (bloody bastard). I was watching his Gaelic Wrexham advertisement and things started to click, it is a game that is based on two games, two existing games mind you.

Consider Draughts (checkers on a chess board) and Chess, but both played on the same board at the same time (hence a digital game would be essential). Chess remains the same, all the pieces move in the same way, no difference. It is the draughts game that alters. Consider you are playing checkers and you hit an opponent’s piece. There is a difference now: the move remains the same, but if that piece is supported by a chess piece of the SAME colour, then the piece is NOT removed (see the sketch below for one reading of that rule). It opens up a new way of strategic thinking. In opposition, it forces you to place your chess pieces in a different stratagem. Do you support your draught pieces or forfeit the location? I wonder if the game could be playable in that way, and when you ‘king’ a checkers piece the setting becomes more complex; in that fact, hitting pieces that are protected might set you up for the fall, you might end up losing your ‘king’ a lot faster. 
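Here is a minimal sketch of that capture rule, under my own reading of it: a hit checker survives if any adjacent square holds a chess piece of the same colour. The board layout, piece encoding and adjacency choice are all assumptions for illustration, not a finished rule set.

```python
# Board: dict mapping (row, col) -> (family, colour), e.g. ("chess", "white")
# or ("checker", "black"). Squares run 0..7 in both directions.

def is_protected(board, square):
    """A checker on `square` is protected if a chess piece of the same
    colour sits on any of the eight neighbouring squares (assumed rule)."""
    family, colour = board[square]
    if family != "checker":
        return False
    row, col = square
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            neighbour = board.get((row + dr, col + dc))
            if neighbour == ("chess", colour):
                return True
    return False

def resolve_jump(board, target):
    """Apply a draughts jump over `target`: the jump still happens, but the
    hit piece is only removed from the board when it is unprotected."""
    if not is_protected(board, target):
        del board[target]        # normal capture
    return board

# Tiny example: a black checker guarded by a black chess piece survives the jump.
board = {(3, 3): ("checker", "black"), (2, 3): ("chess", "black")}
resolve_jump(board, (3, 3))
print((3, 3) in board)           # True: the protected checker stays
```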

The second game is based also on an existing game. The original was a game on the CBM-64, it was called something like kinetic puzzles. It was a puzzle of a videoclip. So the image of the puzzle would always be in motion, as such the puzzle was more challenging. I liked that game and until today I had pretty much forgotten about it. Yet my mind wanted something more and now we go off the deep end.

As I was contemplating stories (some time ago) I came up with a quantum puzzle; the stage was a little bit like an episode of Fringe. We see a room and a person appearing out of sync, moving irregularly all over the place, like slices of a video clip. Yet if you analyse the images, you will see a different timeline, something that shows (read: indicates) what the sequence of the motion is, and when we see the images in time and side by side, the image, or the person, comes to represent a location. Now, if we see that same person in that location, the things we see will seem to make sense, they are all connected in some way (the way is the final part of the puzzle). Yet there is the crunch: we would need Google Maps to be able to translate the initial number (like a 14 digit map reference) to represent a location, any location in the world, and that gives us the puzzle challenge, to set a puzzle not to a 2D image, but to a 3D location, and that place becomes the actual puzzle. I am still working on a few angles, but that is what my mind came up with. New ways to invoke a different way of viewing things. We forgot to take the stage and change the stage of application and distribution, giving us a new way to solve things. I see it as an essential step in the evolution of our minds; if we do not, we are lost, we need to push forward and offer more to our brain, it can do so much more, and if we get stuck in the setting of reinventing the wheel, we remain mere wheel dealers. I think it is time to tell the box that it has become too much of a limitation. 
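As an aside, a 14 digit reference really can carry a location without any mapping service at all; here is a minimal sketch that packs latitude and longitude into one number at roughly 10-metre resolution. The scaling factors are my own choice for the example, not anything Google Maps actually uses.

```python
SCALE = 10_000            # 1e-4 degrees, roughly 11 metres at the equator

def encode(lat, lon):
    """Pack latitude/longitude into a single integer of up to 14 digits."""
    lat_part = round((lat + 90) * SCALE)    # 0 .. 1,800,000  (7 digits)
    lon_part = round((lon + 180) * SCALE)   # 0 .. 3,600,000  (7 digits)
    return lat_part * 10_000_000 + lon_part

def decode(ref):
    """Unpack the 14 digit reference back into latitude/longitude."""
    lat_part, lon_part = divmod(ref, 10_000_000)
    return lat_part / SCALE - 90, lon_part / SCALE - 180

ref = encode(52.3676, 4.9041)     # somewhere in Amsterdam
print(ref)                        # 14236761849041
print(decode(ref))                # approximately (52.3676, 4.9041)
```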

It reminds me of a thought I had, or was told, when I was young (like half a century ago). The shadow of an object is a representation in one dimension lower: the shadow of a 2D object is a line, the shadow of a 3D object is a 2D object, and so on. So in that light, how do we see the shadow of a 5D object? Perhaps that view is too limiting to use, but if we are to reflect space as a shadow, what will we get? Computers can give us that represented image, and as such we can use them to evolve our mind. I know it is far reaching, and perhaps it is overreaching as well, yet I believe that if we overreach we might be able to see what is just beyond our reach. Am I nuts? Perhaps I am, but the creative mind seeks an outlet, through gaming, through books, through art, through stories, and in that instant we might touch on something we were not able to touch before. If re-engineering is merely the setting to redesign something, it is not always to adapt to wider application; sometimes it is to start a new direction of what was never contemplated before. In my mind, what do a game, a nuclear meltdown and a movie have in common? They are merely all the contemplation of stories; the question becomes, which of these stories can become a reality? More important, should they become a reality? It was Spielberg in Jurassic Park who gave us the question of whether we can versus whether we should attempt something. In the business world the only limitation is profit, cash is king, money is all. Yet we seemingly forget that ‘should or should not’ might not be a question of profit, but a setting of ethics. In that same setting I reused an image of a report yesterday that states that 50% of all pollution comes from 147 facilities in the world; the EU reported on it and the media remains seemingly blind. Some blame the rich and their jets, yet I did not find any newspaper or media piece that takes a long hard look at these 147 facilities. Why is that? Is it too much about profit? It links, because if we can learn to think differently, in different paths and multiple stages, perhaps something could be done about these 147 facilities. It is merely a thought. 

If IBM completes its quantum computer to the degree we need it to be, we will need practical applications in quantum settings, and at present there is a workforce of ZERO that can get us there. As such we need a next generation that thinks differently, thinks on different levels, and what I stated in the ’80s now applies: gaming gets us there, it took some 30 years to get to that level of thinking. If we do not prepare the next generation, the ones that do will end up ruling all others. If you doubt that, consider the 5G stage where America is blindly accusing and not providing evidence; they are losing the race and they are scared. So what happens when Asia and Europe rule the quantum computing realm? As I see it, the US and its Trumpism is setting itself up for a rather large fall, and if he gets enough votes the economy will change, it will change by a lot, and in that, should 5G and quantum computing fall outside of the US workforce, it will be game over for them. So they had better learn that new shapes of games need to be taught to the next generation; it is all we might have left. And yes, this sounds negative, but wonder for yourself if more of the same will solve whatever you see is wrong around you, or does it require a different form of thinking?


Filed under Gaming, IT, Politics, Science

Is it real?

Yes, that is the question we all ask at times; in my case it is something my mind is working out, or at least trying to work out. The idea that my mind is forming is “Is it the image of a vision, or is it a vision of an image”; one is highly useful, the other a little less so. The mind is using all kinds of ideas to collaborate in this, as such, I wonder what is. The first is a jigsaw. Consider a jigsaw: even as the image is different, the pieces are often far less different; one could argue that hundreds of jigsaws have interchangeable pieces, we merely do not consider them as the image is different, and for the most part, how many jigsaws have you ever owned? With this in the back of the mind, what happens when we have data snippets, a data template with several connectors, the specific id of the data, and then the connector which indicates where the data comes from, both with date and time stamps? But like any jigsaw, what if we have hundreds of jigsaws and the pieces are interchangeable? What if the data system is a loom that holds all the data, but the loom reflects on the image of the tapestry? What happens when we see all the looms, all the tapestries, and we identify the fibres as the individual users? What happens when we create new tapestries that are founded on the users? We think it is meaningless and useless, but is it? What if data centres have the ability to make new frameworks, to stage a setting that identifies the user and their actions? We talk about doing this, we claim to make such efforts, but are we? You see, as IBM completed its first quantum computer, and it now has a grasp on shallow circuits, the stage comes closer to having an actual AI in play, not the one that IT marketing claims to have and salespeople state is in play, but an actual AI that can look into the matter. As this comes into play we will need a new foundation of data and a new setting to store and retrieve data. Everything that exists now is done for the convenience of revenue, a hierarchic system decades old, even if the carriers of such systems are in denial. The thinking requires us to thwart their silliness and think of the data of tomorrow, because the data of today will not suffice; no matter how blue Microsoft Italy claims it is, it just won’t do. We need tomorrow’s thinking cap on and we need to start considering that an actual new data system requires us to go back to square one and throw out all we have, it is the only way.
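A minimal sketch of the ‘data snippet’ shape described above, with an id, a core, connectors and time stamps; every field name here is my own placeholder, not an existing data model.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Connector:
    """Points at where a snippet came from, or what it links to."""
    source: str                     # e.g. another snippet id or a system name
    linked_at: datetime

@dataclass
class DataSnippet:
    """One 'jigsaw piece' of data: an id, a payload core and its connectors."""
    snippet_id: str
    payload: dict                   # the core of the snippet
    created_at: datetime
    connectors: list[Connector] = field(default_factory=list)

    def link(self, source: str) -> None:
        """Attach a new connector, stamped with the time of linking."""
        self.connectors.append(Connector(source, datetime.now()))

# Two interchangeable pieces that end up woven into the same 'tapestry'
a = DataSnippet("snippet-001", {"user": "alice", "action": "search"}, datetime.now())
b = DataSnippet("snippet-002", {"user": "alice", "action": "purchase"}, datetime.now())
b.link("snippet-001")               # b records where it connects from
print(len(b.connectors))            # 1
```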

In this, we need to see data as blood cells: billions of individual snippets of data, with a shell, connectors and a core. All that data in veins (computers), and it needs to be able to move from place to place, to be used by the body where the specific need is. And if biotech goes to places we have not considered, data will move too, and for now the systems are not ready, they are nowhere near ready, and as such my mind was spinning in silence as it considered a new data setup. A stage we will all need to address in the next 3-5 years, and if the energy stage evolves we need to set a different path on a few levels, and there we will need a new data setup as well. It is merely part of a larger system and data is at the centre of that. As such, if we want smaller systems, some might listen to Microsoft and their blue (Azure) system, but a smurf like that will only serve what Microsoft wants it to smurf; we need to look beyond that, beyond what makers consider of use, and consider what the user actually needs.

Consider an app, a really useful app when you are in real estate: Trulia. It is great for all the right reasons, but it made connections, such as it has. So what happens when the user of this app wants another view around the apartment or house, one that is not defined by Yelp? What happens when we want another voice? For now we need to take a collection of steps and hope it shows results, but in the new setting, with the new snippets, there is a larger option to see a loom of connections in that location, around the place we investigate, and more importantly, there is a lot more than Trulia ever envisioned. Why? Because it was not their mission statement to look at sports bars, grocery stores and so on; they rely on the Yelp link, while some want a local link, some want the local link that the local newspapers give. That level of freedom requires new thinking on data, it requires a completely new form of data model, and in 5G and later in 6G it will be everything, because in 4G it was ‘Wherever I am’, in 5G it becomes ‘Whenever I want it’, and the user always wants it now. In that place some blue data system by laundry detergent Soft with Micro just does not cut it. It needs actual next-gen data and such a system is not here yet. So if I speculate on 6G (pure speculation, mind you), it will become ‘However I need it’, and when you consider that, the data systems of today and those claiming to have the data system of tomorrow are nowhere near ready, and that is fine. It is not their fault (optionally we can blame their boards of directors), but we are looking at a new edge of technology and that is not always a clear stage; as such my mind was mulling a few things over and this is the initial setting my mind is looking at.
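To show the difference between one hard-wired Yelp link and a loom where the user weaves in their own sources, here is a small, purely hypothetical sketch; the source names and the weave function are mine, not anything Trulia or Yelp actually offers.

```python
# Hedged sketch of a 'loom of connections' around one listing: instead of a
# single fixed link, any number of local sources can be woven in by the user.
from typing import Callable

Source = Callable[[str], list[dict]]   # takes a location, returns snippets

def weave(location: str, sources: dict[str, Source]) -> dict[str, list[dict]]:
    """Pull whatever each registered source knows about the location."""
    return {name: fetch(location) for name, fetch in sources.items()}

# Imaginary local sources the user, not the app maker, chooses to plug in.
sources = {
    "sports_bars": lambda loc: [{"name": "The Local", "near": loc}],
    "local_paper": lambda loc: [{"headline": "Street fair on Saturday", "near": loc}],
}
print(weave("12 Example St", sources))
```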

So, as such, we need to think about what we actually need in 5 years, because if the apps we create are our future, then pondering which data we embrace determines whether we have any future at all.

Well, have a great Easter and plenty of chocolate eggs.


Filed under IT, Science

News, fake news, or else?

Yup, that is the statement I am going for today. You see, at times we cannot tell one from the other, and the news is making it happen. OK, that seems rough but it is not, and in this particular case it is not an attack on the news or the media; as I see it they are suckered into this false sense of security, mainly because the tech hype creators are part of the problem. As I personally see it, this came to light when I saw the BBC article ‘Facebook’s Instagram ‘failed self-harm responsibilities’’, the article (at https://www.bbc.com/news/technology-55004693) was released 9 hours ago and my blinkers went red when I noticed “This warning preceded distressing images that Facebook’s AI tools did not catch”. You see, there is no AI; it is a hype, a ruse, a figment of greedy industrialists, and to give you more than merely my point of view, let me introduce you to ‘AI Doesn’t Actually Exist Yet’ (at https://blogs.scientificamerican.com/observations/ai-doesnt-actually-exist-yet/). Here we see some parts written by Max Simkoff and Andy Mahdavi. Here we see “They highlight a problem facing any discussion about AI: Few people agree on what it is. Working in this space, we believe all such discussions are premature. In fact, artificial intelligence for business doesn’t really exist yet”, and they also go with a paraphrased version of Mark Twain: “reports of AI’s birth have been greatly exaggerated”. I gave my version in a few blogs before: the need for shallow circuits, the need for a powerful quantum computer; IBM have a few in development and they have come far, but they are not there yet, and that is merely the top of the cream, the icing on the cake. Yet these two give the goods in a more eloquent way than I ever did: “Organisations are using processes that have existed for decades but have been carried out by people in longhand (such as entering information into books) or in spreadsheets. Now these same processes are being translated into code for machines to do. The machines are like player pianos, mindlessly executing actions they don’t understand”, and that is the crux: understanding and comprehension are required in an AI, and that level of computing does not exist now, and will not for at least a decade. Then they give us “Some businesses today are using machine learning, though just a few. It involves a set of computational techniques that have come of age since the 2000s. With these tools, machines figure out how to improve their own results over time”, which is part of an AI, but merely a part, and it seems that the wielders of the AI term are unwilling to learn, possibly because they can charge more, a setting we have never seen before, right? And after that we get “AI determines an optimal solution to a problem by using intelligence similar to that of a human being. In addition to looking for trends in data, it also takes in and combines information from other sources to come up with a logical answer”, which as I see it is not wrong, but not entirely correct either (from my personal point of view). I see it as “an AI has the ability to correctly analyse, combine and weigh information, coming up with a logical or pragmatic solution to the question asked”, and this is important: the question asked is the larger problem, the human mind has this auto-assumption mode, a computer does not. There is the old joke that an AI cannot weigh data as he does not own a scale. You think it is funny, and it is, but it is the foundation of the issue.
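To put the ‘player piano’ line next to the weighing and combining of information the authors describe, consider this toy contrast; the rules, sources and weights are invented only to make the distinction visible, it is not an implementation of anyone’s product.

```python
# Toy contrast: fixed 'player piano' automation versus combining several
# sources with confidence weights. Everything here is a made-up example.

def player_piano(rule: dict, key: str) -> str:
    """Automation: executes a fixed mapping it does not understand."""
    return rule.get(key, "no idea")

def weighted_answer(evidence: dict[str, tuple[str, float]]) -> str:
    """Combine answers from several sources, each carrying a confidence weight."""
    tally: dict[str, float] = {}
    for answer, weight in evidence.values():
        tally[answer] = tally.get(answer, 0.0) + weight
    return max(tally, key=tally.get)

print(player_piano({"invoice": "file under accounts"}, "invoice"))
print(weighted_answer({
    "sensor": ("raining", 0.6),
    "forecast": ("dry", 0.3),
    "window": ("raining", 0.8),
}))  # 'raining' wins on combined weight, not on any single source
```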
The fun part is that we saw this applied by Stanley Kubrick in his version of Arthur C Clarke’s 2001: A Space Odyssey. It is the conflicting instruction that HAL-9000 had received; the crew was unaware of a larger stage of the process, and when HAL has to “resolve a conflict between his general mission to relay information accurately and orders specific to the mission requiring that he withhold from Bowman and Poole the true purpose of the mission”, it has the unfortunate consequence that astronaut Poole goes the way of the Dodo. It matters because there are levels of data that we have yet to categorise, and in this the AI becomes as useful as a shovel at sea. This coincides with my hero the Cheshire Cat and ‘When is a billy club like a mallet?’; the AI cannot fathom it because it does not know the Cheshire Cat or the thoughts of Lewis Carroll, and the less said to the AI about Alice Kingsleigh the better. Yet that also gives us the part we need to see: dimensionality, weighing data from different sources and knowing the multiple uses of a specific tool.

You see, a tradie knows that a monkey wrench is optionally also useful as a hammer; an AI will not comprehend this, because the data is unlikely to be there, the AI programmer is lacking the knowledge and skills, and the optional metrics and size of the monkey wrench are missing. These are all elements that a true AI can adapt to: it can weigh data, it can surmise additional data, and it can aggregate and dimensionalise data; automation cannot. And when you see this little side quest you start to consider “I don’t think the social media companies set up their platforms to be purveyors of dangerous, harmful content but we know that they are and so there’s a responsibility at that level for the tech companies to do what they can to make sure their platforms are as safe as is possible”. As I see it, this is only part of the problem; the larger issue is that there are no actions against the posters of the material, and that is where politics falls short. This is not about freedom of speech and freedom of expression. This is a stage where (optionally with intent) people are placed in danger and the law is falling short (and has been falling short for well over a decade); until that is resolved, people like Molly Russell will just have to die. If that offends you? Good! Perhaps that makes you ready to start holding the right transgressors to account. Places like Facebook might not be innocent, yet they are not the real guilty parties here, are they? Tech companies can only do so much, and that failing has been seen by plenty for a long time, so why is Molly Russell dead? Finding the posters of this material and making sure that they are publicly put to shame is the larger need; their mommy and daddy can cry ‘foul play’ all they like, but the other parents are still left with the grief of losing Molly. I think it is time we actually do something about it and stop wasting time blaming automation for something it is not. It is not an AI; automation is a useful tool, no one denies this, but it is not some life-altering reality, it really is not.


Filed under IT, Law, Media, Politics, Science