Tag Archives: AI

It was never rocket science

Yup, that is the gist of it. And it seems that people are starting to wake up. You see, the biggest issue I have had with any mention of AI is that it doesn’t (yet) exist. People can shout AI on every corner, but soon the realisation that they were wrong all along will set in, and it will hurt them, it will hurt them badly. And this is merely a sideline to the issue. The issue is Microsoft, so let’s get through some articles.

1. Microsoft says cyber-attack triggered latest outage
The first one is (at https://www.bbc.com/news/articles/c903e793w74o) where we see “It comes less than two weeks after a major global outage left around 8.5 million computers using Microsoft systems inaccessible, impacting healthcare and travel, after a flawed software update by cybersecurity firm CrowdStrike. While the initial trigger event was a Distributed Denial-of-Service (DDoS) attack… initial investigations suggest that an error in the implementation of our defences amplified the impact of the attack rather than mitigating it,” said an update on the website of the Microsoft Azure cloud computing platform.” The easiest way of explaining this is to compare Azure to a ball. A football usually has 12 regular pentagons and 20 regular hexagons, stitched together. Under normal conditions this is fine. However, software is not any single given shape, which implies that a lot more stitches are required. Now consider that Microsoft 365 is used by over a million corporations, and that a lot of them do not use the same configuration. This implies that we have thousands of differently stitched balls, and the stitches are where it can go wrong. This is where we see the proverbial “the implementation of our defences amplified the impact of the attack rather than mitigating it”. Microsoft has been so driven to push everything into use that they merely advance the risk. And it doesn’t end here. CrowdStrike is another example. We see the news and the fake claim of one person taking responsibility for it, yet the reality is that a lot more is wrong than anyone is considering. These two events pretty much prove that Microsoft has policy and procedure flaws. It is easy to blame Microsoft, but the reality is that we see spin and the trust in Microsoft is pretty much gone. People say “Microsoft’s cloud revenue was 39.3% higher”, and yes, that is the case. But consider that Amazon was originally a ‘bookshop’; they went up against the larger techies like IBM and Microsoft and took 31% of the global market share. Not bad for a bookshop. And the equation gets worse for Microsoft: these two events could cost them up to 10% market share. In which direction that 10% goes is another matter. AWS is not alone here.

I was serious about not letting Microsoft near my IP. I had hoped that Amazon would take it (they have the Amazon Luna) but it seems that Andy Jassy is not hungry for an additional 5 billion annually (in the first stage).

And as Microsoft adds more and more to their arsenal, these problems will become more frequent and inflict damage on more of their customers. Do I have evidence? No, but the reasoning wasn’t hard, and my example might give you cause to ponder where you could (or should) go next.

2. Microsoft Earnings: Stock Tanks As AI Business Growth Worse Than Expected
In the second story we see (at https://www.forbes.com/sites/dereksaul/2024/07/30/microsoft-earnings-stock-tanks-as-ai-business-growth-worse-than-expected/) that Forbes is giving us “shares of Microsoft cratered about 7% following the earnings announcement, already nursing a more than 8% decline over the last three weeks” with the added “Microsoft’s crucial AI businesses was worse than expected, as its 29% growth in its Azure cloud computing unit fell short of projections of 31%, and sales in its AI-heavy intelligent cloud division was $28.5 billion, below estimates of $28.7 billion”. As stated by me (as well as plenty of others), there is no AI. You see, AI would give the program thinking skills, and they do not have any. They kind of speculate and they have lots of scenarios to give you the conditional feeling that they are talking “in your street”, but that is not the case. For a simple illustration we get Wired (at https://www.wired.com/story/microsoft-ai-copilot-chatbot-election-conspiracy/) giving us ‘Microsoft’s AI Chatbot Replies to Election Questions With Conspiracies, Fake Scandals, and Lies’, so how does this work? You see, the program (an LLM) looks at what ‘we’ search for, yet here the setting is smudged by conspiracy theorists, troll farms and influencers. The first two push the models out of sync. Wired gives us “Research shared exclusively with WIRED shows that Copilot, Microsoft’s AI chatbot, often responds to questions about elections with lies and conspiracy theories.” Now consider that this is pushed onto all the other systems. Then we are treated to “Microsoft’s AI chatbot is responding with out-of-date or incorrect information”, so not only is the data wrong, it is out of date; what they call ‘training data’ is, as I see it, incorrect, out of date and unverified. How AI is that? An actual real AI is set on a quantum computer (IBM has that, although in its infancy), a more robust version of shallow circuits (not sure if we are there yet), and is driven not by binary systems but framed on an Ypsilon particle system, which was proven by a Dutch physicist around 2020 (I forgot the name). This particle has another option. We currently have NULL, Zero and One. The Ypsilon particle has NULL, Zero, One and BOTH. A setting that changes everything.
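To make that difference concrete, here is a minimal sketch (entirely my own illustration, not any existing hardware, library or the physics itself) of the four-valued state described above next to an ordinary binary bit:

```python
from enum import Enum

class FourState(Enum):
    """The four values described above: nothing set, a definite zero,
    a definite one, or both at once. The names are my own."""
    NULL = "null"   # no value assigned
    ZERO = "zero"   # classical 0
    ONE = "one"     # classical 1
    BOTH = "both"   # the extra state a binary bit cannot hold

def to_classical_bit(state: FourState):
    """A classical bit can only carry two of the four states;
    NULL and BOTH have no faithful binary representation."""
    if state is FourState.ZERO:
        return 0
    if state is FourState.ONE:
        return 1
    return None  # lost in translation

for s in FourState:
    print(s.name, "->", to_classical_bit(s))
```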

But the implementation into servers is to be expected around 2037 (a speculation by me); only then do we get to thinking programs and an actual AI. So when we see AI today, we need to see that it is a program that can course through data and give you the most likely outcome. I will admit that for a lot of people it will fit, but not for all, and there we get the problem. You see, Microsoft will blame all sources and all kinds of people, but in the end it will be up to the programmer to show that their algorithm is correct, and as I am telling you now, it comes down to unverified data. How does that come over to you?

When you consider that Wired also gave us “it listed numerous GOP candidates who have already pulled out of the race”, the issue of how out of date the data is becomes clear. We see all these clever options that others give us, but when some LLM (labelled AI) is un-updated and unreliable, how secure does your position remain when you base decision-making streams on the wrong data? And that is merely a sales track.

The last teaspoon is given to us by The Guardian, which (at https://www.theguardian.com/technology/2024/mar/06/microsoft-ai-explicit-image-safety) gave us on March 7th 2024:

3. Microsoft ignored safety problems with AI image generator, engineer complains
So when you consider the previous parts (especially CrowdStrike) next to “Shane Jones said he warned management about the lack of safeguards several times, but it didn’t result in any action”, Microsoft will state that this is another issue. But I spoke about wrong data, out of date data and unverified data. And now we see that the lack of safeguards and the inaction would make things worse, and a lot faster than you think. You see, as long as there is no real AI, all data needs to be verified and that does not seem to be the case in too many settings. I spoke about policy issues and procedural issues. Well, here we get the gist, “it didn’t result in any action”, and we keep on seeing issues with Microsoft. So how many times will you face this? And that is before people realise that their IP is on Azure servers. So how many procedural flaws will your research be driven into until it is all on a Russian or Chinese or North Korean enabled server (most likely Russia or China, which is a speculation by me)?

As such, it was never rocket science. Look at any corporation: in their divisions there will always be one person who thinks of number one (himself), and in that setting how safe are you?

There is a reason that I do not want Microsoft near my IP. I can only hope that someone wakes up and gives me a nice retirement present ($30M post taxation would be nice).

Enjoy the day.


Filed under IT, Science

Changing the game

There was a setting that was designed with the recently departed Google Stadia and the Amazon Luna in mind. I set the premise to 50 million systems in phase one and up to 200 million in phase two. Alas, Amazon wasn’t attracted to such a sales venue. Last night I pondered a few items and it occurred to me that the Apple Vision Pro was equally set to that premise. There is a limitation: they would have to be able to run Unreal Engine 5 environments. When that is possible the rest would auto-fill in, the other parts would not need UE5. Take that and link it to the Apple Arcade and they would make Microsoft irrelevant within a year, optionally two years. It is the setting that will show the other players (like Kingdom Holding) that they lost out. When this setting goes to Apple, they can define a new niche customer base. Apple Arcade matters because not everyone can afford the Vision Pro. Even if a cheaper version comes to market, close to 75 million people would be left in the cold. And I reckon that Apple wants the entire cluster of people. The fact that you get an arcade setting that could be upgraded to Vision Pro almost sells itself. And my predictions were conservative. 200 million is a little over 10% of the entire cluster, with Indonesia, Bangladesh and Egypt leading the way. Places where Apple has great growth potential. That, and a largely untapped advertisement potential as well. In the end it is a market that will end Microsoft, its gaming and their edge population (the little they had in the first place). I have been going over the numbers again and I can see no downfall here.

Apple’s first task is to set the Vision Pro to deal with Unreal Engine 5, it is the cornerstone of success, or at least it will be. In the end Apple will have to open (or enhance) a data cloud in Saudi Arabia with later added clusters in Indonesia and Egypt. But I reckon that when they pass 100 million added people it would be a trivial expenditure. And if they surpass the 10% group (which requires data insight that I cannot lay my fingers on) the entire setting will cost Microsoft and Facebook revenue that they currently think is ‘safe’. But they didn’t count on a wildcard and it was lost because they never looked behind them. There were billions in revenue and they were left on the floor. I wonder if Apple ever considered that. Apple has no blame, their mission statement was based on their niche market. But technology and requirements changed. With BRICS it changes even more. Now they have Tencent Technology to contend with. Tencent might not have the Vision Pro, but my system was initially designed without it. The Vision Pro has, as I see it, a larger benefit, but it is a mere ‘nice to have’. You see, sales engineering has a three-tiered awareness approach. It is set to ‘must tell everyone’, ‘nice to have’ and the rest. When you focus on the first line, most people tend to ignore the ‘nice to have’, but it is there that the people outside the designated clusters are found. So don’t aim only at the wealthy, just make sure that they see the upside, and Vision Pro would do that. It sets the premise of a solution worth 5 billion in phase one up to 18 billion in phase two, and that does not include advertisement money over a dozen countries. I reckon that this is more than I can imagine (because this has not been done before) and several parts were found by looking behind me, something the current captains of the technology industry aren’t doing. They are all looking forward, to the mystical AI (which does not exist). I decided to look at what was forgotten and tinkered it into a new mould. This implies innovation patents and all that is outside of the AR and printable displays (see other stories on this blog). All that and more are a future stage for the implementor of this solution, which was exactly why I got to Kingdom Holding. On the far end of that, there was the real estate upgrade I considered, in light of what I noticed around Dubai. A side not considered, because all these web solutions couldn’t think outside of their pond. The water is where it is, so they didn’t consider it, and it is here where I saw a side that could elevate Tencent and Huawei to a larger profit margin, not just for Dubai, but a global solution that allows real estate players in a global setting to elevate their business. Dubai makes it clear, yet it will not stop there. As the song goes, New York, London, Paris, Munich, they will all see the benefit and after that all metropolitan areas will follow suit. So do you think I was kidding when I said that Google et al fumbled the ball here? They ignored billions in revenue and they are all chasing a false AI dream. In a few years they will realise that a hype is merely a path to awareness and not towards revenue. Revenue needs to be real and achievable. For that we get “fake and deeply flawed Artificial Intelligence (AI) is rampant”, a quote by Frederike Kaltheuner based on the works of over 20 writers. You see, what people regard as AI is merely two sides of it.
LLM (Large Language Models) and DML (Deeper Machine Learning), both powerful and both opening all kinds of doors, but it is not AI, or Real AI as they now call it. Like other awareness hypes created, it isn’t real, and in the meantime I created the idea for something real that could give the right party up to 18 billion a year. So when these parts hit you, does it make sense that Google and Amazon lay off around 35,000 jobs? I will let you decide on that. In the meantime I will place more IP online so that it can only continue as freeware. The Public Domain will show the rest what they all missed out on. It might give me some cash, it might not. But I will get the last laugh. I will have kept it out of the hands of Microsoft.

Have a great Thursday.


Filed under IT, Law

Absolute Insanity

This all started a few days ago and I had to mull a few things over. You see, AI does not exist, no matter how strong the hype and the presentations are. Now we also see the term ‘spatial AI’, another joust towards hype and revenue grabbing (the easy way). There are a few issues with all this. You see, machine learning and deeper machine learning are great, they are awesome. In addition, the growth of Large Language Models (LLM) is adding to the mixture, but here is the snafu (situation normal, all fucked up). It is all still in the hands of programmers and verifiers. The issue of human error comes into play.

So when we realise this, the BBC article (at https://www.bbc.com/news/articles/c977elr6veno) called ‘Airline to ‘better manage’ flights with AI use’ should get some investors worried. The start is seen with “The use of artificial intelligence (AI) at easyJet’s new control centre has allowed its operations teams to better manage flights, the airline said.” It reminds me of an old setting in the 90’s when someone produced a program called Goldmine. Don’t get me wrong, it was a good program, but it relied on standardisation. That means that exceptions aren’t dealt with. The programmers never anticipated the exceptions they were given, so alternative fields were used, and in what they call AI the use of alternative solutions tends to be devastating to data models. So when we see “More than 250 staff work in the control centre, managing easyJet’s daily programme of about 2,000 flights.”, we might see the initial problem. Last minute changes (a pilot gets food poisoning), or perhaps the flight attendant got stuck in location X. It does not matter what the issues are, things will go pear shaped. And that is before they are confronted with the ‘oversight’ of the programmer.
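To make the ‘unanticipated exception’ point concrete, here is a toy sketch (entirely my own illustration, not easyJet’s system or anyone’s product) of a disruption logger that only knows the standard categories; everything else lands in a catch-all field, which is exactly the kind of alternative-field use that quietly poisons the data a learning model later relies on.

```python
# Toy illustration: a disruption logger that only anticipates the
# "standard" reasons a flight gets into trouble. Everything else is
# shoehorned into a generic bucket, so the data the learning model
# later trains on no longer reflects what actually happened.
KNOWN_REASONS = {"weather", "technical", "air_traffic_control"}

def log_disruption(flight: str, reason: str, log: list) -> None:
    if reason in KNOWN_REASONS:
        log.append((flight, reason))
    else:
        # the programmer never anticipated this exception,
        # so an "alternative field" gets used instead
        log.append((flight, "other"))

log = []
log_disruption("EZY1234", "weather", log)
log_disruption("EZY5678", "pilot_food_poisoning", log)   # becomes "other"
log_disruption("EZY9012", "crew_stuck_elsewhere", log)    # becomes "other"
print(log)  # the two real causes are now indistinguishable
```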

Now there is the recognition that a system like this can reduce stress on these 250 staff members, but it will need human verification, and that is not what an AI system would need (if it existed). In the end I reckon that investors will see in 6-12 months that operating costs have exploded. I reckon that Johan Lundgren talks a good talk, and there are benefits to Deeper Machine Learning and it will help any corporation, but the missing part in this is the programmers. You see, these solutions aren’t AI, they require a programmer and that programmer makes mistakes. It might be simple, it might be complex, and when it is found it tends to be in the most inconvenient way possible.

Interesting that the BBC didn’t see this part. It would have been the first step I would take. Which firm was involved in this system? How many programmers? What previous assignments did they have? I reckon that the investors might have some questions on all this and I hope for Johan Lundgren that he has answers.


Filed under IT, Science

The tables are starting to turn

This is a setting I always saw coming. It wasn’t magic or predestination, it was simple presumption. Presumption is speculation based on evidence, on facts. The BBC puts out a near perfect article (at https://www.bbc.co.uk/news/technology-67986611) where we see ‘What happens when you think AI is lying about you?’ There are several brilliant sides to it, as such it is best to read it for yourself. But I will use a few parts of it because there is a larger playing field in consideration. The first thing to realise is that AI does not exist, not yet.

As such we see: “Illegal content… means that the content must amount to a criminal offence, so it doesn’t cover civil wrongs like defamation. A person would have to follow civil procedures to take action,” it said. Essentially, the journalist would need a lawyer. There are a handful of ongoing legal cases round the world, but no precedent as yet.

This is actually a much larger setting than people realise. You see, “AI algorithms are only as objective as the data they are trained on, and if that data is biased or incomplete, the algorithm will reflect those biases”. Yet the larger truth is that AI does not exist, it is Machine Learning or better; as such it took a programmer, and a programmer implies corporate liability. That is what corporations fear, that is why everything is as muddled as possible. I reckon that Google, Microsoft and all others making AI claims are fearing exactly this. You see, consider “The second told me I was in “unchartered territory” in England and Wales. She confirmed that what had happened to me could be considered defamation, because I was identifiable and the list had been published. But she also said the onus would be on me to prove the content was harmful. I’d have to demonstrate that being a journalist accused of spreading misinformation was bad news for me.” I believe it is a little less simple than that. You see, an algorithm implies programming, and as such the victim has a right to demand that the algorithm be put out in court for scrutiny. The lines that resulted in defamation should be open to scrutiny and that is what big-tech fears at present, because AI does not exist. It is all based on collected data, that data should be verified by the legal team of the victim, and that stops everything for the revenue hungry corporations.

In addition I would like to add an article, also by the BBC (at https://www.bbc.co.uk/news/technology-68025677), called ‘DPD error caused chatbot to swear at customer’. It clearly implies that a programmer was involved. If the language skills involve swearing, who put the swear words there? When did your youngest one start to swear? They all do at some point. So what triggered this? Now consider that machine learning requires data, so where is that swear data coming from? Who decided or instituted that it be used? So when you see ““An error occurred after a system update yesterday. The AI element was immediately disabled and is currently being updated.” Before the change could be made, however, word of the mix-up spread across social media after being spotted by a customer. One particular post was viewed 800,000 times in 24 hours, as people gleefully shared the latest botched attempt by a company to incorporate AI into its business.” Consider that AI does not exist, consider that swear words are somehow part of that library, then consider that a programmer made a booboo (this is always allowed to happen) and that they are ‘updating’ this. A system is being updated to use a word library. Now consider the two separate events as one and see how much danger the revenue hungry corporations have placed themselves in. When you go by ‘trust but verify’ we can make all kinds of assumptions, but data is the centre of that core, with two circles forming a Venn diagram. One circle is data, the other is programming. Now watch how big-tech worries, because when this goes wrong, it goes wrong in a big way and they would be accountable for billions in payouts. It will not be a small amount and it will be almost everywhere. The one case of a defamed journalist is one, and in this day and age not the smallest setting. The second is that these systems will address customers. Some will take offence and some will take these companies to court. So how much money did they think they could save with these systems? All to save on a dozen employees? A setting that will decide the fate of a lot of companies, and that is what some fear. Until the media and several other dodos start realising that AI doesn’t yet exist. At that point the court cases will explode. It will be about a firm, their programmer and the wrong implementation of data. I reckon that within 2-3 years there will be an explosion of defamation cases all over the world. The places relying on Common Law will probably be getting more of them, and sooner, than Civil Law nations, but they will both face a harsh reality. It is all gravy whilst the revenue hungry sales people are involved. When the court cases come shining through, those firms will have to face harsh internal actions. That is speculation on my side, but based on the data I see at present it seems like a clear case of precise presumption, which is what the BBC in part is showing us, no matter how unready the courts are. In torts there are cases, and this is a setting staged on programmers and data, no mystery there, and that is the cost those hiding behind AI are facing. It is merely my point of view, but I feel that I am closer to the truth than many others evangelising whatever they call AI.

Enjoy the weekend.


Filed under Finance, IT, Law, Science

The other colour

That is what it ended up being. It started with the thought ‘Pink is the colour of ignorance’, a story that might still make it, but I want to add more evidence. The Guardian had a good start, but it is more than that and I need to tag it. The other colour is green, the colour of dollars. Reuters gives us some parts of it, but my mind is asking questions, questions that aren’t voiced by the media at present. As such we start with Reuters (at https://www.reuters.com/technology/google-invest-1-billion-uk-data-centre-2024-01-18/) where we are given ‘Google to invest $1 billion in UK data centre’ and this comes with the added text “It also comes weeks after Microsoft unveiled plans to pump 2.5 billion pounds ($3.2 billion) into Britain over three years, including in growing its data centre capacity, to underpin future AI services.” The math doesn’t work, especially now. You see, the UK pushed away from the EU and all this sets a weird station. I know that any data centre costs money and I have no idea how much. One argument is that a data centre of the size that Facebook or Google might use would cost from $250 million to $500 million, so why is Google spending twice that and why is Microsoft spending over six times that? Now, the twice I could get. Operational cost, rising energy costs, and when you add that up you might get to 750 million, and that is only 250 million away from the leap that Google is stating.
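As a quick back-of-the-envelope check, here are the multiples I am questioning, using only the figures quoted above (the $250-500 million build estimate, Google’s $1 billion and the $3.2 billion equivalent of Microsoft’s £2.5 billion); a rough sketch, nothing more:

```python
# Back-of-the-envelope multiples based only on the figures quoted above
# (all amounts in millions of US dollars; the Microsoft figure is the
# $3.2bn equivalent of the £2.5bn, spread over three years).
build_estimate = (250, 500)      # reported cost range for one large data centre
google_spend = 1_000
microsoft_spend = 3_200

for name, spend in (("Google", google_spend), ("Microsoft", microsoft_spend)):
    low, high = (spend / build_estimate[1], spend / build_estimate[0])
    print(f"{name}: {low:.1f}x to {high:.1f}x a single data-centre build")

# Google:    2.0x to  4.0x
# Microsoft: 6.4x to 12.8x
```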

So when you look at that setting we see two bulls fighting for the same population (Google and Microstupid), but the larger question becomes: why? Why spend that much to cater to 68 million people in the United Kingdom? It is not just services, it is data and data collection. To what degree is anyone’s guess, but wonder why Microsoft would spend £2,500,000,000 to service 68 million people. I am wondering who is buttering the sandwich of whom. I tend to distrust Microsoft, there have been too many issues and they have lost too many battles. Is this desperation?

The open field
The question in the open field is not the UK. You see, if these two are there, they are already growing in the Middle East, or they are about to. You see, these investments make sense in the UAE with 9.5 million people and Saudi Arabia with 36 million. Apart from their populations, both these players will have exploding infrastructure needs (the Kingdom of Saudi Arabia more than the UAE), but the UAE is on a steep incline of services and service needs, and I showed that in a few articles last week. The UK has none of that at present. More importantly, the EU also has needs, but not to these degrees, and the UK facilities will have projected limitations, as one might guess. So what gives? As for future AI services: AI does not yet exist and the Machine Learning solutions are all massively dependent on data, something Microsoft is still short of. As I ponder more sides to this, I see more issues, and also Huawei now has a data centre in Abu Dhabi, giving them a much larger advantage in a place where cash is still king, or better stated, cash has a more robust voice, more than the UK can muster at any given time before January 2026.

As such there are issues, and even as none of this is on Reuters (important to know), the setting is that the lack of visibility in several directions makes me wonder where these two are going. No matter how highly we think of Google (I still do), they both need data, Google to remain top dog and Microsoft to not be as irrelevant as they made themselves to be.

These are sides no one is looking at and I merely wonder why. Are they in a flim flam spin by Microsoft marketing? Do too many believe the shallowness of Microsoft presentations? Your guess is as good as mine, but when you start digging into actual sources that remain truly unbiased, the math does not add up. At least for me it does not, and I am no economist or econometrical engineer. Data is its own currency; the problem is that when it is the only currency remaining, those who have it get access to everything, the rest do not.

Just a thought, enjoy your Friday.


Filed under Finance, IT, Science

Is it more than buggy?

Very early this morning I noticed something. Apple had made a booboo; now this isn’t a massive booboo and many will hide behind the ‘glitch’ sentiment. But this happened just as I was reading some reports on AI (what they perceive to be AI) and things started to click into place. You see, AI (as I have said several times before) does not yet exist. We are short on several parts, and yes, machine learning and deeper machine learning exist and they are awesome. But there is an extremely dangerous hitch there. It is up to the programmer, and programmers are people, they will fail and with that any data model connected will fail, it always will.

So what set this off?
To see this we need to see the image below

It was 01:07 in the morning, just after one o’clock. The Apple widget gives us, on all 4 timezones, that it was today. Vancouver, minus 19 hours, making it 06:07 in the morning. Toronto, minus 16 hours, making it 09:07 in the morning. Amsterdam, minus 10 hours, making it 15:07 in the afternoon, and Riyadh, with its minus 8 hours, making it 17:07 in the afternoon. And all of them YESTERDAY. Now, we might look at this and think, no biggie, and I would agree. But the setting does not end there.
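For reference, here is that offset arithmetic as a small sketch (assuming the local clock is Australian Eastern time and using an illustrative date, both my own assumptions); the widget should effectively be doing the same sum and flagging the day change:

```python
from datetime import datetime, timedelta

# Local observation: 01:07 in the morning (assuming Australian Eastern time).
local = datetime(2024, 1, 9, 1, 7)   # the date is illustrative

# Offsets behind local time, as listed above.
offsets = {"Vancouver": 19, "Toronto": 16, "Amsterdam": 10, "Riyadh": 8}

for city, hours_behind in offsets.items():
    there = local - timedelta(hours=hours_behind)
    day = "today" if there.date() == local.date() else "yesterday"
    print(f"{city}: {there:%H:%M} ({day})")

# Vancouver: 06:07 (yesterday)
# Toronto:   09:07 (yesterday)
# Amsterdam: 15:07 (yesterday)
# Riyadh:    17:07 (yesterday)
```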

Now we get to the other part. Hungry as they are, all these firms are trying to get you into what they call ‘the AI field’ and their sales people are all pushing that stage as much as they can, because greed is never ending and most sales people live from their commission.

So now we see:

In addition there is Forbes (at https://www.forbes.com/sites/joemckendrick/2024/01/04/not-data-driven-enough-ai-may-change-that/) giving us ‘Not Data-Driven Enough? AI May Change That’, where we are given “Eighty-eight percent of executives said that investments in data and analytics are a top priority, along with 63% for investments in generative AI.” To see my issue we need to take a step back.

On May 27th 2023 the BBC reported (at https://www.bbc.com/news/world-us-canada-65735769) that Peter LoDuca, the lawyer for the plaintiff, got his material from a colleague at the same law firm. They relied on ChatGPT to get the brief ready. As such we get: ““Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” Judge Castel wrote in an order demanding the man’s legal team explain itself.” Now consider the first part. An affidavit is prepared by the current levels of machine learning and they get the date wrong (see the Apple example above). A hypothetical mass murderer now gets off on a technicality because the levels of scrutiny are lacking. The last part of the case in court gives us “After “double checking”, ChatGPT responds again that the case is real and can be found on legal reference databases such as LexisNexis and Westlaw.” A court case for naught, and why? Because the technology isn’t ready yet, it is that simple.

The problem is a little bit more complex. You see, forecasting exists and it is decently matured, but it is used in the same breath as AI, which does not yet exist. There are (as I personally see it) no checks and balances. Scrutiny of the programmer seemingly goes away when AI is mentioned, and that is perhaps the largest flaw of all.

There is a start, but we are in its infancy. IBM created a quantum computer. It is still early days, but it exists. Let’s just say that in quantum computing they created the quantum version of the IBM XT, with its version of an Intel 8088 processor. And compared to 1981 that was a huge step forward. What is currently still missing, due to that infancy, are the shallow circuits; they are nowhere near ready yet. The other part missing is the Ypsilon particle, not yet ready for IT. The concept comes from a Dutch physicist (I forgot the name, but I mentioned it in previous blogs). I wrote about it on August 8th 2022, in a story called ‘Altering Image’. You see, that will change the field and it makes AI possible. In that setting the Dutch physicist sets the start differently. The new particle will allow for No, Yes, Both and None. It is the ‘both’ setting of the particle that changes things. It will allow for gradual assumptions and gradual stage settings. Then we will have a new field, one that (together with quantum computing) allows for an AI to grow on its data, not hindered (or at least a lot less hindered) by programmers and their programming. When these elements are there and completed to their first stage, an AI becomes a possibility. Not the one that sales people say it is, but what the forefather of AI (Alan Turing) said it would be, and then we will be there. IBM has the home field advantage, but until that happens it will be anyone’s guess who gets there first.

So enjoy your day, and when you are personally hurt by an AI, don’t forget there is a programmer and their firm you could optionally sue for that part. Just a thought.

Enjoy THIS day.


Filed under IT, Law, Science

Jan Klaassen, horn blower

Yup, at times this happens. We all have a need to blow our own horn. I am no different, and as the world is not giving me any interesting news items, I decided to blow my own horn (of sorts) today. The thought got to me when I saw the article (at https://www.arabnews.com/node/2434076/saudi-arabia) called ‘Year in review: How Saudi Arabia made its mark in tech, tourism, diplomacy and entertainment in 2023’ where we see “Successful bid to host World Expo 2030 and ambitious infrastructure projects make the Kingdom a must-visit destination”, but that is not the part that set me pondering. It was “Saudi Arabia will look back on 2023 as a year of triumphs, having hosted major events in the fields of technology, culture, sport and diplomacy, while continuing on its path of impressive economic expansion and diversification.” With the added “The Era of Change: Together for a Foresighted Tomorrow,” I offered the Kingdom (via its Consul General) another option to impact millions of Muslims in a few ways, but alas I was turned down. This happens, no hard feelings. My thoughts might not be the thoughts of others, and I did try to pass this on to Kingdom Holding (apparently also unsuccessfully) and that is on me. It might be my wrongly stated view, but I feel strongly about that IP and I believe it would give the Kingdom additional options, especially in diversification. Now, I am trying to complete a ‘novel’ (my personal view) on a script that might appeal to Al Saudiya. Of course I have no high hopes that I will be successful, but I did put my foot in this and I plan to carry it through, successful or not. You see, we all tend to worship success, but I have seen innovation in failure. Innovation missed by Amazon, Apple, Google and IBM (no one cares about the other one, the company with the ‘M’ of mouse) and it matters. In this day and age where they are all presenting AI (which does not yet exist), where they all present on what comes next, I have shown them to miss all manner of innovation on several matters, and my previous articles expose some of them. So whilst I am blowing my own horn (scandalously, I admit), we must consider that some are not as hungry for revenue as they seem to be, which was why I tried to sell some of my IP to the Kingdom of Saudi Arabia. It was not that the United Arab Emirates were less of an option, but when the IP is shown in its full view, the choice of the Kingdom of Saudi Arabia would make more sense to a whole lot of people, and both could easily (very easily) afford it. That, and the fact that both want to diversify, gave me comfort in making the offer I did.

Even now I see additional options in several fields (not all directly involving the Middle East), but as timelines go, they could benefit from at least one such path (one shown yesterday at https://lawlordtobe.com/2023/12/30/almost-circular/). As such, when diversifying it pays to consider paths that are not on everyone’s mind, but which, when you consider them, make sense to many people. That is one side of innovation that we tend to forget. It is not the innovation everyone is looking at (like the not-real AI), it is looking in the opposite direction and seeing what could be done there. As no one is looking, the player doing that could be the only one for some time. And when others wake up they either follow behind, pushing you to make a better product, or set the stage for others to become serious players in that field.

All this matters as it changes the fields and it changes the interactions between players and that matters because that change could affect a whole range of other issues.

Just my $0.02 on the matter. Enjoy the day and the festivities that follow today.


Filed under Finance, IT, Media, Science, Tourism

How stupid could stupid become?

Yup, that was the question, and it all started with an article by the CBC. I had to read it twice because I could not believe my eyes. But yes, I did not read it wrong, and that is where the howling began. Let’s start at the beginning. It all started with ‘Want a job? You’ll have to convince our AI bot first’, the story (at https://www.cbc.ca/news/business/recruitment-ai-tools-risk-bias-hidden-workers-keywords-1.6718151) gives us “Ever carefully crafted a job application for a role you’re certain that you’re perfect for, only to never hear back? There’s a good chance no one ever saw your application — even if you took the internet’s advice to copy-paste all of the skills from the job description”. This gives us a problem on several fronts, but the two I am focussing on are IT and recruiters. IT is the first. AI does not exist, not yet at least. What you see are all kinds of data driven tools, primarily set to Machine Learning and Deeper Machine Learning. First off, these tools are awesome. In their proper setting they can reduce workloads and automate CERTAIN processes.

But these machines cannot build, they cannot construct and they cannot deconstruct. To see whether a resume and a position match, you need the second tier, the recruiter (or your own HR department). There are skills involved and at times this skill is more of an art. Seeing how well a person fits a position is an art. You can test via a resume whether minimum skills are available. Yes, at times it takes a certain level of Excel, it might take SQL skills or perhaps a good telephone voice. A good HR person (or recruiter) can see this. Machine Learning will not ever get it right. It might get close.

So whilst we laugh at these experts, the story is less nice, the dangers are decently severe. You see, this is some form of cost reduction, all whilst too many recruiters have no clue what they are doing, I have met a boatload of them. They will brush it off with “This is what the client wants”, but it is already too late, they were clueless from the start and it is getting worse. The article also gives us a nice handle: “They found more than 90 per cent of companies were using tools like ATS to initially filter and rank candidates. But they often weren’t using it well. Sometimes, candidates were scored against bloated job descriptions filled with unnecessary and inflexible criteria, which left some qualified candidates “hidden” below others the software deemed a more perfect fit.” It is the “they often weren’t using it well”; you see, any machine learning is based on a precise setting, and if the setting does not fit, the presented solution is close to useless. And it goes from bad to worse. You see, it is seen with “even when the AI claims to be “bias-free.”” You see, EVERY machine learning solution is biased. Bias through data conversion (the programmer), bias through miscommunication (HR, executive and programmer misalignment) and that list goes on. If the data is not presented correctly, it goes wrong and there is no turning back. As such we could speculate that well over 50% of firms using ATS are not getting the best applicant; they are optionally leaving them to real recruiters, and as such handing them to their competitors. Wouldn’t that be fun?
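To illustrate the ‘bloated job description’ issue the researchers describe, here is a toy keyword-counting filter (my own construction, not any real ATS product, with made-up keywords and candidates); a candidate who covers the core of the role but ignores the inflexible extras ends up ‘hidden’ below a weaker keyword match:

```python
# Toy ATS-style keyword ranking (my own illustration, not a real product).
# The "bloated" description lists inflexible criteria; candidates are ranked
# purely on keyword overlap, so a strong candidate can end up hidden.
job_keywords = {"sql", "excel", "python", "sap", "tableau", "six sigma", "agile"}

candidates = {
    "strong_on_core":  {"sql", "excel", "python"},                 # can do the job
    "keyword_stuffer": {"sap", "tableau", "six sigma", "agile"},   # copied the ad
}

def ats_score(skills: set) -> int:
    return len(skills & job_keywords)   # naive overlap count

ranked = sorted(candidates, key=lambda c: ats_score(candidates[c]), reverse=True)
print(ranked)   # ['keyword_stuffer', 'strong_on_core']  (the qualified one ranks below)
```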

So when we get to “So for now, it’s up to employers and their hiring teams to understand how their AI software works — and any potential downsides”, which is a certain way to piss your pants laughing. It is a more personal view, but hiring teams tend to be decently clueless on Machine Learning (what they call AI). That is not their fault. They were never trained for this, yet consider what they are losing out on. Consider a person who never had military training; you now push them into a war zone with a rifle. So how long will this person stay alive? And when this person was a scribe, how will he wield his weapon? Consider that the man was a trumpeter and the fun starts.

The data mismatches and keeps this person alive by stating he is not a good soldier, lucky bastard. 

The foundation is data and filling jobs is the need of an HR department. Yes, machine learning could optionally reduce the time spent going through the resumes. Yet bias sets in at age; ageism is real in Australia and they cannot find people? How quaint, especially in an ageing population. Now consider what an executive knows about a job (mostly any job) and what HR knows, and consider how most jobs are lost in translation in any machine learning environment.

Oh, and I haven’t even considered some of these ‘tests’ that recruiters have. Utterly hilarious, and we are told that this is up to what they call AI? Oh, the tears are rolling down my cheeks, what fun today is, Christmas day no less. I haven’t had this much fun since my father’s funeral.

So if you wonder how stupid stupid can get, see how recruiters are destroying a market all by themselves. They had to change gears and approach at least 3 years ago. The only thing I see is more and more clueless recruiters, and they are ALL trying to fill the same position. And the CBC article also gives us this gem: “it’s also important to question who built the AI and whose data it was trained on, pointing to the example of Amazon, which in 2018 scrapped its internal recruiting AI tool after discovering it was biased against female job applicants.” So this is a flaw of the lowest level, merely gender. Now consider that recruiters are telling people to copy LinkedIn texts for their resume. How much more bias and how many wrong filters will pop up? Because that is the result of a recruiter too; they want their bonus and will get it any way they can. So how many wrong hires have firms made in the last year alone? Amazon might be the visible one, but that list is a lot larger than you think and it goes to the global corporate top.

So consider what you are facing, consider what these people face, and laugh, it’s Christmas.

Enjoy today.


Filed under Finance, IT, Science

The alignment of views

That is what I am setting this conversation up for. Well, conversation? As the blogger this is my monologue, a monologue plain and simple. I had another idea regarding the approach to gaming IP, but that will be for another day.

Today I am talking about the ABC article (at https://www.abc.net.au/news/2023-10-25/iran-saudi-china-middle-east-war-actress-nazanin-boniadi-profile/102996008). I am using this example for the simple reason that ABC is a good media outlet, they try to give us the real settings. As such, taking the article apart in a different way might bring the points across better to the readers.

You see, the media has squandered respectability, they squandered credibility and they squandered reliability. Not all media mind you, but a lot of them decided to courtesan the digital dollar (whoring seems so harsh). In that setting we have a much larger station, but let’s look at the article.

‘Actress Nazanin Boniadi on why China shouldn’t be mediator in the Middle East’ is the title.

Point 1
“Boniadi, who has dedicated much of her working life to advocating for human rights, including in Iran.” So who is Nazanin Boniadi? Is she an influencer? I had never heard of her. Perhaps she is for real, but I cannot tell.

This is a setting that is partially on me. I had never heard of her, but the larger media is using ‘influencers’ to taint the stories we see. It is a populist agenda that we are too often given (not accusing ABC of this) and as such we can no longer tell the difference between real, fake and deep fake. Populist sources are all about the flames, all about emotions, and the larger corporations (as well as some governments) will give added ‘benefits’ to any anti-China story, that much is a given. That does not mean that there isn’t any valid anti-China material out there. But the waves of deception have grown to a degree where we can no longer tell the difference.

Point 2
“I think we will have to worry about autocracies taking that top spot in the world, and what that would look like for the rest of us,” Boniadi says.

This could be seen as a valid question. Yet the sentiment is on ‘autocracies’ and the issue is that America and the EU have become such a mess that they cannot even stop in-fighting. They cannot decide on whether to counter Russia or hand over their governments to Putin, a sore setting indeed, and the media is always there to push any flame that they can. You see, China is regarded (by many) as a system of people’s congress with a unified state power. A communist nation. We can think what we want, but the setting of “a system of government by one person with absolute power” remains a debatable one. You see, that is OUR point of view, but others (especially in China) seem to believe that the country’s recent economic achievements have actually come about because of, not despite, China’s authoritarian form of government. It is up in the air, but as we see the EU and America collapsing under their own weight of indecision, they might not be in such a strong position. In addition, the Dutch political party New Social Contract with its leader Pieter Omtzigt gave the press 7 minutes of time to prepare for the election papers. 7 minutes, that is a populist approach to getting votes and responses. How is that any way to treat voters? That is the setting we see and that is what we are given.

The media has been shirking their responsibilities for close to a decade and it is getting worse. So whilst I would be willing to accept the story by the ABC, the larger setting is that the media has been flawed for some time and newspapers aren’t what they used to be. 

Point 3
The third point is a good one: “We, the democratic countries, really have to unite in the same way that these autocracies are uniting to prevent that from happening.” I do have an issue with “in the same way”; you see, getting them to ACTUALLY unite is one thing. America is in shambles and they are all there to address their own needs, then the needs of their ‘benefactors’, and then the rest is in play. The EU is no different, but with 27 nations all up in arms with each other, the larger station is lost to most of them. An example was seen last week when we were given “Boehringer Ingelheim and five other drugmakers have agreed to pay the European Commission €13.4 million in a hybrid settlement decision after admitting to participating in a global cartel to fix the price of an essential stomach medicine.” So, they make billions and they get a slap of €13.4 million? Things are getting worse and worse in the EU and I wonder if they even have an option to get back on track. Another example is seen with “U.S. measures to limit the export of advanced artificial intelligence (AI) chips to China may create an opening for Huawei to expand in its $7 billion home market as the curbs force Nvidia to retreat, analysts say”; it is funny, as I gave the readers in ‘The definition of insanity’ (at https://lawlordtobe.com/2023/10/19/the-definition-of-insanity/), a day before that paper was published, that very same setting. I did not give any numbers as I didn’t have any, but the larger station is now clear. The EU and USA broke their own systems a few times over and this isn’t helping any. This setting is important in light of the way that I am monologuing ‘unite’, but the lack of unity all over the western world is a clear sign that BRICS might end up being the next real power, and we are all up in arms about what is going on between China, Saudi Arabia and Iran. Yet Faisal bin Farhan Al-Saud, Saudi’s foreign minister, is correct: something needs to happen and the western nations are missing or fumbling the ball again and again. We get too much ego, too many presentations and no results, and the media isn’t helping any.

So even if the article that is staging what we see now was all on the up and up, the questions are real. They are real because of all the Murdoch wannabes, glossy flames and influencer enablers; we forgot what ACTUAL news is. A lot of people can no longer tell the difference and the press isn’t policing itself, so the people are on a short pier with nowhere to go.

That is my point of view, and in all this ABC is one of the more respectable sources. Too many are a lot less, and the enabling of terrorist agendas by the media to get clicks is starting to be noticed by a lot of people. The populist agenda has never been a democratic view or a realistic democratic approach. Consider that the autocracy they will deliver when they are elected will cause a rapid decline in many nations, and I might just live long enough to see that impact on a global scale.

Enjoy the day as we move towards the middle of the week.


Filed under Media, Politics

Folly and opportunity

Yup, a setting that has both. You see, yesterday I offered the quote “I made mention of Deeper Machine Learning. This is awesome, it is not AI (AI does not yet exist) but it got me thinking. You see, we now see mention of AI in construction. This is about to go bad, really bad and Trusting these buildings will become folly soon enough. I will try to explain that soon enough”, and that soon is now. To see this we need to make a few sidesteps, but it will be clear soon enough. For this I selected ‘Building a smarter future: The impact of big data and AI in construction’ (at https://www.pbctoday.co.uk/news/digital-construction-news/big-data-and-ai-in-construction-trimble/132005/). There are several sources, but this one got a few things really right and that matters to me. They give you “Because computers can be programmed to analyse questions and situations using thousands of parameters in the time it takes most of us to type them in, they’re an incredible tool that we can use to do complex calculations in a fraction of the time it takes any human, and because they approach every situation with logic, they can make the most rational decisions even when we can’t. Artificial intelligence in construction simply takes that to the next level, applying machine learning, which allows those same computers to learn from situations they’ve encountered before and to adjust their results accordingly.” I do not fully agree, but they give a better explanation than most others, and they made the big, good call by giving us ‘applying machine learning’; this is correct.

Why is this what?
That is the setting. You see, to see this I will need to take you on a little time travel. That is, after you realise that machine learning depends on data, loads of it. But in all this the right category is also important. We are about to overlap best practice and best results onto the cheaper way, the cutting-corners way. We might rely on movies like The Towering Inferno (1974), a movie based on two books, namely The Glass Inferno and The Tower. In the movie we see the dastardly electrical engineer who cut corners (played by Richard Chamberlain) and the architect played by Paul Newman. There we see the little conversation where the electrical engineer Roger Simmons claims he kept to building codes and that the demands by the architect Doug Roberts were outlandish and too cost-driving, and fair enough, the building burns down on opening night.

Children of Mediocrates
The previous one was a story, fiction. But reality is not. In the 90’s, captains of industry shook hands with politicians and a lacking drive was introduced. Almost like the philosopher Mediocrates, who introduced a new life lesson: ‘Meh, good enough’. I was actually in some of those meetings where we were told: “What if the strive for excellence is not 100%, but 80%? How bad is it to still be really good? How much easier is it to build your bonus when we expect an 80% line?” I was there, I heard it all and I was told to adhere to it all. And yes, the bonus for me was easier and I was merely in customer service, but it felt wrong.

Nowadays
So back to today, when we look at the application of what some call AI (a wrong term). The data it relies on cannot tell the difference, because in that data best practice and cutting corners look like the same thing, so it will set a flawed recommendation, and the larger folly is that the people in control of that data will not distinguish between the two fronts either. They are too young to tell, or they cannot tell the difference, because those who filled their pockets are no longer around. It is a recipe for disaster, and when was the last time a construction disaster went without casualties?
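As a toy illustration (entirely my own, with made-up numbers, under the assumption that the historical record carries no label separating best practice from corner cutting), a recommender that simply favours the most common past choice will keep recommending whatever dominates the record, which may well be the corner-cut option:

```python
from collections import Counter

# Toy illustration: historic design choices with no label saying which ones
# were best practice and which ones merely cut corners to save cost.
historic_choices = (
    ["raac_roof_panel"] * 140            # cheap, popular, later found problematic
    + ["reinforced_concrete_slab"] * 60  # best practice, but less common in the data
)

def recommend(history: list) -> str:
    """Pick whatever was chosen most often before; the data cannot tell
    the difference between 'common because good' and 'common because cheap'."""
    return Counter(history).most_common(1)[0][0]

print(recommend(historic_choices))   # raac_roof_panel
```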

This is the setting I see coming, and there is also an opportunity. You see, those cutting corners did not protect the original path. As such, these patents and IP points are now open and unprotected, and these options are there for clever people to create new innovation patents based on the open original patents, the ones the corner-cutting people let be, and there should be a fair amount of them all over the field. This is merely because best practice was too expensive for them, and now those options are open. An example here might be reinforced autoclaved aerated concrete (RAAC). We are now seeing all the issues and the hundreds of buildings that have it. It was still widely used into the 1990’s, making the timeline fit. And now we see “Concerns were amplified in 2023 following reports of an earlier roofing collapse at a British primary school, which fell without warning in 2018”. Now, one does not mean the other, but there is a premise that fits, and as such we see the larger danger. Consider that this all gained popularity in the 50’s. So how many new patents were created based on this idea, and what was left behind and unprotected? I will let you do the math, but whoever has those innovation patents will have the option to fill their pockets with the best practice approach whilst too many are merely in it to make a buck. As such, the folly of hiding behind AI is about to hit a lot of people squarely in the face, all whilst the clever people will be able to turn a coin as they have the patents, and they will be the only player to be considered soon enough.

Hiding behind hype words suddenly gives others a chance to become serious players where the big boys never wanted them. How is that for poetic justice?

Enjoy the day, most of the week is still in front of you.

 


Filed under Finance, IT, Politics, Science