Tag Archives: NIP

The setting of the sun

That is what I saw, the setting of the sun. A simplistic setting that was about to happen since the sun came up. We got the news from the BBC. And we are given ‘I hacked ChatGPT and Google’s AI – and it only took 20 minutes’. I can see how this happens. It doesn’t surprise me, and the story (at https://www.bbc.com/future/article/20260218-i-hacked-chatgpt-and-googles-ai-and-it-only-took-20-minutes) gives us the niceties with “Perhaps you’ve heard that AI chatbots make things up sometimes. That’s a problem. But there’s a new issue few people know about, one that could have serious consequences for your ability to find accurate information and even your safety. A growing number of people have figured out a trick to make AI tools tell you almost whatever they want. It’s so easy a child could do it.” I think it is not quite that simple. But any ‘sort of intelligent setting’ can be fooled if it is not countered by validation and verification. It can give way to way too much ‘leniency’ and that is merely the start. Get 10,000 pages to say that ‘President Trump was successfully assassinated at T-15 minutes’ and the media will go into a frenzy in mere minutes, and everyone uses that live feed in a matter of moments. So when a sizable trolling server farm connects the rather large setting of consumers to that equation, the story is brought to life and that AI centre will be seeking all kinds of news to validate this, well not validate, the current systems merely corroborate. Now, let’s face it, no non-American cares about President Trump, but what happens when someone takes that approach with, for example, Lisa Su (CEO of AMD) and stops her accounts whilst seeding this setting? You get a lot of desperate investors trying to place their money somewhere else. Whilst the trolls take their money, make it legal tender and buy all the stock in space, and when the accusations are rejected they sell their shares with a nice bonus. Think I’m kidding?
This is the result of Near Intelligent Parsing (NIP), which cannot work without clear settings of validation or verification. So whilst we get “It turns out changing the answers AI tools give other people can be as easy as writing a single, well-crafted blog post almost anywhere online. The trick exploits weaknesses in the systems built into chatbots, and it’s harder to pull off in some cases, depending on the subject matter. But with a little effort, you can make the hack even more effective. I reviewed dozens of examples where AI tools are being coerced into promoting businesses and spreading misinformation. Data suggests it’s happening on a massive scale.” So what happens when an economic setting lacks certain verification and is also cutting corners on validation? Do you think my settings are far-fetched?
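The difference between corroboration and validation can be shown in a few lines. This is a minimal sketch, not any real chatbot pipeline; the function names, the toy corpus and the trust weights are all invented for illustration. It shows how a naive "count the sources" approach is swamped by a seeded troll farm, whilst even a crude trust-weighted check is not.

```python
# Toy illustration: counting repetitions is corroboration, not verification.
# All source names and trust weights below are invented for this sketch.

def corroborate(claim, corpus):
    """Naive approach: a claim is 'true' if enough pages repeat it."""
    return sum(1 for page in corpus if claim in page["text"]) >= 3

def verify(claim, corpus, trust):
    """Crude verification: weight each page by source trust, not raw count."""
    score = sum(trust.get(page["source"], 0.0)
                for page in corpus if claim in page["text"])
    return score >= 1.0

# A troll farm seeds the same claim across many low-trust pages.
claim = "X happened"
corpus = [{"source": f"blogfarm{i}", "text": "X happened"} for i in range(10)]
corpus.append({"source": "wire-service", "text": "no such event occurred"})

trust = {"wire-service": 1.0}   # unknown sources default to zero trust

print(corroborate(claim, corpus))   # True  - seeded repetition wins
print(verify(claim, corpus, trust)) # False - trust weighting resists it
```

Ten worthless pages beat the counter but not the trust score; that, in miniature, is the missing validation step.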

This was always going to happen, and whilst economic channels are raving about the error of mankind, consider that “AI hallucinations are confident but false or misleading responses generated by artificial intelligence, particularly large language models (LLMs). These errors occur when AI fills in data gaps with inaccurate information, often due to faulty, biased, or incomplete training data”. Now think of what someone can achieve with doctored training data when that gets added to the operational data of any fake AI (NIP is a better term). This is the setting that has been out there for months, and whilst organisations are playing fast and loose with the settings of credibility (like: that doesn’t happen now, there is too much time involved), someone did this in 20 minutes (according to the BBC). So if you think that Thyme is money, you better spice up, because it is about to become a peppered invoice (saw one cooking show too many last night).

What we are about to face is serious and I personally think that it is coming for all of us. 

So have a great day. And by the way? I just thought of a first verification setting (for other reasons; as such I keep on being creative). So, how is Lisa Su? #JustAsking


Filed under Finance, IT, Media, Politics, Science

When Grok gets it wrong

This is a real setting because the people out there are already screaming ‘failed’ AI, but AI doesn’t exist yet; it will take at least 15 years before we get to that setting, and at present NIP (Near Intelligent Parsing) is all there is. The setting of DML/LLM is powerful and a lot can be done, but it is not AI, it is what the programmer trains it for, and that is a static setting. So, whilst everyone is looking at the deepfakes of (for example) Emma Watson and is judging an algorithm, they neglect to interrogate the programmer who created this, and none of them want that to happen, because OpenAI, Google, AWS and xAI are all dependent on these rodeo cowboys (my WWW reference to the situation). So where does it end? Well, we can debate long and hard on this, but the best thing to do is give an example. Yesterday’s column ‘The ulterior money maker’ was ‘handed’ to Grok and this came out of it.

It is mostly correct, there are a few little things, but I am not the critic to pummel those, the setting is mostly right, but when we get to the ‘expert’ level when things start showing up, that one gives:

Grok just joined two separate stories into one mesh. In addition, consider “However, the post itself appears to be a placeholder or draft at this stage — dated February 14, 2026, with the title “The ulterior money maker”, but it has no substantial body content” and this ‘expert mode’, which happened after Fast mode (the purple section). So as I see it, there is plenty wrong with that so-called ‘expert’ mode, the place where Grok thinks harder. So when you think that these systems are ‘A-OK’, consider that the programmer might be cutting corners, demolishing validations and checking into a new mesh, one you and (optionally) your company never signed up for. Especially as these two articles are founded on very different sources: ‘The ulterior money maker’ has links to SBS and Forbes, and ‘As the world grows smaller’ (written the day before) has merely one internal link to another article on the subject. As such there is a level of validation and verification that is skipped on a few levels. And that is your upcoming handle on data integrity?

When I see these posing wannabes on LinkedIn, I have to laugh at their setting of being fully dependent on AI (it’s fun, as AI does not exist at present).

So when you consider the setting, there is another setting that is given by Google Gemini (also failing to some degree). They give us a mere sliver of what was given, as such not much to go on and failing to a certain degree, also slightly inferior to Grok Fast (as I personally see it).

As such there is plenty wrong with the current settings of Deeper Machine Learning in combination with LLM. I hope that this shows you what you are in for, and whilst we saw only 9 hours ago ‘Microsoft breaks with OpenAI — and the AI war just escalated’, I gather there is plenty more fun to be had, because Microsoft has a massive investment in OpenAI, and that might be the write-off that Sam Altman needs to give rise to more ‘investors’. And in all this, what will happen to the investments Oracle has put up? All interesting questions and I reckon not too many forthcoming answers, because too many people have capital on ‘FakeAI’ and they don’t wanna be the last dodo out of the pool.

Have a great day.


Filed under IT, Media, Science

The ulterior money maker

That is the setting, but what is true and what is ‘planned’ is another matter. We have several settings, but let me start by giving you two parts before I start ‘presuming’ stuff, so you will be able to keep up. The first one was the one I got last, but it matters. SBS (at https://www.sbs.com.au/news/article/trumps-america-wants-access-to-australian-biometric-data/ftomgcy5j) gives us ‘Australians’ personal data could soon be accessible by US agencies. Here’s why’ and we are given “Now, reports are emerging that the Australian government may be compelled to share Australians’ biometric data and other information with the US and its agencies, including ICE, as part of a compliance measure to vet travelers entering the country under its Visa Waiver Program (VWP). The Australian government, via the Department of Home Affairs, has so far declined to confirm whether it is currently complying with the demands or has plans to negotiate a data-sharing agreement. That’s despite the US setting a deadline of 31 December for finalising agreements with countries participating in its visa-free travel arrangement, including Australia.” This was nothing new to me, but as it is ‘now’ officially recognised, it adheres to a different field as well. We are further given “The proposed changes to the US’ vetting processes would primarily affect Australians eligible for the ESTA visa waiver program, which allows travelers from 42 countries to visit the US for up to 90 days visa-free, provided they first obtain an electronic travel authorisation.” I personally do not think it will end there, but it is the start that the United States desires, because if the first hurdle is passed, the rest becomes easy, and it connects to the second article, even though you might not think that it does.
The second article comes from Forbes (at https://www.forbes.com/sites/kateoflahertyuk/2026/02/09/the-new-chatgpt-caricature-trend-comes-with-a-privacy-warning/) with the setting of ‘The New ChatGPT Caricature Trend Comes With A Privacy Warning’ where we see “The ChatGPT caricatures are created by entering a seemingly benign prompt into the AI tool: “Create a caricature of me and my job based on everything you know about me.” The AI caricatures are pretty cool, so it’s easy to see why people are jumping on this viral trend. But to create the caricatures, ChatGPT needs a lot of data about you.” With the added “It means you are handing over a bunch of potentially sensitive data to ChatGPT — all to jump on a viral trend that will soon be forgotten. But that data could potentially be out there forever, at least on the social media platforms you post it on.” 

Source: Forbes

Now consider the new setting, and this becomes laughably easy with the 700 platforms being added this year (source: Cleanview). They told us “the United States leads global data center growth with 577+ operating data centers and over 660+ planned or under-construction projects”. That is the setting, and I have warned people about this setting for over 30 years. Matching and adding data has been possible since the 80’s, but for the longest time we just never had the data technology (like massive hard drives). Now we get suppliers like Kioxia with 245TB drives, with 1 petabyte in a few years. But for now you could use 4 of those bad boys and you are already there. Now to the larger setting. Do you think that the USA needs that much data in data centres to regulate the weather?
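The drive claim is simple arithmetic, and it is worth seeing spelled out. A back-of-envelope sketch (decimal terabytes assumed, as drive vendors use): four 245TB drives land just shy of a petabyte, and that single petabyte already buys a non-trivial profile for every person on Earth.

```python
# Back-of-envelope arithmetic for the "4 drives = 1 petabyte" claim above.
drive_tb = 245                       # one Kioxia-class drive, in terabytes
drives = 4
total_tb = drive_tb * drives         # 980 TB, just shy of 1 petabyte
print(total_tb)                      # 980

# And the scale of capturing "the essence" of 8,000,000,000 people:
people = 8_000_000_000
per_person_kb = (total_tb * 10**12) / people / 1000
print(per_person_kb)                 # 122.5 KB of profile per person
```

So one four-drive box already holds roughly 120 KB per human being; hundreds of data centres hold vastly more.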

It comes to the stage where the Dutch journalist Luc Sala is proven correct. We are headed towards a setting of the “haves” and the “have-nots” (1988/1989); the market is already there now, the rest is trying to catch up. So we get a world that separates the enablers from the consumers, and when we get that, we merely need to define the cut-off point of the consumers. This is the world where those who do not consume enough become a liability to that system. He predicted it and now we see the execution towards that point, and weirdly enough you are all helping the United States complete that setting: on one hand the government enabling the biometrics collection, and on the other hand the people trying to appease their ‘fanbase’ by handing over whatever they need to ChatGPT to look cool. And no-one considered that these two parts could be combined? This was relatively simple in 1992; now, with an evolved Oracle and Snowflake, it becomes mere child’s play, and the data centres to capture the essence of 8,000,000,000 people are already out there. So where will you end up getting selected under? Because in this setting you do not get to have a choice. It is what governments and their spreadsheets and revenue-driving numbers say you are to be. It is basically that simple.

So whilst you think you are doing the cool thing, others can salvage a lot more data out of that setting than places like ChatGPT can vouch for. And remember the Cloud Act 2018, of which we are told it exists “to improve procedures for both foreign and US investigators to obtain access to vital electronic information held by service providers.” In this case, anything that helps the US investigators is valid for capture, and whatever that is is not precisely defined. Whilst we think we are safe, we really are not, and every ‘cool’ AI (merely NIP) is based on getting as much data as they can whilst giving you the option to look cool, and there is nothing uncool about a caricature of yourself. The fact that hundreds of these are floating around LinkedIn is reason enough to see that, and when the second stage starts (basically American companies selectively poaching), that is when governments finally realise that they all fell for the trap that was there next to phishing and data transfers, and they let it all happen.

So when you see the SBS article, fear the setting that they give: “As well as extensive biometric data, including DNA, the proposal requests that inbound travelers to the US provide five years of social media history, five years of personal and work contact details, extensive personal information on family members, and even the IP address and metadata of any photos uploaded as part of their application. So far, the United Kingdom has signed onto the agreement, and the European Union is in negotiations.” Do you really think that this is needed to keep the United States safe, or is there more in play? The fact that the UK signed it is, as I see it, stupid beyond belief, and this comes from the nation that seemingly holds ‘freedom of speech’ in such high regard.

Have a great day today, because as I see it, some governments are selling you out as you speak.


Filed under Finance, IT, Law, Media, Politics, Science

The wannabe influencer?

That is my question at present. In comes a person with the ludicrous title of “Al & loT Expert”. You see, what makes it hilarious was the post I saw ‘fly’ by. He starts off with “OpenAl’s first hardware is… a pen?? (If they don’t call it O-Pen Al they have officially lost the Al race).” So that is what makes him an expert? I am no expert on any of that, but I am highly knowledgeable on matters including IoT. In some cases and in some places I am known as a guru. I have my niche settings. But what gets to me is that (although I am no OpenAI fan) OpenAI has, yes, lost the current battle against Google and its Gemini 3, which the media kept from you for weeks. I personally never used it, but people who did and are ‘regarded’ as captains of industry think so. So, as I see it, OpenAI lost a battle, but that doesn’t mean the war is over. You see, the war on AI (when it finally comes here) is by no means settled at present. And those who understand that battle know this, and mostly unmentioned is the play that is left with IBM, because they currently have the inside track, not Oracle, not Snowflake and definitely not Google, Microsoft or Amazon. You see, AI is more than what is out there today. It will rely on larger technological settings. They all have quantum systems, but who is the most advanced in Shallow Circuits? IBM was setting that stage in advanced settings in 2017, all whilst OpenAI had barely started at that point. IBM was on the ball and is the actual winner of what is now referred to as True AI. ACTUAL AI will need two additional settings: the first is Shallow Circuits, a setting where only IBM is a straightforward contender. With that I say I have no idea where Google stands. And in that, the next thing is that a trinary operating system will be required, and as far as I know there is no current winner at present.
I reckon that both Google and IBM have dabbled in this, but I do not know where they stand, and when this comes to pass the winner will work with Oracle to make the connections in a much needed combined effort, because they all agree that Oracle is the one player that can make it work. Snowflake as well, but I have no idea where they stand in all this. What we currently have are DML/LLM solutions that are at times clever and functioning, but in too limited a setting. I call this Near Intelligent Parsing (or NIP), but it is not AI, even though they all have the marketing calling it so.

What we have now is a mere shadow of what Alan Turing envisioned three-quarters of a century ago, and leave it to sales teams to wriggle the straw until it bleeds revenue. But as the class-action cases explode this year, they are left to ‘apologetically assume the position of miscommunication’, at least that is how I see it. So was this person a wannabe influencer taking the LinkedIn crowd by humour?

So this might optionally have been the pen that OpenAI is flaunting, but as I see it, this is their step into audio, which they advertised, and having a pen recorder is a pretty contraption (aka gizmo, doohickey, or thingamajig) that propels the setting of OpenAI forward. And I reckon that within a month all wannabe AI experts will want one. Audio is the next stage that requires harnessing, so OpenAI is not out of the race; they merely got bruised in a race where they had the upper hand for three years.

Perhaps they get the upper hand in another direction, making them the overall winner, but that is a mere consideration of options, especially when we realise the inside track that IBM has. And where is that in his assessment? So I am not proclaiming the identity of that person; it lacks class and makes him a target. He made himself a target and I do not need to add to his current confusion.

What is at stake is that there is a chance that OpenAI is moving to capture the stage of audio-enhanced NIP (Near Intelligent Parsing), making them first again, and Google will need to play catch-up. Optionally Oracle (Snowflake too) will now have to adjust their tracks to get audio embedded in their database settings, and whilst we do not know where IBM goes, we do know they have the inside track; they might rely on Oracle/Snowflake solving that problem for them. And as I am a Snowflake person, I still believe that Oracle is likely to win this war, for the mere knowledge that they have been on these tracks long before Snowflake got involved, so they have years and traction in their stride. This is not a certainty, but a presumed advantage.

That is as good as I can give it to you, and I have written other stories on the need for a trinary operating system. I last did that in ‘Is it a public service’, which I wrote last November (at https://lawlordtobe.com/2024/11/16/is-it-a-public-service/), so this isn’t coming out of left field, it was there for almost two months. Oh, and to be certain that you do not mistake me for that wannabe influencer: I am in no way an ‘expert’ on AI, I have merely been dabbling in IT and data since 1981. So I have the mileage here. Have a great day today.


Filed under IT, Media, Science

Dropped balls

There are several balls that have been dropped by a whole range of entities (cannot call them people) and there is a larger setting. 

First there is BioWare with a Mass Effect that will appear at some point, and I wrote about the options of a remade and remastered Mass Effect 45, where you get Mass Effect 5 as well as an upgraded and ‘corrected’ Mass Effect 4. I did this in 2018 (might have been 2019), but it was over 6 years ago and I get that AAA games take time, especially if they are done in Unreal Engine 5; that sucker takes heaps of precision, especially in the setting that Mass Effect has (and there is need for precision here), and that is merely the first. Then there is need for pointing out several matters. You see, Google, with whatever version they are working on, when we ask for “Intelligence UAE” (I was apparently looking for SIA), gave me:

Now consider that the UAE is one of the safest countries in the world, as such, we have an issue. Yet when I ask for “UAE safety 2025” I get: 

Now consider that I ask this in 2025, and then try to question the first setting. As I have always said, AI does not exist; the current Near Intelligent Parsing (NIP) is managed by software engineers (programmers), and the setting we see here in Google is equally questionable for all who cater in the AI field. I also made mention of this in ‘And Grok ploughed on’ on November 27th (at https://lawlordtobe.com/2025/11/27/and-grok-ploughed-on/), a setting that many aren’t looking at, all whilst the people at large need to recheck everything some NIP solution is and gives, whilst most of these are quite literally riddled with bugs (also called programmer features).

It started as I was curious about Project Raven (I knew nothing of this until about 24 hours ago). I am not completely dim to that setting, as Wiki gives us “Project Raven was a confidential initiative to help the UAE surveil other governments, militants, and human rights activists. Its team included former U.S. intelligence agents, who applied their training to hack phones and computers belonging to Project Raven’s victims. The operation was based in a converted mansion in a suburb of Abu Dhabi in Khalifa City nicknamed “the Villa.”” I know that Wiki isn’t the most reliable ever, but at present it is more reliable than the press and the media. What I needed to learn were names, namely Karl Gumtow and CyberPoint. Basically, I am also looking for a job, and there was word that they were operating in Australia as well (which was proven to be incorrect).

But there was a setting that places like LinkedIn never considered, the NIP setting of connected business, and whilst we can call this a dropped ball, the setting is clear. These companies can never be found by some, as the short-sighted LinkedIn people are still on the page of “Are you hiring at present?” And they ask it of people who never hired in the first place, as well as flooding the mail system because that is a metric that they can measure (and it is utterly useless).

But that setting is out there, so perhaps a competitor of LinkedIn could step in? Considering that Saudi Arabia is advertising that they have over 3,000 available positions (source: Arab News), and not just them, ADNOC is also hiring, but people need to know this and that is a filtered setting. There might be a reason that these two firms are merely looking for local staff, but as I see it, companies in the Netherlands, Germany, Belgium and perhaps France are looking for people they cannot find. As such, as I personally see it, LinkedIn dropped the ball there as well.

Then we get numerous places outside of the gaming industry and the tech industry. Some give us jobs like Healthcare (Nurses, Aged Care, Support Workers), Technology (Data Scientists, IT, AI/ML Specialists, Cybersecurity), and Trades/Construction (Electricians, Plumbers, Managers), so where is that knowledge going? Let’s confront places like Canada, which is short on a lot of them, and where is the offer for UK people? Apparently they have an unemployment rate that recently rose to 5.0% (as of September 2025), its highest level in years, with 1.79 million people jobless. As such, where will they go? If they do not want to go anywhere, that is fine, but in this stage, where people either accept jobs in other places or drown in rising costs, there is a new setting, one that approaches the Great Depression (1929 to 1939). In that stage people would travel for days. By 1933, the U.S. unemployment rate had risen to 25%, about one-third of farmers had lost their land, and 9,000 of its 25,000 banks had gone out of business. People would travel to other states to get a job and support their families. It was not uncommon for people to hobo to California or Texas to find a job and send dollars home to keep their families safe. As I see it, these days are returning and people will travel all over the EU to get the same, or even go to lands of opportunity like the UAE and see what can be gotten there.
We aren’t in that stage yet, but that stage is just around the corner, especially for America, as apparently “The US is experiencing significant job losses in late 2025, with layoffs reaching a five-year high, exceeding 1.17 million by November, driven by high inflation, interest rates, corporate corrections after pandemic hiring, and AI adoption, impacting sectors like tech, retail, and government, leading to a tougher job market with fewer new jobs and lower seasonal hiring.” It might seem low when the population is over 335 million, but that doesn’t matter to those who lost their jobs, nor to those raking in the money handing out jobs (like recruitment companies), who are merely direct-mailing all over the place to get their revenue. There is a larger need that is clear in Australia, Canada, New Zealand, the United Kingdom, and several other places.

As I personally see it, they are all in the mindset of “How can I get the same revenue for less work” instead of “How can I achieve more”, because the second setting cleanses the job-loss setting. I am not saying that it solves everything, perhaps not even anything, but what lacks in the mindset is the new prepared mind, which is currently not preparing at all.

And when you think that the US job losses are high now, consider what happens in 2026 when the impact of snowbirds is truly seen on the balance sheets in Florida and California. I reckon that in 2026 San Diego will face a massive job-loss percentage, and that is before the B&Bs that will go bankrupt in California as well as Florida hit US administration records. The Trump administration is losing more and more, and as I see it, those waves will hit faster and faster in 2026. In the meantime there is every chance that Canada will be the next El Dorado, right in the middle of the snow, as that is where fresh drinking water is found. America lost that setting too, because as I see it, no real investigation has been made for close to 10 years, and whatever we see is a mere “generally safe” and the notion that it is the homeowner’s duty to check their wells. But no one is looking at how the groundwater is impacted by chemicals, and there is (as far as I can tell) no real investigation there.

All balls that are dropped; some merely impact individuals and some impact whole populations, all whilst places like Australia, Canada and New Zealand have larger settings to truly check these numbers. Did I show too much balls here? (Sorry, intentional grammar folly.) The balls we see are not always the balls we care about, but they need to be shown to show that there is a larger failing, and it is a very global failing. A setting we all saw coming, but it wasn’t our responsibility and it was not on our plate. Newsflash! The media isn’t doing its job, and as such we need to have a wider look at things that COULD affect us, our families and our loved ones.

Have a great day, except Vancouver and Toronto where I have to say “have a great yesterday”, my personal ever ready time travel jokes remain. 


Filed under Finance, Gaming, IT, Media, Politics, Science

The increased revenue setting

That is what we look for, and I found another setting in something called Airport Technology. You see, we see ‘King Salman International Airport, Saudi Arabia’ (at https://www.airport-technology.com/projects/king-salman-international-airport-saudi-arabia/) and the facts are clear. An airport that covers about 57km², positioning it among the largest airports by footprint, and it is said that “KSIA is expected to handle up to 120 million travelers by 2030, and up to 185 million passengers and 3.5 million tonnes of cargo by 2050”. But I saw more. You see, on the 26th of September I wrote ‘That one idea’ (at https://lawlordtobe.com/2025/09/26/that-one-idea/) where I saw the presentation of a Near Intelligent Parsing (NIP) thought that could revolutionise lost-and-found settings in airports, on railway stations and a few other places. The instant winners of this idea would be Dubai International, Abu Dhabi International, London Heathrow and several other places, and now also King Salman International Airport (KSIA). I would make some alterations to it all. Instead of entering it all, use PDAs to record the data as it happens, and when it is all entered, use what they use in Australian hospitals for wristbands: print that data and attach it to whatever is found. If this is properly done, it will be done in mere minutes, and within an hour people can look for the items; they could pick them up on the way back, and in some cases they could be delivered to their hotel. This would be customer service of a much higher degree.
And as I see it, the five airports (namely King Khalid International Airport, King Abdulaziz International Airport, King Salman International Airport, Dubai International Airport and Zayed International Airport) could become the frontrunners to make a Near Intelligent Parsing (NIP) solution (not calling a solution based on DML/LLM ‘AI’) that could be the next solution for airports all over the world. And there is some personal gratification in seeing America talk about how great their AI solutions are, whilst the little guy in Australia found a solution and hands it over to either Saudi Arabia or the UAE. A solution that was out there in the open, and players like Microsoft (Google and Amazon too) merely left it lying on the floor, and the elements were clearly there. So I hand it over to these two hungry places with the need to see what it can offer for them, and in this it isn’t mine. It was presented by Roger Garcia (from Interworks) and the printing setting is already out there. Merely the joining of two solutions and they are done. So as I see it, another folly for Microsoft (honestly Google and Amazon too). This setting could have been seen by a larger number of players and they all seemingly fell asleep on the job. But I know what Saudis and Emiratis do when they see something that will work for them. They get really active. And so they should.
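The intake flow described above (record on a PDA, print a wristband-style label, make the item searchable within the hour) is simple enough to sketch. This is a minimal illustration under my own assumptions; every class, field and label format here is invented, not any airport's actual system.

```python
# Minimal sketch of the lost-and-found intake flow: log each found item,
# return a printable label ID, and make it searchable within minutes.
# All names and the label format are invented for this illustration.

import uuid
from dataclasses import dataclass, field

@dataclass
class FoundItem:
    description: str
    location: str          # gate, lounge, baggage hall, ...
    label_id: str = field(default_factory=lambda: uuid.uuid4().hex[:8])

class LostAndFound:
    def __init__(self):
        self._items = {}

    def intake(self, description, location):
        """PDA step: log the item and return the label to print and attach."""
        item = FoundItem(description, location)
        self._items[item.label_id] = item
        return item.label_id

    def search(self, keyword):
        """Passenger-facing step: find items by description keyword."""
        return [i for i in self._items.values()
                if keyword.lower() in i.description.lower()]

lf = LostAndFound()
label = lf.intake("black leather wallet", "Gate B12")
print(lf.search("wallet")[0].location)   # Gate B12
```

The point is the turnaround: the data exists the moment the PDA entry is made, so the search and the printed label are both available in minutes, not days.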

And consider that these airports will cater to close to half a billion travellers annually; as such they will need a much better solution than whatever they have at present, and there is the setting for Interworks. And when these solutions set the stage for delivering what was lost, the quality scores will go skywards, and that is the second setting where the west is bottoming out. One presentation set the option from grind to red-carpet walking. A setting overlooked by those captains of industry.

Good work guys!

So whilst I start preparing for the next IP thought I am having, there is still some space to counter the US and its flaming EU critique. Let us remind America that the EU was the collection of ideas from American retail, who were tired of dealing with all those currencies, and in the late 80’s AMERICANS decided to sell the Euro to Europeans, all because they couldn’t sort out their currency software (or currency logistics), and now that it starts working against them they cry like little girls. Go cry me a river. In the meantime I will put ideas worth multiple millions online and let them fly for the revenue-hungry salespeople (and consultants). In this case it wasn’t my idea, I merely adjusted an idea from Interworks and slapped some IP (owned by others) onto it to make a more robust solution. I merely hope to positively charge my karma for when it matters.

Have a great day, except Vancouver, they are still somewhere yesterday.


Filed under IT, Media, Science, Tourism

Labels

That is the setting, and I introduced the readers to this setting yesterday, but there was more, and there always is. Labels are how we tend to communicate; there is the label of ‘Orange baboon’, there is the label of ‘village idiot’, and there are many more labels. They tend to make life ‘easy’ for us. They are also the hidden trap we introduce to ourselves. In the ‘old’ days we even signified Business Intelligence by this, because it was easy for the people running these things.

An example can be seen below.

And we would see the accommodating table with, on one side, completely agree, agree, neutral, disagree and completely disagree, if that was the 5-point labelling setting we embraced, and as such we saw a 'decently' complete picture and we all agreed that this was how it had to be.

But the not so hidden snag is that, first of all, these labels are ordinal (at best) and the settings of Likert scales (their official name) are not set in a scientific way; there is no equally adjusted difference between the numbers 1, 2, 3, 4 and 5. That is just the way it is. And in the old days this was OK (as the feeling went). But today, in what they call the AI setting and I call NIP at best, the setting is too dangerous. Now, set this by today's standards.
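The ordinal point can be shown in a few lines. A minimal sketch, with invented responses, of how treating a 1-to-5 Likert scale as interval data (taking a mean) silently assumes an equal spacing between the labels that was never established:

```python
# Invented Likert responses on a 1-5 scale
# ("completely disagree" .. "completely agree").
from statistics import mean, median

responses = [1, 1, 2, 5, 5, 5]

# The mean pretends the gaps between 1, 2, 3, 4, 5 are equal distances,
# and reports a tidy "slightly above neutral" figure.
interval_view = mean(responses)   # ~3.17

# The median only uses the ordering, which is all an ordinal scale gives us.
ordinal_view = median(responses)  # 3.5

# Neither number reveals the real story: a group polarised between extremes.
print(interval_view, ordinal_view)
```

The polarised sample is the trap: a single summary number from ordinal labels looks "decently complete" while hiding exactly the structure that matters.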

The simple question "Is America bankrupt?" gets all kinds of answers and some will quite correctly give us "In contrast, the financial health of the United States is relatively healthy within the context of the total value of U.S. assets. A much different picture appears once one looks at the underlying asset base of the private and public economy." I tend to disagree, but that is me without my economic degrees. But in the AI world it is a simple setting of numbers and America needs Greenland and Canada to continue the claim that "the United States is relatively healthy within the context of the total value of U.S. assets"; yes, that would be the setting, but without those two places America is likely around bankrupt and the AI bubble will push them over the edge. At least that is how I see it, and yesterday I gave one case (of the dozen or so cases that will follow in 2026) in which a startup is basically agreeing to a larger than 2 billion settlement. So in what universe does a startup have this money? That is the constriction of AI, and in that setting of unverified and unscaled data the present gets to be worse. And I remember an answer given to me at a presentation; the answer was "It is what it is" and I kinda accepted it, but an AI will go bonkers and wrong in several ways when that is handed to it. And that is where the setting of AI and NIP (Near Intelligent Parsing) becomes clear. NIP is merely a 90's chess game that has been taught (trained) every chess game possible and it takes from that setting, but a creative intellect makes an illogical move and the chess game loses whatever coherency it has; that move was never programmed, and that is where you see the difference between AI and NIP. The AI will creatively adjust its setting, the NIP cannot, and that is what will set the stage for all these class actions.

The second setting is 'human' error. You see, I placed the Likert scale intentionally, because in between the multitude of 1-5 scales there is likely one variable that was set to 5-1, and the programmers overlooked it, so in these AI training grounds at least one variable is now set in the wrong direction, tainting the others and messing with the order of the adjusted personal scales. And that is before we get to the results of CLUSTER and QUICKCLUSTER, where a few more issues are introduced to the algorithm of the entire setting, and that is where the verification of data becomes imperative, and at present it is largely absent.
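A reverse-coded item of the kind described can often be caught mechanically: on a 1-to-5 scale, an item accidentally coded 5-to-1 will correlate negatively with the rest of the battery, and flipping it back is just 6 - x. A rough sketch with made-up numbers (the function names are mine, not any particular statistics package's):

```python
def mean(xs):
    return sum(xs) / len(xs)

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def flip_if_reversed(item, scale_total, scale_min=1, scale_max=5):
    """If an item correlates negatively with the rest of the scale,
    assume it was coded 5-1 and reverse it (6 - x on a 1-5 scale)."""
    if pearson(item, scale_total) < 0:
        return [scale_max + scale_min - x for x in item]
    return item

# Invented data: the item was entered 5-1 while the battery runs 1-5.
fixed = flip_if_reversed([5, 4, 1, 2], [1, 2, 5, 4])
print(fixed)  # the flipped item now runs with the rest of the battery
```

It is a blunt heuristic (a genuinely contrarian item would also trip it), which is exactly why this cleaning step needs a human who knows the questionnaire, not just the pipeline.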

So here is a sort of random image, but the question it needs to raise is: what makes these different sources in any way qualified to be a source? In this case, if the data is skewed in Ask Reddit, 93% of that data is basically useless and that is missed on a few levels. There are high-quality data sources, but they are few and far between; in the meantime the other sources get to warp any data we have. And if you are merely looking at legacy data, there is still the Likert-scale data your own company had, and that data is debatable at best.
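One crude way to act on that difference in source quality is to weight sources instead of pooling them flat. The reliability scores and weights below are invented for illustration; assigning them honestly (and verifiably) is the hard part the paragraph is pointing at:

```python
# Invented per-source reliability scores and trust weights.
sources = {
    "peer_reviewed_journal": {"reliability": 0.95, "weight": 1.0},
    "ask_reddit":            {"reliability": 0.07, "weight": 0.1},
}

def blended_reliability(sources):
    """Weight each source's reliability by how much we trust it,
    instead of letting every source count equally."""
    total_weight = sum(s["weight"] for s in sources.values())
    weighted_sum = sum(s["reliability"] * s["weight"] for s in sources.values())
    return weighted_sum / total_weight

flat = sum(s["reliability"] for s in sources.values()) / len(sources)  # 0.51
weighted = blended_reliability(sources)                                # 0.87

print(flat, weighted)
```

The flat average lets the low-quality source drag the pool down to a coin flip; the weighted one at least reflects a judgement about provenance, which is the verification step the current training pipelines skip.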

Labels are dangerous and they are inherently based on the designer of that data source (possibly even long dead), and the design tends to be done in his or her early stages of employment, making the setting even more debatable, as it was 'influenced' by greedy CEOs and CFOs who had their bonus in mind. A setting mostly ignored by all involved.

As such, are you surprised that I see the AI bubble for what it is? A dangerous reality coming our way in sudden, likely unforeseen ways, and it is the 'unforeseen way' that is the danger, because when these disgruntled employees talk to those who want to win a class action, all kinds of data will come to the surface and that is how these class actions are won.

It was a simple setting I saw coming a mile away, and whilst you wandered by I added the Dr. Strange part; you merely thought you had the labels thought through, but the setting was a lot more dangerous and it is heading straight for your AI dataset. All wrongly thought through, because training data needs to have something verifiable as 'absolutely true', and that is the true setting, and to illustrate this we can merely make a stop at Elon Musk inc. Its 'AI' Grok has the almost perfect setting. We are given from one source "The bot has generated various controversial responses, including conspiracy theories, antisemitism, and praise of Adolf Hitler, as well as referring to Musk's views when asked about controversial topics or difficult decisions." Which is almost a dangerous setting towards people fuelling Grok in a multitude of ways, and 'Hundreds of thousands of Grok chats exposed in Google results' (at https://www.bbc.com/news/articles/cdrkmk00jy0o) where we see "The appearance of Grok chats in search engine results was first reported by tech industry publication Forbes, which counted more than 370,000 user conversations on Google. Among chat transcripts seen by the BBC were examples of Musk's chatbot being asked to create a secure password, provide meal plans for weight loss and answer detailed questions about medical conditions." Is there anybody willing to do the honours of classifying that data (I absolutely refuse to do so)? I already gave you the headwind in the above story. In the first place, how many of these 370,000 users are medical professionals? I think you know where this is going. And I think Grok is pretty neat as a result, but it is not academically useful.
At best it is a new form of Wikipedia, at worst it is a round data system (trashcan), and even though it sounds nice, it is as nice as labels can be, and that is exactly why these class cases will be decided out of court. As I personally see it, when these hit, Microsoft and OpenAI will shell out trillions to settle out of court, because the court damage will be infinitely worse. And that is why I see 2026 as the year the greed-driven get to start filling their pockets, because the mental hurt that is brought to court is as academic as a Likert scale, not a scientific setting among them, and the pre-AI setting of mental harm reads as "'Mental damage' in court refers to psychological injury, such as emotional trauma or psychiatric conditions, that can be the basis for legal claims, either as a plaintiff seeking compensation or as a criminal defendant. In civil cases, plaintiffs may seek damages for mental harm like PTSD, depression, or anxiety if they can prove it was caused by another party's negligent or wrongful actions, provided it results in a recognizable psychiatric illness." So as you see it, is this enough or do you want more? Oh, screw that, I need coffee now and I have a busy day ahead, so this is all you get for now.

Have a great day, I am trying to enjoy Thursday, Vancouver is a lot behind me on this effort. So there is a time scale we all have to adhere to (hidden nudge) as such enjoy the day.

Leave a comment

Filed under Finance, IT, Media, Politics, Science

Lost thoughts

This is where I am, lost in thoughts. Torn between my personal conviction that the AI bubble is real and the fake thoughts on LinkedIn and YouTube making 'their' case on the AI bubble. One is set on thoughts of doubt considering the technology we currently have; the other thoughts are all fake perceptions by influencers trying to gain a following. So how can anyone get any thought straight? Yet in all this there are several people in doubt on their own (justified) fringes. One of them is the ABC, which gives us 'US risks AI debt bubble as China faces its 'arithmetic problem', leading analysts warn' (at https://www.abc.net.au/news/2025-11-11/marc-sumerlin-federal-reserve-michael-pettis-china/105992570). So in the first setting, what is the US doing with the AI debt? Didn't they learn their lesson in 2008? In the first setting we get "Mr Sumerlin says he is increasingly worried about a slowing economy and a debt bubble in the artificial intelligence sector." That is fair (to a certain degree); a US Federal Reserve chair contender has the economic settings, but as I look back to 2008, that game put hundreds of thousands on the brink of desperation, and now it isn't a boom of CDOs and stocks. Now it is a dozen firms who will demand an umbrella from that same Federal Reserve to stay in business. And Mr.
Sumerlin gives us "He is increasingly concerned about a slowdown in the US economy, which is why he thinks the Fed needs to cut interest rates again in December and perhaps a couple more times next year." I cannot comment on that, but it sounds fair (I lack economic degrees), and outside of this AI bubble setting we are given "US President Donald Trump has recently posted on his social media account about giving all Americans not on high incomes, a $US2,000 tariff "dividend" — an idea which Mr Sumerlin, a one-time economic adviser to former US president George W Bush, said could stoke inflation." I get it, but it sounds unfair; the idea that an AI bubble is forming is real, the setting that people get a dividend that could stoke inflation might be real (they didn't get the money yet), but they are unrelated inflation settings, and whilst they could give a much larger rise to the dangers of the AI bubble, that doesn't make it so. The bubble is already real because the technology is warped, and the class cases we will see coming in 2026 are based on 'allegedly fraudulent' sales towards the AI setting, and if you wonder what happens: these firms buying into that AI solution will cry havoc (no return on AI investment) when that happens, and it will happen, of that I have very little doubt.

So then we get to the second setting, and that is the claim that 'China has an arithmetic problem'. I am at a loss as to what they mean, and the ABC explanation is "But if you have a GDP growth target, and you can’t get consumption to grow more quickly, you can’t allow investment to grow more slowly because together they add up to growth. They’re over-invested almost across the board, so policy consists of trying to find out which sectors are least likely to be harmed by additional over-investment."

Professor Pettis said that, to curry favour with the central government, local governments had skewed over-investment into areas such as solar panels, batteries, electric vehicles and other industries deemed a priority by Beijing. This kinda makes sense to me, but as I see it, that is an economic setting, not an AI setting. What I think is happening is that both the USA and China have their own bubble settings, and these bubbles will collide in the most unfortunate ways possible.

But there is also a flip side. As I see it, Huawei is chasing its own AI dream in a novel way that relies on a mere fraction of what the west needs, so the west will be coming up short soon, a setting that Huawei is not facing at present, and Huawei will be rolling out its centres in multiple ways while the western settings are running out of juice (as the expression goes).

Is this going to happen? I think so, but it depends on a number of settings that have not played out yet, so the fear is partially too soon and based on too little information. But on the side I have been powering my brain on another setting. As time goes I have been thinking through the third Dr. Strange movie, and here I had the novel idea which could give us a nice setting where the strain is between too rigid and too flexible, and it is a (sort of) stage between Dr. Strange (Benedict Cumberbatch) and Baron Mordo (Chiwetel Ejiofor); the idea was to set the given stage of being too rigid (Mordo) against overly flexible (Strange), and in between are the settings of Mordo's African village, and as Mordo is protecting them we see the optional setting that Kraven (Aaron Taylor-Johnson) gets involved, and that gets Dr. Strange in the mix. The nice setting is that neither is evil; they tend to fight evil and it is the label that gets seen. Anyway, that was a setting I went through this morning.

You might wonder why I mentioned this. You see, bubbles are just as much labels as anything, and it becomes a bubble when asset prices surge rapidly, far exceeding their intrinsic value, often fuelled by speculation and investor orgasms. This is followed by a sharp and sudden market crash, or "burst," when prices collapse, leading to significant, rather weighty losses for investors. And they will then cry like little girls over the losses in their wallets. But that too is a label. Just like in an IT bubble, the players tend to be rigid and wholly focused on their profits, and they tend to go with the 'roll with it' philosophy, and that is where the AI is at present; they don't care that the technology isn't ready yet, and they do not care about DML and LLM, and they want to program around the AI negativity, but that negativity could be averted in larger streams when proper DML information is given to the customers. They dug their own graves here, as the customer demands AI; the customer might not know what it is (but they want it), for they learned in comic books what AI was, and they embrace that. Not the reality given by Alan Turing, but what the comic books fed them through Brainiac. And there is an overlap between what is perceived and what is real, and that is what will fuel the AI bubble towards implosion (a massive one), and I personally reckon that 2026 will fuel it through the class actions, and the beginning is already here. As the Conversation hands us "Anthropic, an AI startup founded in 2021, has reached a groundbreaking US$1.5 billion settlement (AU$2.28 billion) in a class-action copyright lawsuit. The case was initiated in 2024 by novelist Andrea Bartz and non-fiction writers Charles Graeber and Kirk Wallace Johnson." Which we get from 'An AI startup has agreed to a $2.2 billion copyright settlement.
But will Australian writers benefit?' (at https://theconversation.com/an-ai-startup-has-agreed-to-a-2-2-billion-copyright-settlement-but-will-australian-writers-benefit-264771) less than 6 weeks ago. And the entire AI setting has a few more class actions coming its way. So before you judge me as crazy (which might be fair too), the news is already out there; the question is which lobbyists are quieting down the noise, because that is noise according to their elected voters. You might wonder how one affects the other. Well, that is a fair question, but it holds water, as these so-called AIs (I call them Near Intelligent Parsers, or NIP) require training materials, and when the materials are thrown off the stage there is no learning, and no half-baked AI will hold its own water, and that is what is coming.

A simple setting that could be seen by anyone who looked at the technology to the degree it deserved. Have a great day this midweek day.

Leave a comment

Filed under Finance, IT, movies, Politics, Science

What do bubbles do?

There was a game in the late 80s; I played it on the CBM 64. It was called Bubble Bobble. There was a cute little dragon (the player) and the aim of the game was to pop as many bubbles as you could. So, fast forward to today. There were a few news messages. The first one is 'OpenAI's $1 Trillion IPO' (at https://247wallst.com/investing/2025/10/30/openais-1-trillion-ipo/), which I actually saw last of the three. We see ridiculous amounts of money pass by. We are given 'OpenAI valuation hits $762b after new deal with Microsoft' with "The deal refashions the $US500 billion ($758 billion) company as a public benefit corporation that is controlled by a nonprofit with a stake in OpenAI's financial success." We see all kinds of 'news' articles giving these players more and more money. It's like watching a bad hand of Texas Hold'em where everyone is in it with all they have. As the information goes, it is part of the sacking of 14,000 employees by Amazon. And they will not see the dangers they are putting the population in. This is not merely speculation, or presumption. It is the deadly serious danger of bubbles bursting, and we are unwittingly the dragon popping them.

So the article gives us "If anyone needs proof that the AI-driven stock market is frothy, it is this $1 trillion figure. In the first half of the year, OpenAI lost $13.5 billion, on revenue of $4.3 billion. It is on track to lose $27 billion for the year. One estimate shows OpenAI will burn $115 billion by 2029. It may not make money until that year." So as I see it, that is a valuation that is 4 years into the future with a market as liquid as it is? No one is looking at what Huawei is doing or whether it can bolster its innovative streak, because when that happens we will get an immediate write-off of no less than $6,000,000,000,000 and it will impact Microsoft (who now owns 27% of OpenAI), and OpenAI will bank on the western world to 'bail' them out, not realising that the actions of President Trump made that impossible and both the EU and Commonwealth are ready and willing to listen to Huawei and China. That is the dreaded undertow in this water.

All whilst the BBC reports "Under the terms, Microsoft can now pursue artificial general intelligence – sometimes defined as AI that surpasses human intelligence – on its own or with other parties, the companies said. OpenAI also said it was convening an expert panel that will verify any declaration by the company that it has achieved artificial general intelligence. The company did not share who would serve on the panel when approached by the BBC." And there are two issues already hiding under the shallows. The first is data value; you see, data that cannot be verified or validated is useless and has no value, and these AI chasers have been so involved in the settings of the so-called hyped technology that everyone forgets that it requires data. I think that this is a big 'oopsy' part in that equation. And the setting we are given is that it is pushed into the background, all whilst it needs to have a front-and-centre setting. You see, when the first few class cases are thrown into the ring, lawyers will demand the algorithm and data settings and that will scuttle these bubbles like ships in the ocean, and the turmoil of those waters will burst the bubbles and drown whomever is caught in the wake. And be certain that you realise that lawyers on a global setting are at this moment gearing up for that first case, because it will give them billions in class actions, and leave it to greed to cut this issue down to size. Microsoft and OpenAI will banter, cry and give them scapegoats for lunch, but they will be out in front and they will be cut down to size. As will Google, and optionally Amazon and IBM too.
I already found a few issues in Google's setting (actors staged into a movie before they were born is my favourite one) and that is merely the tip of the iceberg; it will be bigger than the one that sank the Titanic and it is heading straight for the Good Ship Lollipop (AI). The spectacle will be quite a sight and all the media will hurry to get their pound of flesh, and Microsoft will be massively exposed at that point (due to previous actions).

A setting that is going to hit everyone, and the second setting is blatantly ignored by the media. You see, these data centers, how are they powered? As I see it, the Stargate program will require (by my admittedly inaccurate multi-gigawatt estimate) a massive amount of power. The people in West Virginia are already complaining about what there is, and a multiple of that will be added all over the USA; the UAE and a few other places will see them coming, and these power settings are blatantly short. The UAE is likely close to par, and that sets the dangers of shortcomings. And what happens to any data center that doesn't get enough power? Yup, you guessed it, it will go down in a hurry. So how is that fictive setting of AI dealing with this?
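To put a hedged number on that power point: the figures below are openly assumed, not sourced, but even a back-of-the-envelope sum shows the scale of the problem. One hypothetical 1 GW campus draws roughly the continuous load of hundreds of thousands of homes:

```python
# All inputs are assumptions for a back-of-the-envelope estimate only.
CAMPUS_DEMAND_GW = 1.0   # assumed continuous draw of one hypothetical AI campus
HOUSEHOLD_AVG_KW = 1.2   # assumed average continuous household load

campus_kw = CAMPUS_DEMAND_GW * 1_000_000          # 1 GW expressed in kW
household_equivalent = campus_kw / HOUSEHOLD_AVG_KW

print(f"~{household_equivalent:,.0f} households per campus")
```

Multiply that by the number of campuses planned, and "blatantly short" stops being rhetoric and becomes arithmetic.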

Then we get a new instance (at https://cyberpress.org/new-agent-aware-cloaking-technique-exploits-openai-chatgpt-atlas-browser-to-serve-fake-content/) where we are given 'New Agent-Aware Cloaking Technique Exploits OpenAI ChatGPT Atlas Browser to Serve Fake Content'. As I personally see it, I never considered that part, but in this day and age the need to serve fake content is as important as anything, and it serves the millions of trolls and the influencers in many ways, and it degrades the data that is shown to the DMLs and LLMs (aka NIP) in a hurry, reducing data credibility and other settings pretty much off the bat.

So what is being done about that? As we are given "The vulnerability, termed “agent-aware cloaking,” allows attackers to serve different webpage versions to AI crawlers like OpenAI’s Atlas, ChatGPT, and Perplexity while displaying legitimate content to regular users. This technique represents a significant evolution of traditional cloaking attacks, weaponizing the trust that AI systems place in web-retrieved data." So where does the internet go after that? So far I have been able to get the goods with the Google browser and it does a fine job, but even that setting comes under scrutiny; until they set a parameter in their browser to only look at Google data, they are in danger of floating rubbish at any given corner.
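The cloaking described above can at least be probed for. A minimal sketch: fetch the same URL twice, once with a browser User-Agent and once with a crawler-style one, and compare what comes back. The User-Agent strings and the size-based heuristic are my assumptions for illustration, not a reproduction of the reported attack or of any real crawler's identifier:

```python
import urllib.request

BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"  # ordinary browser
CRAWLER_UA = "ExampleAIBot/1.0"  # hypothetical AI-crawler identifier

def fetch_as(url, user_agent, timeout=10):
    """Fetch a URL while presenting the given User-Agent header."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read()

def looks_cloaked(body_a, body_b, threshold=0.2):
    """Crude heuristic: flag the page if the two response bodies differ in
    size by more than `threshold` of the longer one. Real detection would
    diff the actual content, not just the length."""
    longer = max(len(body_a), len(body_b), 1)
    return abs(len(body_a) - len(body_b)) / longer > threshold

# Usage (requires network access):
#   a = fetch_as("https://example.com", BROWSER_UA)
#   b = fetch_as("https://example.com", CRAWLER_UA)
#   print(looks_cloaked(a, b))
```

A sophisticated cloaker can of course key on IP ranges rather than headers, which is why the burden really sits with the crawler operators, not with anyone probing from outside.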

A setting that is now out in the open, and as we are 'supposed' to trust Microsoft and OpenAI until 2029, we are handed an empty eggshell, and I am in doubt of it all, as too many players have 'dissed' Huawei and they are out there ready to show the world how it could be done. If they succeed, that 1 trillion IPO is left in the dirt and we get another two years of Microsoft spin on how they can counter that. I put that in the same collection box where I put the claim that Microsoft allegedly had its own more powerful item that could counter Unreal Engine 5. That collection box is in the kitchen and it is referred to as the trashcan.

Yes, this bubble is going 'bang' without any noise, because the vested-interest partners need to get their money out before it is too late. And the rest? As I personally see it, the rest is screwed. Have a great day, as the weekend started for me and it will start in 8 hours in Vancouver (but they can start happy hour in about one hour), so they can start the weekend early. Have a great one and watch out for the bubbles out there.

1 Comment

Filed under Finance, IT, Law, Media, Politics, Science

The start of something bad

That is how I saw the news (at https://www.khaleejtimes.com/business/tech/dubais-10000-ai-firms-goal-to-redefine-competitiveness-power-uaes-startup-vision) with the headline 'Dubai's 10,000 AI-firms goal to redefine competitiveness, power UAE's startup vision'. There is always a risk when you launch a startup, but the drive towards something that doesn't even exist is downright folly (as I see it), and now it is driven to a 10,000-times setting of folly. That is what I perceive. But let's go through the setting to explain what I am seeing.

First there is the novel setting, and it is one that needs explaining. You see, AI doesn't yet exist; even what we have now is merely DML (Deeper Machine Learning), and it is accompanied at times by LLMs (Large Language Models), and these solutions can actually be great, but the foundations of AI are not yet met, and take it from me, it matters. Actually, never take my word for it, so let's throw some settings at you. First there is 'Deloitte to pay money back to Albanese government after using AI in $440,000 report' and then we get to 'Lawyer caught using AI-generated false citations in court case penalised in Australian first' (the source for both is the Guardian). There is something behind this. The setting of verification is adamant in both. You see, whatever we now call AI isn't it, and whatever data is thrown at it is taken almost literally at face value. Data verification is overlooked at nearly every corner, and then we get to Microsoft with its 'support' of builder.ai with the mention that it was good. It lasted less than a month and the 'backing' of a billion dollars went away like snow in a heatwave. They used 700 engineers to do what could not be done (as I personally see it). So we have these settings that are already out there.

Then (two weeks ago) the Guardian gives us (at https://www.theguardian.com/business/2025/oct/08/bank-of-england-warns-of-growing-risk-that-ai-bubble-could-burst) 'Bank of England warns of growing risk that AI bubble could burst' with the byline "Possibility of 'sharp market correction has increased', says Bank's financial policy committee". Now consider this setting with the valuation of 10,000 firms getting a rather large 'market correction', and I think that this happens when it is least opportune for the UAE. This takes me to the old expression we had in the 80s: "You can lose your money in three ways. First there are women, which is the prettiest way to lose your money; then through gambling, which is the quickest way to lose your money; and the third way is through IT, which is the surest way to lose your money." And now I would like to add "the fourth way is AI, which is both quick and sure to lose your money"; that is the prefix to the equation. And the setting we aren't given is set out in several pieces all over the place. One of them was given to us in ABC News (at https://www.abc.net.au/news/2025-10-20/ai-crypto-bubbles-speculative-mania/105884508) with 'If AI and crypto aren't bubbles, we could be in big trouble', where we see "What if the trillions of dollars placed on those bets turn out to be good investments? The disruption will be epic, and terrible. A lot of speculative manias are just fun for a while and then the last in lose their shirts, not much harm done, like the tulips of 1635, and the comic book and silver bubbles of the late 1980s. Sometimes the losses are so great that banks go broke as well, which leads to a frozen financial system, recession and unemployment, as in 1929 and 2008." As I personally see it, America is going all in as they are already beyond broke, so they have nothing to lose, but the UAE and Saudi Arabia have plenty to lose, and the America-firsters are happy to squander whatever those two have.
I reckon that Oracle has its fallback position, so it is largely off the hook, but OpenAI is willing to chance it all. And that is the American portfolio, Microsoft and a few others. They are playing bluff with, as I see it, the wrong players, and when others ignore the warnings of the Bank of England they merely get what is coming to them, and it is a game I do not approve of, because it is based on the bluff that gets us 'we are too big to fail' and I do not agree, but they will say that it is all based on retirement numbers and other 'needly' things. This is why America needs Canada to become the 51st state so desperately; they are (as I personally see it) ready to use whatever troll army they have to smear Canada. But I am not having it, and as I see "Dubai’s bold target to attract 10,000 artificial intelligence firms by 2030 is evolving from vision to execution, signaling a new phase in the emirate’s transformation into a global technology powerhouse. As a follow-up to earlier announcements positioning the UAE as the “Startup Capital of the World,” recent developments in AI infrastructure, capital inflows, and global partnerships show how this goal is being operationalised — potentially reshaping Dubai’s economic structure and reinforcing its competitive edge in the global digital economy." I believe that those behind this have the best interests of the Emirati at heart, but I do not trust the people behind this drive (outside of the UAE). I believe that this bubble will burst after the funds are met with smiles, only for these people to go out of business with a bulky severance check. It is almost like the role Ryan Gosling played in The Big Short, where Jared Vennett receives a bonus of $47 million for profits made on his CDSs. It feels almost too alike. And I feel I have to speak up. Now, if someone can sink my logic, I am fine with that, but let those running to this future verify whatever they have and not merely accept what is said.
I am happy to be wrong, but the setting feels off (by a lot) and I would rather be wrong than be silent on this, because as I see it, when there is a 'market correction' of $2,000,000,000,000 you can consider yourself sold down the river, because there is a cost to such a correction and it should be 100% on the American shores and 0% on the Arabic, Commonwealth or European shores. But that is merely my short-sighted view on the matter.

So when we get to "Omar Sultan Al Olama, Minister of State for Artificial Intelligence, Digital Economy, and Remote Work Applications, said the goal reflects the UAE’s determination to lead globally in frontier technology. “Dubai’s target to attract 10,000 AI companies over the next five years is not a dream — it is a commitment to building the world’s most dynamic and future-ready digital economy,” he said. “We already host more than 1,500 pure AI companies — the highest number in the region — but this is just the beginning. Our strategy is to bring in creators and producers of technology, not just users. That’s how we sustain competitiveness and shape the industries of tomorrow.”" I am slightly worried, because there is an impact from these 1,500 companies. Now, be warned, there are plenty of great applications of DML and LLM and these firms should be protected. But the setting of 10,000 AI companies worries me, as AI doesn't yet exist and the stage for agentic programming is clear and certain. I would like to rephrase this into "We should keep a clear line of achievements in what is referred to as AI and what AI companies are supposed to see as clear achievements." This requires explanation, as I see whatever is called AI as NIP (Near Intelligent Parsing), and that is currently the impact of DML and LLM, and I have seen several good projects, but that is set on a stage that has a definite pipeline of achievements and interested parties. And for the most part the threshold is a curve of verifiable data. That data is scrutinised to a larger degree and tends to be (at times) based on the original legacy data. It still requires cleaning, but to a smaller degree than data that comes from wherever.

So do not be dissuaded from your plans to enter the AI field, but be clear about what it is based on, and particularly the data that is being used. So have a great day, and as we get to my lunch time there is ample space for that now. Enjoy your day.

1 Comment

Filed under Finance, IT, Politics, Science