Tag Archives: Artificial Intelligence

See Tea or else

Yes, I promised you a story full of intrigue, filled with bad Jedi and happy Sith, only 20 hours ago. And here it comes (I’m watching Star Wars episode 2 at the moment). You see, there is a setting where we can watch the unfolding of what some laughingly call ‘Artificial Intelligence’ (it would be, if it were designed by the CIA, but the American Administration is now in shutdown). To get there, there are three parts.

In part 1 we look at the ‘disinformation’ and here we see the parts that do not match. You see, Deb mashed potatoes with onion were discontinued in both Coles and Woolworths. IGA still has it, as I was able to verify in person (I had to travel to Summer Hill for that). So this is part 1.

Now we get to the slightly better stuff. You see, some might think that combining DML with Predictive Analytics (some think it is AI) is a solution. You merely set this loose on a massive database and voilà (a theatrical ‘here it is’) and that is that. This is merely my version of what I think is happening.

You merely set the model on all the articles you have, give it settings like ‘minimum order size’, ‘estimated margin per item’ and a few other things, and there you have a matrix showing the items that just don’t make the cut for your ‘predicted margin of profit’ model, and they are ‘discontinued’. It goes on for nearly all retail models, and it might be a consideration that this is a speculated reason why PM Albanese invited Lulu into the mix against Coles, Woolworths, IGA and Aldi. I have no data on this, but I reckon it might be a way to stop the DML/Predictive Analytics madness. You see, it is folly to try to get any customer 100% happy (it really is), so these giants aim for a mere 90% and throw the lowest-margin articles out of their consideration. But there is a flaw: cutting the bottom 10% of articles is a start, yet do the same again on what remains and you sit at 90% of 90%, a base of 81% of the original range, and so we are off to the races. And as there is no substitute for added pressure, Lulu gets invited to Australia (in case the others went the way of the dodo, I mean Coles and Woolworths). There is no supporting evidence, so this is (highly) speculative.

But there is another setting. You see, this solution requires programming skills, and that is where ‘Accenture plans to boot staff it can’t train to use AI, 12,000 already culled’ comes in. This solution will require hundreds, if not thousands, of people being reskilled, and places like Accenture cannot do that unless they trim the staff they have in several places. And 12,000 were ‘culled’ because keeping them hinders the bottom line. To support this I offer the following thoughts: how much time was taken to assess whether a person could be reskilled? Who had the knowledge to make that assessment, and what time frame was set for it? If this goes through it will mean a lot of engineers will be required at short notice.
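To make that speculation a little more tangible, here is a minimal sketch of the kind of margin-cut loop I am describing. The item names, the margins and the 90% keep-rate are all invented for illustration; no retailer has shown me their code.

```python
# A purely speculative sketch of a margin-cut model; all numbers are invented.

def cull_low_margin(items, keep_fraction=0.90):
    """Rank items by predicted margin and discontinue the bottom tier."""
    ranked = sorted(items, key=lambda item: item["predicted_margin"], reverse=True)
    keep = int(len(ranked) * keep_fraction)
    return ranked[:keep], ranked[keep:]

catalogue = [
    {"name": f"item-{i}", "predicted_margin": m}
    for i, m in enumerate([0.31, 0.22, 0.18, 0.12, 0.09, 0.07, 0.05, 0.04, 0.02, 0.01])
]

coverage = 1.0
for round_no in range(1, 4):
    catalogue, dropped = cull_low_margin(catalogue)
    coverage *= 0.90  # each pass keeps ~90% of what the previous pass kept
    print(f"round {round_no}: dropped {[d['name'] for d in dropped]}, "
          f"coverage of the original range is now ~{coverage:.0%}")
# Round 1 leaves ~90%, round 2 ~81%, round 3 ~73%: the compounding described above.
```

Run it once and the shelf shrinks a little; run it every review cycle and the 90% target quietly compounds downward, which is the point I am trying to make.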

And I merely used the Deb potato mash as an example, but what happens when this pattern is released on pharmacy or other items? So whilst we might think that Accenture is dabbling in greed, the plain setting is that this is the direction that commerce is driving itself into.

And this setting is about to be let loose on unverified data. Consider that Gemini AI had it wrong on Coles and Woolworths (see image), so what else did they get wrong, and when that data is unverified how will the Predictive Analytics work with any level of accuracy? Simple questions at the top of my mind. And that was the setting of that ‘so-called’ AI.

Now, the setting is that parts of this are speculation, but does this make it wrong? It might be unverified, but the setting of the 12,000 culled into joblessness is recorded all over the media, and it is for the reason of ‘reskilling’. But what makes it impossible to reskill a person? As I see it, it is merely time, and time is what Accenture seemingly doesn’t have. And the setting of DML and Predictive Analytics? I see that as limited by viable data, and that is the setting that plenty are ignoring. Some will ‘embrace’ the customer, telling them that their data is awesome, but that is the second folly in this. Most of them are merely at the tally stage and their systems tend to come from legacy data, implying they are filled with hole upon hole of non-data.

So think of this what you want, but the larger setting is about limiting YOUR ability to choose because it affects THEIR profit margins. Come to think of it, when was the last time you saw sarsaparilla on the shelves of your supermarket? I remember a few years back there was Black Knight licorice; where did that go? So think of all the things you liked that are no longer there, and ask why that is. Some are unviable, as these chains cater to hundreds of thousands of customers and they need to ‘adjust’ their stock accordingly. But what was denied to you? And the setting of adding predictive analytics to their profit mix is only making that worse for you. So what about part 3? Well, that is where you, the consumer, come in; it is what defines you, not what ‘their’ unverified data says you are.

So have a think about what you are about to lose, have a great day and enjoy your next coffee, before you are forced onto their brand of Nescafé.


Filed under Finance, IT, Media, Politics, Science

The Delphi setting

That is always merely a breath away. At some point the decline of the Oracle became a given, and the looting of the place by Constantine the Great contributed to its demise. But for the most part I have never heard that the Oracle became a non-issue; it always struck me as weird that this never happened. Even today most of us call the sayings of the gods ludicrous, or perhaps better, as the Catholics might say, sacrilege. Yet the power of the Oracle of Delphi has seemingly never waned to zero.

This is the thought I had today, as yesterday the news on Oracle was pushed to the fore (mostly at Yahoo Finance) with all kinds of messages. We start with ‘Oracle (ORCL) Initiated at Sell by Rothschild Redburn, $175 Price Target Set’ and it is followed by “According to the firm, the market is materially overestimating the value of Oracle’s contracted cloud revenues. In big, single-tenant, large-scale deployments, the company acts more like a financier than a cloud provider, “with economics far removed from the model investors prize.”” As well as “Oracle’s five-year cloud revenue guidance is equal to $60B in value. This reflects that the market is already pricing in a “risky blue-sky scenario that is unlikely to materialize.”” My first issue is “Why?” You see, even as I do not trust (or believe) AI, its foundation is set on data, as it always was. Data is the holy grail of AI, that much is certain, and it will remain so for decades to come. So, who will you trust with your data? Microsoft with its Azure? As I see it Microsoft can’t see real innovation through the bushes of their own proclaimed innovation, and as hackers proclaim that Israel is storing a particular form of its ‘defense’ data in Azure, there might be a security issue as well, and that is a total blocker. There are good data solutions at Google, IBM and Amazon, but they all consider Oracle to be the Rolls-Royce of data carriers.

Then we get the next setting of ‘Nvidia And Oracle Headline 7 Promising Stocks With Mojo: Analysts’ and they give us “What’s especially impressive is that these stocks are already up 30% or more this year. That blows away the 12.9% gain by the S&P 500 this year. So these are the big winners Wall Street still has high hopes for.” As such we see that in spite of all the stupidities the American political engine performs, these two are kind of hot, and it makes sense that they are; even if I have some reservations, there was never a doubt that Oracle could grow through it. That makes the statement from Rothschild debatable, and me, without economic degrees, calling Rothschild on this is better than sex (even if Olivia Wilde would call on me in the next hour calling me a fucking tool, which is followed by a rather loud giggle from me). So then we get to ‘Why Oracle’s Cloud Computing Deals With Meta Platforms and OpenAI Make The “Ten Titans” Growth Stock a Top Buy Now’, a setting that the Motley Crew gives us (what do they know of IT?). We are given “the company announced plans to increase Oracle Cloud Infrastructure (OCI) revenue by more than 14-fold in five years. But that news proved to be just one splash amid a sea of waves. Reports indicate that Oracle and Meta Platforms are in talks on a $20 billion cloud computing deal. And Oracle and OpenAI are building on their $300 billion partnership with the rollout of five new data centers custom-built for artificial intelligence (AI).” No matter where they are, more than 14-fold revenue growth in five years is massive, unbelievably massive. Now, no matter how this turns, the one-day lightbulbs who believe in their AI settings will have to invest the money to make it work, and that is the beginning of a setting where Oracle wins, no matter how that turns out. As such the AI wannabes are fueling the increase and funding the foundations of these data centers. And we are given “Google Cloud serve a variety of general compute customers. However, Oracle’s data centers are specifically designed for AI.

Oracle is a good example of why lacking a first-mover advantage isn’t a deal-breaker. Oracle’s data centers are newer and faster. And it’s bringing over 70 of them online in just a few years, which is why it expects OCI growth to reach an inflection point in fiscal 2027.” I reckon that it will serve several purposes, but it is more AI-focused than other centers, although I have no real idea where Amazon and IBM stand. I reckon that Oracle could cater to the needs of Snowflake and allow its customers to grow their needs, and it will do so a lot better than leaving them a little Azure blue with questions. I saw the need for a lost-and-found application that could see adoption by nearly all airports, and when you are in, you are in. I reckon that InterWorks should talk about Snowflake adoption through Oracle, but that is just me.
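To put that ‘more than 14-fold in five years’ figure in perspective, here is a quick back-of-envelope; the arithmetic is mine, not Oracle’s or the Motley Crew’s.

```python
# Back-of-envelope only: what 'more than 14-fold in five years' implies
# as a compound annual growth rate. My arithmetic, not Oracle guidance.
target_multiple = 14
years = 5
cagr = target_multiple ** (1 / years) - 1
print(f"required compound annual growth: {cagr:.1%}")  # roughly 70% per year, five years running
```

Whether that is achievable is a separate matter; the point is merely what the guidance mathematically requires.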

Then we get an article that matters (at least it seems to). We are given ‘Analyst Says Oracle (ORCL) Deal With OpenAI is ‘Very Risky’ – ‘Not a Customer That Can Pay Their Obligations’’ and I see “One is if you go back to the transcripts from Oracle Corp (NYSE:ORCL) for the last few quarters, you’ll see that it’s not just the last deal from OpenAI that increased their backlog. It’s actually been several quarters where it’s really OpenAI that’s been driving all of this. Having that is the only thing that’s added value to Oracle Corp (NYSE:ORCL) is very risky. That’s not a customer that can pay all their obligations. They’re double, triple booking, maybe quadruple booking capacity. They will not be able to live to those obligations. So if you’re adding $400 billion of market cap to Oracle Corp (NYSE:ORCL) based on that, I think we should revisit the math.” OK, I am in (not knowing the math he talks about), and we see “OpenAI is expected to burn about $115 billion over the next four years and is not projected to be profitable until 2030. Even after Nvidia’s latest $100 billion investment by Nvidia, OpenAI will likely need to raise over $200 billion in total funding to cover its commitments. Some analysts believe Oracle may need to borrow tens of billions to build enough data centers for the deal.” OK, that sounds fair, but some seem to forget that Larry Ellison is worth 344,000 million (sounds much better than 344 billion), as such he can get those numbers without any question. And if he is right he will triple his value overnight as these data centers come online. And that is when the article shoots itself in the foot. They do it by giving us “While we acknowledge the potential of ORCL as an investment, our conviction lies in the belief that some AI stocks hold greater promise for delivering higher returns and have limited downside risk. If you are looking for an extremely cheap AI stock that is also a major beneficiary of Trump tariffs and onshoring, see our free report on the best short-term AI stock.” You see, no matter how great the idea is, it will still need data and Oracle is the best. They can side with fast-talking salespeople at Azure and see their projects fumble and watch delay after delay happen. As those promising returns fall to ash, you can contemplate your choices. That being said, any AI idea is temporary at best; as such the investment in an Oracle engine seems a much better setting, and these people have been in data for decades. As such I see the value and the foundation of Oracle, even if some do not, or question the setting of Oracle.

I wonder how Pythia sees my predictions, and even as I am called ‘duly’ to serve Apollo (I serve Lord Hades in all things), the foundation of predictions is seemingly driven by personal insights, and I have been at the foundations of data going back to 1982, so I do feel I am on the right track.

Have a great day and don’t forget to chew your laurel leaves, whether you are about to enjoy a coffee or not. Oh, get your coffee quick, the US government shuts down in 7.5 hours.


Filed under Finance, IT, Media, Science

Focal points required

That is the setting I am having at 1 o’clock in the morning. The news (and the internet) is currently overloaded with Jimmy Kimmel stories as well as vindictive settings against Disney, and I get it. When the media who are trumpeting free speech become the bitch of President Trump, people will not take kindly to it. Apparently the subscription servers at Disney went down as they were overloaded with cancellations (according to some sources). So I had to look all over the place to find something to write about, and Tom’s Hardware was one source that supplied the goods. The story (at https://www.tomshardware.com/tech-industry/artificial-intelligence/microsoft-announces-worlds-most-powerful-ai-data-center-315-acre-site-to-house-hundreds-of-thousands-of-nvidia-gpus-and-enough-fiber-to-circle-the-earth-4-5-times) gives us ‘Microsoft announces ‘world’s most powerful’ AI data center — 315-acre site to house ‘hundreds of thousands’ of Nvidia GPUs and enough fiber to circle the Earth 4.5 times’ and even as I don’t care too much about what happens in Wisconsin (other than the need to protect cheeses, I really like cheese), the fact is that when I see an article with that many numbers, I start looking for the missing ones. I am wired that way, and I wonder whether that fibre claim even adds up.

But we got something; the setting is given with “This is likely a comparison to xAI’s Colossus, which uses over 200,000 GPUs and 300 megawatts of power. Microsoft didn’t specify its exact number of GPUs nor the expected power consumption.” And that is the ball game. You see, the setting of 300 MW is not just a lot, it is the entire ballgame. Now, there is evidently enough power in Wisconsin, but is it enough? Consider a simple PC. It has a 600 W power supply. Now this is not the same thing, but I am getting to that. Take 200 PCs and that makes 120,000 watts. Now consider that hundreds of PCs are needed to even partially validate the data coming into that place. You need data verification spots to do that. The larger setting could be done by data entry people, people who go over the received data, and they need to work quickly, almost uninterrupted. As such the quote “Microsoft didn’t specify its exact number of GPUs nor the expected power consumption” is, as I personally see it, massively deceptive. Just like the stage of Builder.ai, where Microsoft set it to over a billion dollars and in months that money was gone; they apparently spent it on under 200 programmers (test engineers) and that is merely the start of it. And when we talk about enough fibre to circle the planet 4.5 times, that is somewhere around 180,000 km of fibre; won’t that take any energy to light up? The numbers aren’t adding up, and even as Wisconsin has energy, there is every likelihood that they ‘suddenly’ end up with a shortage of it. Oh, what a damn shame. And the setting of any data centre is that in case of an energy shortage it all ends rather quickly; the moment the surplus hits zero the issues start and they will immediately escalate.
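To make the scale concrete, here is the back-of-envelope arithmetic behind the paragraph above; the 600 W per PC and the 200-PC verification pool are my own illustrative assumptions, not anything Microsoft published.

```python
# Back-of-envelope numbers only; the PC figures are illustrative assumptions.

pc_power_w = 600            # one desktop power supply
verification_pcs = 200      # hypothetical data-verification pool
site_power_w = 300e6        # the ~300 MW cited for xAI's Colossus as a comparison point

pool_power_w = pc_power_w * verification_pcs
print(f"verification pool: {pool_power_w:,.0f} W ({pool_power_w / 1e6:.2f} MW)")
print(f"share of a Colossus-class site: {pool_power_w / site_power_w:.3%}")

# Fibre sanity check: Earth's circumference is roughly 40,075 km,
# so '4.5 times around the planet' is on the order of 180,000 km of fibre,
# and every transceiver lighting it up draws power of its own.
earth_circumference_km = 40_075
print(f"4.5 laps of the planet is about {4.5 * earth_circumference_km:,.0f} km of fibre")
```

Two hundred validation PCs are a rounding error next to the GPU halls, which is exactly why leaving the power figure out of the announcement says so much.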

Further down that page we see the mention of Elon Musk: “Elon Musk confirms xAI is buying an overseas power plant and shipping the whole thing to the U.S. to power its new data center — 1 million AI GPUs and up to 2 Gigawatts of power under one roof, equivalent to powering 1.9 million homes”, well, good luck with that idea. I am not saying it is impossible, but getting that all placed in a new location still requires a lot of concrete, not to mention the resources to get the plant going. So what is it? Gas, oil, coal, uranium?

So what is fueling the Microsoft plant? And how much surplus energy will Wisconsin have left at that point? As I see it, there is a reason that Microsoft doesn’t give out the expected power consumption. And there are a few more items on that list, like validators (which could be done remotely); with hundreds of people calling into that centre, what carries the telecom load? All issues that would have to be tackled on day one.

As I see it, there is a lack of focal points, but as I see it, those who spin aren’t interested in that concept at all. Merely the flotation of the name in conjunction with “‘world’s most powerful’ AI data center”. Didn’t Microsoft do this once before? Oh yes, the most powerful console in the world. How did that end for the Xbox Series X? As far as I know it is trailing the weakest console (the Nintendo Switch) by a lot, and it is also trailing the PlayStation 5 a fair bit. So I am not keeping my hopes up when Microsoft is juggling the setting “World’s most powerful… anything”.

But then I have seen them play these cards for almost 40 years. And they could have taken advice from IBM on certain matters, like “This page is intentionally left blank”.

But that is just me.

The second setting is being pushed forward. I don’t want to write the wrong thing and there are a few missing cogs in that story, like the ‘new’ location of $4,300 billion in retirement funds. And no one is talking, so I have to dig.

Well, have a great day, time for Sunday to get its sun (in 4 hours), and consider looking around for freedom of speech; Disney seemingly can’t find it.


Filed under Finance, IT, Media, Science

The massive problem with AI

Yes, I have said it on several occasions: there is no AI, and whatever there is has verification issues. Today I illustrate this YET again, and in this case Google is as much to blame as many others.

So we have two images. The first one tells us that there are risks. I was taken aback a little; the UAE is one of the safest places on the planet. So I decided to ask the same question a little differently and added the term “in 2025”, and in the second image we see the initial feeling I had about the country confirmed. There is an abundance of articles showing the safety of the UAE (and Abu Dhabi), as such I want to kindly wake Sergey Brin the fuck up, and I am wondering whether he needs to address his Gemini settings a little. Perhaps the American tourism decline is altering the verification settings?

As such there is one little situation: the setting that whatever big tech calls AI cannot be trusted (which I already knew). It is verification that is up and about, and that is the major hurdle in whatever that (AI) is. We need to realise that there is no AI. There is DML (Deeper Machine Learning) and there are LLMs (Large Language Models) and they are awesome, but they depend on the programmers you throw at them and it is not foolproof; there are issues (as you can see).

This is not a large article. I have said it before and now, within 5 minutes, I had the setting I needed. I reckon that all of you will want to make your own ‘judgment’ on whatever these people call AI and whether it shows your local environment in a light you can check. And just for fun (I tend to be a whacky person) I am adding the ‘American tourism decline’ here too.

Just to set the premise, consider that this was given 4 weeks ago: “In June, Canadian residents returned from 2.1 million trips to the United States, representing a 28.7% decrease from the same month in 2024 and accounting for 70.8% of all trips abroad taken by Canadian residents in June 2025.” And the story here becomes verification. You see, who (or what) is feeding the AI models? When the data cannot be verified, how is the data conceived? Because this data is fed by someone, and by whom becomes the story, whilst the media (as a whole) become less and less reliable.

Have a great day, almost time for me to take a walk towards my brekky.


Filed under IT, Media, Politics, Tourism

Act of despair

That happens at times, and I reckon that at some point I will have to give in to that setting as well. It started this morning when I was advised that I might have cancer. It might be benign; the biopsy will be done over the next week, and then they know what they have. I was unusually cool about it all. And even as a friend of mine was ‘culled’ by the big C (a curry billboard shattered his skull), I can confirm that my weird sense of humor has not been devastatingly impacted at present.

So I have two ideas on my mind. The first one is that Peter Jackson (director of The Lord of the Rings) still owes me $17.50; he has owed me that amount since 1992. But the other one is the one that matters to me. For that we need a small sidestep towards the article that Fortune gave us (at https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/) where we see ‘MIT report: 95% of generative AI pilots at companies are failing’. It is here where we see “Despite the rush to integrate powerful new models, about 5% of AI pilot programs achieve rapid revenue acceleration; the vast majority stall, delivering little to no measurable impact on P&L. The research—based on 150 interviews with leaders, a survey of 350 employees, and an analysis of 300 public AI deployments—paints a clear divide between success stories and stalled projects.” The report is two weeks old, but today I had a reason to tag it; it affects my future and, as I see it, it impacts it in a positive way. As such the second quote doesn’t quite get us there, but there is an offset. It is seen in “for 95% of companies in the dataset, generative AI implementation is falling short. “The 95% failure rate for enterprise AI solutions represents the clearest manifestation of the GenAI Divide,” the report states. The core issue? Not the quality of the AI models, but the “learning gap” for both tools and organizations. While executives often blame regulation or model performance, MIT’s research points to flawed enterprise integration. Generic tools like ChatGPT excel for individuals because of their flexibility, but they stall in enterprise use since they don’t learn from or adapt to workflows, Challapally explained.” The part missing is data and verification. We can look for other articles where we see the failures of AI, but the largest setting is never discussed. What we call AI isn’t it; they mess around with “GenAI”, they package it like it is a new version of “generative AI”, but in the end it is merely DML with optional LLM in place. It is what I call “Near Intelligent Parsing”: parsing, because it works on existing data; it cannot leap to non-existing data, and what we see is basically little more than predictive analytics. It is a next step.

So why is this important?
Well, for me there is a side that has worked in technical support and customer care for nearly two decades. And as I see it, the quality people who need to act will see it. As such I think that Lawrence Ellison (Oracle) can see the light in what he is currently coping with. Large customers will need their technical support and their customer care, and here I am ‘sneakily’ asking him for 10 million (post taxation) out of his two hundred fifty thousand million (aka $250 billion) stockpile. Seems like the smallest of amounts. Oh, and I pride myself on being a return on investment, something I have proclaimed for the length of my working career, going all the way back to 1982. That is 43 years of experience (twenty in technical support) and I have none in Oracle. But I know the support settings that any company has, and Oracle will need these people soon enough. Wherever he wants to send me, it is almost fine by me. As I see it, no one wants to work in Russia and America is a big no-no (it’s a Trump card). But the UAE (ADNOC) and Saudi Arabia (ARAMCO) do make the list. And Oracle needs these large companies, and especially support staff in these locations. Personally the UAE wins, but it is what Oracle needs, and I am willing to move to Canberra at the earliest opportunity. We seem to be at an influx point where governments and large corporations need manpower. Microsoft and Amazon need to learn this, and whilst they falter, Microsoft is shedding 9,000 people and investing in AI; but when you consider that 95% falters, you can imagine what happens when these systems fall short, all whilst, at the same time, Windows seemingly lost 400 million users in the past three years. Do you think this is coincidence? Yes, they can clean some of it up with NIP, but they will be filling larger holes in the meantime, losing people in the process. Google and Amazon are on that same setting. But Oracle is too complex. As I see it, it needs staff in the near future and I am betting that it cannot afford to lose the manpower; and I am willing to bet that as they take over clients from AWS and Azure (the latter especially) they will need more people, and that’s where I come in. Not merely as tech support staff, but as a trainer with my own brand of training people; I am willing to bet that Oracle might have a place for me (even a flake like me).

I have always stood my ground in this, and after a long time I am proven correct; the next generation is largely unable to deal with the support pressures, and that works for me in places like ADNOC. So I believe that Oracle might be my solution towards a few settings that never worked for me. And there is something less likeable about being forced to hand my IP to Microsoft whilst receiving a mere 0.001 on the dollar. I might give it away in other ways (to others) if Oracle turns out to be my ‘knight on a white horse’, and there is something satisfying in that setting. I get to see Microsoft lose thrice over.

As such, those with an affinity for technical support need to consider the places they can flock to. I gave some of my IP to Elon Musk (Musk already owned the ideas anyway), and I keep on fueling gaming IP to other channels too (non-Microsoft systems), and there the Amazon Luna has options as well. Still, the news from this morning (even as it doesn’t hit me hard) made me see that I have to put my affairs in order, and one of them is to deny Microsoft my IP.

And there is a second setting: as Google and Microsoft are shedding people, the larger companies need to scoop them up quickly, because internationally these people will be wanted rather quickly. For Americans there is Canada as a first stop, but do you think they will spread their wings to other nations? Time will tell, but as I see it 2025/2026 will be the years where we all consider the stage of the brain drain. Take that together with faltering AI projects and places suddenly short on tech support will falter massively, and as we know: “no support, no sales”. A nice catchphrase, but their AI will tell them that at some point (one might hope).

So have a great day, and I will ponder what will become of me if the biopsy doesn’t show a benign setting.


Filed under Finance, IT, Science

Microsoft in the middle

Well, that is the setting we are given; however, it is time to give them some relief. It isn’t just Microsoft: Google and all the other peddlers handing over AI like it is a decent brand are involved. So the BBC article (at https://www.bbc.com/news/articles/c24zdel5j18o) giving us ‘Microsoft boss troubled by rise in reports of ‘AI psychosis’’ is a little warped. First things first: what is psychosis? Psychosis is a setting where we are given “Psychosis refers to a collection of symptoms that affect the mind, where there has been some loss of contact with reality. During an episode of psychosis, a person’s thoughts and perceptions are disrupted and they may have difficulty recognizing what is real and what is not.” Basically the settings most influencers like to live by. Many do this already, for the record. The media does this too.

As such people are losing their grip on reality. So as we see the malleable setting that what we see is not real, we get the next setting. As people lived by the rule of “I’ll believe it when I see it” for decades, this becomes a shifty setting. So whilst people want to ‘blame’ Microsoft for this, as I see it, the use of NIP (Near Intelligent Parsing) is getting a larger setting. Adobe, Google, Amazon: they are all equally guilty.

So we wonder: how far will the media take this?

I’ll say, this far.

But back to the article. The article also gives us “In a series of posts on X, he wrote that “seemingly conscious AI” – AI tools which give the appearance of being sentient – are keeping him “awake at night” and said they have societal impact even though the technology is not conscious in any human definition of the term.” I respond that if you give any IT technology a level 8 question (user level) and it responds as if the answer is casually true, it isn’t. It comes from my mindset that states: if sarcasm bounces back, it becomes irony.

So whilst we see that setting in ““There’s zero evidence of AI consciousness today. But if people just perceive it as conscious, they will believe that perception as reality,” he wrote. Related to this is the rise of a new condition called “AI psychosis”: a non-clinical term describing incidents where people increasingly rely on AI chatbots such as ChatGPT, Claude and Grok and then become convinced that something imaginary has become real.” It is kinda true, but the most imaginative use of Grok tends to be…

I reckon we are safe for a few more years. And whilst we pore over the essentials of TRUE AI, we tend to have at least two decades, and even then only the really big players can afford it; as such there is a chance the first REAL AI will respond with “我們可以為您提供什麼協助?” (“How may we assist you?”). As I see it, we are safe for the rest of my life.

So whilst we consider “Hugh, from Scotland, says he became convinced that he was about to become a multi-millionaire after turning to ChatGPT to help him prepare for what he felt was wrongful dismissal by a former employer.” Consider that law firms and most advocates give free initial advice; they want to ascertain whether it pays for them to go that way. So whilst we are given that it doesn’t pay, a real barrister will see whether this is without merit, trivial or too hard to prove. And he will give you that answer. And that is the reality of things. Considering ChatGPT to be any kind of solution makes you eligible for the Darwin award. It is harsh, but that is the setting we are now in. It is the reality of things that matter and that is not on any of these handlers of AI (as they call it). And I have written about AI several times, so if it didn’t stick, it’s on you.

Have a great day and don’t let the rain bother you; just fire whoever in the media told you it was gonna rain and get a better result.


Filed under IT, Media, Science

A speculative nightmare for some

That is the setting I just ‘woke’ up from. A fair warning that this is all PURE speculation. There are no hidden traps, there is no revelation at the end. All this is speculation. 

You see, some will recall the Builder.ai setting, and there we see “Builder.ai was a smartphone application development company which claimed to use AI to massively speed up app development. The company was based mostly in the United Kingdom and the United States, with smaller subsidiaries in Singapore and India.” At this time we are given “The real catalyst wasn’t technical failure — it was financial mismanagement. According to reports, Builder.ai was involved in a round-trip billing scheme with one of its partners. Essentially, they were allegedly booking fake revenue to make the business look healthier than it was.” And the fact that Microsoft was duped here makes it hilarious. But was it? You see, as I see it AI doesn’t exist (not yet at least), so this setting didn’t make sense; it still doesn’t. Apart from the fact that there were 700 engineers involved (which made the setting weird to say the least), it was set in a larger space. But what if there was no ‘loss’ for Microsoft? What if Builder.ai did exactly what was required of them? When I got that thought, another popped up. What if this setting was a mere pilot? You see, there are data issues (all over the place) and Microsoft knows this. What if these 700 engineers were setting the larger premise? What if this is the premise that Sam Altman needs? What if this is the enablement between Sam Altman and Satya Nadella and their needs? What if that setting isn’t merely data, but programmers? What if OpenAI is capturing all the work created by programmers? You see, data can be collected, but capturing the work of programmers is a little different, and OpenAI gets, at present, “OpenAI is set to hit 700 million weekly active users for ChatGPT this week”. As far as I can tell 90% of that is simple rubbish, but the other 10% are setting their fingerprints on the programming of the future. And whilst this is going on, the ChatGPT funnels are working overtime. As such these programmers are pushing themselves out of a job (well, not exactly; they still have jobs in several places), but the winners here are team Altman/Nadella. They are about to clean house, and when the bulk of the programmers’ work is captured, automated program settings are realised. It isn’t AI, but people will treat it as much. And this setting is really brilliant. We all contributed to a new version of Near Intelligent Parsing, one that has the frontlines of the crowds, millions of them. And no one is the wiser as such.

Perhaps some are and they do not care. They will have their own positions on all this, and the setting will regurgitate their logic, and as such they will be the cash makers in the house. So, we are pricing ourselves out of a job, out of many jobs. But as I said, this is merely speculative and I have no evidence of any kind. Yet this is the setting I see coming.

Now, let’s see if I can dream lovely dreams involving a lovely lady, not a Grok-imagined lady of the night. You know what I mean, Twitter is filled with them at present.

Have a great day, it’s 5:00 in the morning in Vancouver, I’m almost seeing Monday morning, less than 2 hours to go.


Filed under Finance, IT, Media, Science

Inspiring the young

That is the setting we need to move towards, and that moment will be now. It started with a simple setting: the map of Europe, and the alleged accusation is seen below.

I cannot vouch for the setting, but as you see, in most languages it makes little sense. So when any AI fumbles the ball (and that WILL happen) the damage will be a lot bigger. We hear all these ‘delusional denials’ like ‘we will prepare for that’ and ‘it can’t happen to us’; you merely need to look at the Builder.ai setting and how they used 700 engineers to allegedly ‘fool’ Microsoft, who backed it to the extent of a billion dollars plus. So when the ‘bigger’ players also get caught with their pants around their ankles we will have a totally new setting. As such I thought of going back to the roots of technology, optionally as an educational setting, an optional simulator to inspire the young to think and become creative for themselves without any AI system fumbling their thinking patterns. It might not be the most eloquent setting, but creativity cannot be set in AI, as AI doesn’t exist (yet), and before it is too late we need to create other outlets for creativity to emerge. I still like the setting that Ubisoft gave us with Assassin’s Creed Origins. In one of the expansions you are taken on the tour ‘Beer & Bread’. It shows that the Egyptians ‘perfected’ the fermentation process. In my youth (a very long time ago) I went to the Open-air Museum in Arnhem (Netherlands) and this one building still reverberates in my mind over half a century later. It was a paper mill.

On the outside it doesn’t seem like much, a lot like a really old building, but that is the hidden part. Inside there is a completely operational paper mill and it is fueled by waterpower. Now you might think that this is too old. 

But consider that Nobel invented dynamite for the simple needs of mining; apparently Viagra had a completely different stage too. It takes one mind to think “What if we did this?” and that is the ball game. That is the setting that creates new technologies. We need to get back to the old ways, and I use the paper mill as an example. Consider the Amish (all over America) who have been doing it their way for centuries. Consider how they have no fridges, or non-electrical ones. We need to reconsider what we know and what is possible without some idiot telling us how to do it, because these people will come out of the woodwork pretending to voice the deities they pretend to follow (for their personal good).

Consider that paper mill and what to do when the water stops flowing. A windmill? Giving people the idea to take the next step. And at some point power will become an issue. We now see new ways to tarmac roads, making them safer; the Netherlands is exploring illuminating forms of tarmac, making electricity less of an essential need. We see all kinds of innovations, and just as you think it is all covered, consider that in Australia a lawyer ‘relied’ on ChatGPT (as one source stated) to phrase the law and it used non-existent cases. So how do you like your chestnuts boiled in that gravy?

The one option is to revert to earlier settings and consider what is possible without others telling us what to do. A lot will not work, but some will be true innovative steps. And that is the ballgame. As what some call AI is telling us where to go, and especially where not to go, we lose the creativity we have, or merely fashion it in the way others want it to be fashioned.

That is not innovation, that is pack mentality. 

So what stages in other fields were cut short, because they never supported the innovative choosers of their day? We need to protect ourselves, and the evidence is all over the historical buildings. The Romans had two-tiered bathhouses making hot water. So even as we now think that we do better, consider what happens when electricity falls away because 500,000 systems took it away, fueling AI systems that take over 250,000 times more energy than one simple brain does.

We need to protect what is and what was before others remove that way of thinking from us, and we can go about it in different ways; I reckon that none of them are incorrect. Another example can be seen in the old pyramids. We were given (on YouTube) “Ancient Egyptian “pyramid basalt roads” refer to a network of paved roads, including the world’s oldest known paved road, that connected basalt quarries in the Fayum region to the pyramid fields like Giza. These roads, often paved with sandstone, limestone, and even petrified wood, were used to transport massive basalt blocks, likely for paving the pyramid complexes and temples. One significant road, leading from the Widan el-Faras quarry to the shores of a now-vanished lake, represents a major engineering feat from the Old Kingdom period.” I don’t believe the hype behind it, but these roads and pavements are massive undertakings that even today would be unlikely to be this perfect, apart from the setting that they seemingly lacked the tools to create these slabs and make them fit this perfectly. I am not fully on board with all of this, but like the Game of Thrones ‘wildfire’ we see that this reflects what was Greek Fire, and it came from Byzantium. “With the decline of the Byzantine Empire, their recipe for the production of liquid fire was lost, the last documented use of Byzantine fire was in 1187. After Constantinople fell to the Ottomans, several attempts to imitate the Greek Fire were made, but none replicated the original.” So something created 1,000 years ago can no longer be reproduced? I reckon that this is one of the most direct forms of creativity lost. And the fact that it has military applications implies that plenty of governments tried to get it on their side.

As such I think we need to create genuine systems to invoke creativity in the next generation before it is all lost and we all go ‘Duh!’ at the next innovation, blaming it on magic. And as Vernon Dursley once said, “there is no such thing as magic”; as I see it, magic is blamed when we no longer comprehend the technology (like the White House and 5G technology, which comes with a small giggle from me).

So the short setting is: protect the next generation now, as there is no longer any later.

Have a great day.


Filed under Gaming, IT, Media, Science, Tourism

By German standards

That is at times the saying; it isn’t always ‘meant’ in a positive light, and it is for you to decide what it is now. Deutsche Welle gave me an article yesterday that made me pause. It was in part what I have been saying all along. This doesn’t mean it is therefore true, but I feel that the tone of the article matches my settings. The article (at https://www.dw.com/en/german-police-expands-use-of-palantir-surveillance-software/a-73497117) giving us ‘German police expands use of Palantir surveillance software’ doesn’t seem too interesting for anyone but the local population in Germany. But that would be erroneous. You see, if this works in Germany, other nations will be eager to step in. I reckon that the Dutch police might be hoping to get involved from the earliest notion. The British and a few others will see the benefit. Yet what am I referring to?

It sounds like there is more, and there is. The article’s subheading gives us the goods. The quote is “Police and spy agencies are keen to combat criminality and terrorism with artificial intelligence. But critics say the CIA-funded Palantir surveillance software enables “predictive policing.”” It is the second part that matters. “Predictive policing” is the term used here and it supports my thoughts from the very beginning (at least 2 years ago). You see, AI doesn’t exist. What there is (DML and LLM) are tools, really good tools, but it isn’t AI. And it is the setting of ‘predictive’ that takes the cake. You see, at present AI cannot make real jumps, cannot think things through. It is ‘hindered’ by the data it has, and that is why at present its track record is not that great. And the examples are all out there: there is the famous Australian case where an “Australian lawyer caught using ChatGPT filed court documents referencing ‘non-existent’ cases”, there is the simple setting where an actor was claimed to have been in a movie before he was born, and the list goes on. You see, AI is novel, new, and players can use AI towards the blame game. With DML the blame goes to the programmer. And as I personally see it, “predictive policing” is the simple setting that any reference is only made to what has already happened. In layman’s terms: take a bank robber trained in grand theft auto; the AI will not see him, as he has never robbed a bank before. The AI goes looking in the wrong corner of the database and it will not find anything. It is likely he can only get away with this once, and the AI in the meantime will accuse any GTA persona that fits the description.
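To make that ‘wrong corner of the database’ point a little more concrete, here is a toy sketch with entirely invented data; it is not Palantir’s method, merely the simplest possible caricature of scoring against recorded priors.

```python
# A toy illustration only, with invented data: a model that scores purely
# against recorded priors cannot flag a first-time offender, however obvious
# the risk might be to a human investigator.

prior_offences = {
    "suspect-17": ["vehicle theft", "vehicle theft"],
    "suspect-42": ["burglary"],
}

def predicted_risk(suspect_id: str, offence: str) -> int:
    """The 'prediction' is just a count of matching priors: no priors, no score."""
    return prior_offences.get(suspect_id, []).count(offence)

print(predicted_risk("suspect-17", "bank robbery"))   # 0: the trained car thief is invisible here
print(predicted_risk("suspect-17", "vehicle theft"))  # 2: so the model keeps looking in this corner
```

No priors, no prediction; plenty of priors, plenty of accusations.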

So why this?
The simple truth is that the Palantir solution will save resources, and that is in play. Police forces all over Europe are stretched thin and they (almost desperately) need this solution. It comes with a hidden setting that all data requires verification. DW also gives us “The hacker association Chaos Computer Club supports the constitutional complaint against Bavaria. Its spokesperson, Constanze Kurz, spoke of a “Palantir dragnet investigation” in which police were linking separately stored data for very different purposes than those originally intended.” I cannot disagree (mainly because I don’t know enough) but it seems correct. This doesn’t mean that it is wrong, but there are issues with verification and with the stage of how the data was acquired. Acquired data doesn’t mean wrong data, but it does leave the user with potentially wrong connections between what the data shows and what that view is based on. This requires a little explanation.

Let’s take two examples.
In example one we have a people database and phone records. They can be matched so that we have links.

Here we have a customer database. It is a cumulative phonebook: all the numbers from when Herr Gothenburg got his fixed-line connection with the first phone provider until today. As such we have multiple entries for every person, and in addition there is the second setting that their mobiles are also registered. So the first person moved at some point and he either has two mobiles, or he changed mobile provider. The second person has two entries (seemingly all the same) and that person moved to another address, and as such he got a new fixed line and he has one mobile. It seems straightforward, but there is a snag (there always is). The snag is that entry errors are made and there is no real verification; this is implied with customer 2. The other option is that this was a woman and she got married, as such she had a name change and that is not shown here. The additional issue is that Müller (miller) is a surname shared by around 700,000 people in Germany. So there is a likelihood that wrongly matched names are found in that database. The larger issue is that these lists are mainly ‘human’ checked and as such they will have errors. Something as simple as a phonebook will have its issues.

Then we get the second database, which is a list of fixed-line connections, the place where they are connected and the provider. So we get additional errors introduced: for example, customer 2 is seemingly assumed to be a woman who got married and had her name changed. When was that? In addition there is a location change, something the first database does not support, as well as the fact that she changed her fixed line to another provider. So we have 5 issues in this small list, and this is merely from 8 connected records. Now, DML can be programmed to see through most of this and that is fine; DML is awesome. But consider what some call AI being run on unverified (read: error-prone) records. It becomes a mess really fast, it will lead to wrong connections, and optionally innocent people will suddenly get a request to ‘correct’ what was never correctly interpreted in the first place. A small sketch of how quickly that goes wrong follows below.
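Here is that small sketch, with every name, number and address invented; it is not how Palantir (or anyone else) actually links records, merely the simplest caricature of joining unverified lists.

```python
# A hypothetical sketch of the matching problem described above. All records
# are invented; the point is only that naive joins on unverified data produce
# both false links and missed links.

people = [
    {"name": "Müller, A.",  "address": "Berliner Str. 1", "fixed_line": "030-111111"},
    {"name": "Mueller, A.", "address": "Hauptstr. 9",     "fixed_line": "030-222222"},  # same person after a move, or a different Müller?
    {"name": "Schmidt, B.", "address": "Ringweg 4",       "fixed_line": "040-333333"},
]

line_register = [
    {"fixed_line": "030-111111", "provider": "Provider-1", "holder": "Müller, A."},
    {"fixed_line": "030-222222", "provider": "Provider-2", "holder": "Müller, Anna"},   # maiden/married name entered differently
    {"fixed_line": "040-333333", "provider": "Provider-1", "holder": "Schmitd, B."},    # a simple typing error in the register
]

def normalise(surname: str) -> str:
    """A crude normalisation: lower-case and fold 'ue' into 'ü'."""
    return surname.lower().replace("ue", "ü")

# Naive link: join on surname only, exactly the shortcut a rushed model might take.
for person in people:
    surname = normalise(person["name"].split(",")[0])
    matches = [r["fixed_line"] for r in line_register if surname in normalise(r["holder"])]
    print(person["name"], "->", matches)

# 'Schmidt, B.' finds nothing because of the register typo (a missed link),
# while both Müller rows match both Müller lines, so the model cannot tell
# whether this is one person or two (false links).
```

One typo kills a true link and one common surname multiplies the false ones, which is exactly the verification problem described above.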

As such we get a darker taint of “predictive policing”, and the term that will come to all is “guilty until proven innocent”, a term we never accepted and one that comes with hidden flaws all over the field. Constanze Kurz makes a few additional points, points which I can understand, but which are also hindered by my lack of localised knowledge. In addition we are given “One of these was the attack on the Israeli consulate in Munich in September 2024. The deputy chairman of the Police Union, Alexander Poitz, explained that automated data analysis made it possible to identify certain perpetrators’ movements and provide officers with accurate conclusions about their planned actions.” It is possible and likely that this happens, and there are settings where this will aid, optionally a lot quicker than not using Palantir. And Palantir can crunch data 24/7; that is the hidden gem in this. I personally fear that unless an emphasis on verification is made, the danger becomes that this solution turns a lot less reliable. On the other hand data can be crunched whilst the police force is snoring the darkness away and they get a fresh start with results in their inbox. There is no doubt that this is the gain for the local police force and that is good (to some degree), as long as everyone accepts and realizes that “predictive policing” comes with soft spots and unverifiable problems. And I am merely looking at the easiest setting. Add car rental data with errors from handwriting and you have a much larger problem. Add the risk of a stolen or forged driver’s licence and “predictive policing” becomes the Achilles heel that the police weren’t ready for, and with that this solution will give the wrong connections, or worse, not give any connection at all. Still, Palantir is likely to be a solution, if it is properly aligned with its strengths and weaknesses. As I personally see it, this is one setting where the SWOT approach applies. Strengths, Weaknesses, Opportunities and Threats are the settings any Palantir solution needs, and as I personally see it, Weaknesses and Threats require their own scenarios in the assessment. Politicians are likely to focus on Strengths and Opportunities and diminish the danger that the other two elements bring. Even as DW gives us “an appeal for politicians to stop the use of the software in Germany was signed by more than 264,000 people within a week, as of July 30.” Yet if 225,000 of these signatures are ‘career criminals’, Germany is nowhere at present.

Have a great day. People in Vancouver are starting their Tuesday breakfast and I am now a mere 25 minutes from Wednesday.


Filed under IT, Law, Media, Politics, Science

Saudization

A term I got introduced to last week. It stands for “the Saudi nationalization scheme and also known as Nitaqat, is a policy that is implemented in the Kingdom of Saudi Arabia by the Ministry of Labor and Social Development, which requires companies and enterprises to fill their workforce with Saudi nationals up to certain levels”. I think it is a great idea. I think more countries need to embrace such a scheme, for a few reasons. I believe it is essential that skills are moved locally to avoid the massive risk of depending on an American need, and that dependence is a bad idea on a few levels. Now, this is not an anti-America sentiment, but the media (America too) have left us with the notion that we cannot be certain of almost anything, and there is the larger setting that it goes for other countries too. Perhaps there is an Emiratization, an optional Indonesization (these two words might not exist) and several others (Pakistan, Bangladesh) and so on. So why is there not an open video channel, with options on both YouTube and TikTok, handing on these skills? If I merely apply this to myself: there is the option to train people (non-Arabic) in IBM SPSS Statistics (formerly known simply as SPSS); I trained people in it for over a decade and that is a skill that can be taught. Edit the movie with a localised soundtrack and you have a solution to optionally train dozens of people.

If we create a few hundred videos we could optionally train a whole legion of people, and as the elder generation (including me) could leave a footprint handing this knowledge out to others, we continue training people after we are gone. I also worked in call centers, and whilst the world is filled with silliness and chases after AI, the skills that are out there will be lost soon enough. As such we (read: some) need to create the stages for the next generation. Whilst all are on the AI train we might see a setting of dwindling sources, and in a decade, when AI misses its target, the world will suddenly see that it lost more than it bargained for. As such a video station that allows Saudization to reach the people who cannot yet see what they need, and lets them freely learn to grow their own future, is a proper way to harvest talents where they freely grow.

So you might think that this comes for free, and that might be the case. Yet the older generation feels that it can contribute to any setting that will listen. As such these skills will require verification so that quality will prevail. Yet is it such a hardship on the older generation? They contribute to all kinds of non-profit organisations. Is it so hard to believe that they would assist in creating the future generations? The world is not what big corporations believe it to be, it is what the next generation wants it to be, and as such this idea stands a chance. In the setting we see now it might benefit Saudi Arabia. Yet when these movies get a larger setting in Pakistan, Bangladesh, Uruguay and other places, we grow the knowledge in all kinds of directions, and as it should be offered free, knowledge will empower all people, not just the ones who can afford it.

It is just a little idea I am playing with, but I reckon that some governments will embrace what hundreds of people could contribute to their national causes.

Have a great day


Filed under Politics, Science