Tag Archives: technology

In my mind

This is not some setting from “because I say so”, but it is a setting that I expect. To see what trinary systems (some call them ternary systems) can do ‘because of’ “Potentially higher speed and efficiency, allowing for less storage space per bit and more compact circuitry. Balanced ternary handles negative numbers natively.” The IT world is pushing back on the setting (because it knocks them off their throne) with “Difficult to design, higher power consumption in some implementations, and a lack of mature research compared to binary.” They are not wrong, but trinary is tomorrow and it is set to actual and factual true Artificial Intelligence. As such, in my mind the system created 1,000,000 possible culprits, but to identify this (with much in the middle of the data) we see a cube with 100 layers of 100 by 100 people. Each person has over 100 elements and that is still a decent data setting. The binary solution gives us 4 reds (highly likely culprits), two dozen orange (people who are not to be dismissed as suspects at present) and the rest is cleared; it took the binary solution 47 seconds. Then in comes the trinary solution and it gives us two reds and 5 orange, and it does so in 6 seconds. That is the edge that trinary gives us over binary on 1,000,000 people. So when (let’s for argument’s sake say Oracle) gives the people the impact of that and the gain in computational power, as well as the higher information density and theoretical efficiency, the sales talk is done at that point. And consider the amounts of data sources have; we can say that at that point binary solutions are done for in a world where time matters and where efficiency is the goal. You should not dismiss Fake AI that easily, because some people cannot afford trinary solutions before 2040-2050. But that setting of computational power is not to be dismissed, no matter what the binary tycoons claim. 
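The density claim in that quote can be made concrete. As a rough sketch (my own illustration, not any vendor's design), balanced ternary encodes integers with the digits 1, 0 and -1, so negative numbers need no sign bit and each trit carries log2(3), roughly 1.585 bits:

```python
def to_balanced_ternary(n: int) -> str:
    """Encode an integer with digits 1, 0 and T (for -1); no sign bit needed."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        r = n % 3
        if r == 0:
            digits.append("0")
        elif r == 1:
            digits.append("1")
            n -= 1
        else:            # remainder 2 acts as -1 with a carry upward
            digits.append("T")
            n += 1
        n //= 3
    return "".join(reversed(digits))

def from_balanced_ternary(s: str) -> int:
    value = 0
    for ch in s:
        value = value * 3 + {"1": 1, "0": 0, "T": -1}[ch]
    return value

# 1,000,000 states need 20 bits (2**20 = 1,048,576) but only 13 trits
# (3**13 = 1,594,323); negatives come for free: -8 encodes as "T01".
```

The carry trick (treating a remainder of 2 as -1) is the whole point: the same machinery that counts upward represents negative numbers natively, which is the property the quote refers to.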
So in 6 seconds, the 19 non-dismissible people were disregarded on the foundation of the SAME data, because that was part of the exercise. And I reckon that shallow circuits will be a much stronger solution in a trinary setting than they ever could be in a binary setting. Don’t get me wrong, it will help heaps. 1 million people with over 100 elements is still 100,000,000 settings in a true/false environment. This is why I disregard (at present) all AI, simply as fake AI. And for the people stating this is merely in my mind: you are right, and fortunately I had an education from UTS and a degree in internetworking. So we all have had that setting of data and non-repudiation. And don’t forget, in a trinary setting non-repudiation is more than a simple equation. It will figure out that you and only you could have done something like that. This is why I valued Oracle (and optionally Snowflake) above all the others; by the time you are done listening to the salespeople from Azure stating that this is the way to go, you are hooked and that is where you lose the fight. And when Oracle sets up whatever they call their trinary database system, there will be a population of one in the forefront of real AI, and those who were ‘enticed’ by the sales talk of others, because those salespeople don’t care about you, they care about their own product and they are set to do the best that their solution can do for you. Here language and legal settings matter, because they never outspokenly lie, they merely omit factors that they regard don’t concern you. Even Google Gemini gives us “ternary remains limited by manufacturing complexity and lower reliability.” Everyone who knows me knows that I am a huge Google fan, so where did Gemini get that data? (simple: Reddit) and it gives the source, but how was it verified and validated? And at present it is a true setting, but if you realise that this technology is still well over a decade away (at best), are they lying? 
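The ‘middle of the data’ that a trinary setting keeps can be sketched with three-valued (Kleene) logic, where an unknown value propagates through AND/OR instead of being forced to true or false. A toy illustration of mine, not any vendor's implementation:

```python
def t_and(a, b):
    """Kleene strong three-valued AND; None stands for 'unknown'."""
    if a is False or b is False:
        return False           # one definite False decides the outcome
    if a is True and b is True:
        return True
    return None                # the unknown stays in the middle of the data

def t_or(a, b):
    """Kleene strong three-valued OR."""
    if a is True or b is True:
        return True
    if a is False and b is False:
        return False
    return None

def t_not(a):
    return None if a is None else not a
```

A binary system must collapse every unknown to True or False up front; the three-valued form only resolves when the evidence actually decides it, which is one way to read the red/orange/cleared split above.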
You need to see the bigger picture, especially when these vendors throw phrases like “AI” around, and when people are cluing up that it is all Fake AI, we will see carefully phrased denials like “they were all doing it, we just followed them” and that is where you see that these proclaimers are merely following one another. What a tangled web we weave. 

Still I reckon that Snowflake and Oracle will have transference systems in development, because I am not the ‘genius of one innovator’; others have similar settings in mind and they are preparing to give their customers the best that is possible with the current technology in place. 

So as we are looking at a day of rest (or like me slaughtering people in Skyrim), we need to consider the media frenzy that is evolving around us and be very careful what you accept as true. Even my statements should be examined. The one stating “My data is without flaw” is the liar in your inner circle. And be careful who you let into your inner circle because that is your decision and it will cost you the moment you allow the wrong person in your midst. 

Have a great day. So don’t think of this ‘article or story’ as valid, it is a collection of thoughts that are mine and even as I presume that it is all factual, it remains a story unless I can verify and validate the data I have and some of this was collected through fake AI, so I know there are parts that are not aligning. 

Leave a comment

Filed under IT, Media, Science

The idea that it gave me

I saw a Vodafone advertisement today and it suddenly gave me an idea that (preferably) Google could use to create a novel setting. 

Now the price of it doesn’t matter, but it is the idea of a MicroSD card with an embedded SIM. The SIM will be the encryption setting and optionally it could be used to read the SIM settings to create an eSIM. A MicroSD card that encrypts whatever is put on it, only readable on THAT phone. There should be an option to decrypt it to another source, but it is up to the user to do that. An enhanced encryption setting from the moment it is inserted, and as that SIM is not ‘connected’, the hacker will have their hands full trying to get to the goods. There is no doubt some will get through it, but as I see it hackers are set in 4 levels. 
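As a toy sketch of the idea, stdlib only and entirely my own illustration: the ‘SIM secret’ below is a stand-in for whatever key the embedded SIM would hold, and a real product would use a hardware-backed keystore with an AEAD cipher such as AES-GCM rather than this homemade scheme. The point is that data sealed under one device's secret fails authentication under any other:

```python
import hashlib
import hmac
import os

def derive_key(sim_secret: bytes, salt: bytes) -> bytes:
    # Bind the key to the (hypothetical) embedded-SIM secret; PBKDF2 stands
    # in for whatever derivation the SIM hardware would actually perform.
    return hashlib.pbkdf2_hmac("sha256", sim_secret, salt, 100_000)

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Simple SHA-256 counter-mode keystream, for illustration only.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(sim_secret: bytes, plaintext: bytes) -> bytes:
    salt, nonce = os.urandom(16), os.urandom(16)
    key = derive_key(sim_secret, salt)
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, "sha256").digest()
    return salt + nonce + tag + ct

def decrypt(sim_secret: bytes, blob: bytes) -> bytes:
    salt, nonce, tag, ct = blob[:16], blob[16:32], blob[32:64], blob[64:]
    key = derive_key(sim_secret, salt)
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, "sha256").digest()):
        raise ValueError("wrong device: SIM secret does not match")
    return bytes(c ^ k for c, k in zip(ct, _keystream(key, nonce, len(ct))))
```

Because the HMAC tag is checked before anything is decrypted, a card moved to another phone (a different SIM secret) simply refuses to open, which is the ‘only readable on THAT phone’ behaviour described above.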

Government is the top level; they tend to ‘resolve’ whatever encryption is thrown at them, and they are also on a ‘boy-scout’ level like the NSA, GCHQ and so forth. And as I see it 99.99% of people are ‘safe’ from them. These players have no interest in you; if they do, you are likely a very naughty individual. Then we get the cream of the crop in hackers. They exist for all kinds of reasons and they are highly paid; the question is, what do you have for them to take an interest? I set them in orange, because this solution might mostly stop them, but equally they are the likely second to break through this encryption (or circumvent it). The others are the average and dodo hackers; they are out of play, and if they succeed in getting through your encryption, you did something wrong. There is no real number of hackers, but if we set this to 250,000 hackers then the top two tiers are basically set to a sculpted maximum of 2,500, and 2,450 of those have too much work and usually no interest in you. As such this solution might be an option for the next Google Pixel (11 or 12) encryption that also sets a larger stage of non-repudiation, and as I see it, with all the fake AI out there, these settings require a much harder playing field and that might be through SIMmed SD cards. 

And as an Android fan, I think Google should entertain that thought for their next Pixel. To be honest, other Android players might consider this and as I see it Huawei might also take a gander at this, because this solution could benefit their Matebook as I see it. 

So, this might be a little tech off-field and seen as useless to these captains of industry, but I had a thought and I put it on my blog and perhaps others will see this as an idea that merits consideration. 

All is fair in love and technical innovation and this was my brainwave today. I wonder what else I can think of this weekend. I can’t wait. Have a great day, almost time for lunch for me.

Leave a comment

Filed under IT, Media, Science

They have what?

Yes, that is the news we got mere hours ago: ‘Aramco and stc to deploy supercomputer in Saudi Arabia’. These puppies do not grow on trees and there aren’t many of them; it’s almost the same as a country being added to the nuclear arsenal. A supercomputer is a big deal and in this case it will increase the computing abilities by over 700%, that is a lot, and Aramco is seemingly sharing that ability with stc (Saudi Telecom Company). It isn’t entirely unexpected, as we were told that this would happen at the end of March, where we saw “solutions by stc had signed a SAR 1.4 billion (~ US$ 372 million) agreement with Saudi Aramco to provide advanced high-performance computing (HPC) infrastructure to support operations in the energy exploration and production sector.” And here we see that the Kingdom of Saudi Arabia is taking data exploration in the energy sector very seriously; growth of this sector could turn this US$ 372 million investment into a return of billions annually. As the expression goes, it will have an interesting return on investment and I reckon that this also goes for the Saudi telecom sector, and this could assist the Kingdom in all manners from large to small. It is the benefit of having your own supercomputer and it is apparently not the first one, they already have 7; as such in the ‘rankings’ of these bad boys the Kingdom would increase to a 12th position on the global ranking list. They won’t outdo the United States, which allegedly has 171 of these data devourers, but that is still a standing that will help Saudi Arabia to crunch a whole range of numbers, and I reckon that it is one of the very few in the energy sector. As such they will likely have a massive advantage, because as they have had a stable partnership with IBM, they will soon have the means to crunch decades of data in mere minutes. It also beckons the thought of what benefits it could bring to stc, as data mining in the telecom groups is pretty novel. 
Yes, we get that telecom groups (globally) use supercomputers to see how their investment holds, but there aren’t many that have direct access to one. The top500 list doesn’t specify what or how they are used, but with Saudi Arabia soon in 12th position, they likely have a few options they did not have before, and to get the output of data crunches in no more than minutes is the beginning of a few settings that have strategic benefits. As I see it, their exploration of a Muslim customer base in the surrounding African nations will reap benefits for stc as well. To get the output of ‘What can we do now’ not set in weeks, or even days, but in mere hours (creating the dashboard is likely the most intense part here) is not to be overlooked. I reckon that overseeing the refinery benefits now for Aramco will be the first expected setting, because that is where a mere 4 billion per percentage increase is seen, and that system (aka doohickey) will enable this with all the data it has access to in mere minutes. So, the upcoming OPEC Monthly Reports should no later than December 14th this year be showing us all a nice upgrade of the abilities of Aramco. An advantage like that will stir the emotions of places like Wall Street nicely and whilst some will trivialize what this will contain, the setting of decades of IBM data and the computer power that is added leave me with no worry of what Aramco could be achieving in 2026. 

Have a great day, it is Tuesday for me now, so enjoy whatever day you are in (only New Zealand is ahead of me in time). 

Leave a comment

Filed under Finance, IT, Media, Science

By the numbers

There is an old ‘saying’, it comes from the late 70’s and it goes a little like this:

In the 50 years that followed we learned that the first option might be the prettiest, but you still end up with a working company. The second one is still an issue, but the third one is still under consideration, especially with the presumed setting of AI (or as I call it, NIP or Fake AI).

This all came to me when I was bombarded with charts, and there are numerous ways that we are handed these charts, but it also gave me a consideration. You see, no matter how deeply you believe the data to be true, it remains a consideration that any data is flawed and through that setting not entirely trustworthy. 

You see, this is the country with the most migrants, but what I am missing is where they came from. I saw another article in the BBC, which gave us ‘La dolce vita: Is Italy the new tax haven for the global rich?’ (at https://www.bbc.com/worklife/article/20260421-is-italy-the-new-tax-haven-for-the-global-rich) where we see “In France you also have to pay a property tax (taxe foncière or land tax). “We don’t have that here for the prima casa (first home),” says Robert, although he notes “there is a high charge for refuse collection”. The best thing as far as he is concerned is that there is no inheritance tax on property you own in Italy up to €1 million ($1.1 million) and it’s only 4% beyond that threshold. In France the tax-free limit is much lower – €100,000 ($110,000) – and beyond that it’s a sliding scale up to a top rate of 45%.” The story is about the ‘global rich’? All this might be true, but I believe that there is a larger migration into Europe. The setting that Americans are leaving is a setting we got in the Wall Street Journal on February 25th 2026, where we saw “The U.S. experienced net negative migration in 2025, with an estimated loss of 150,000 people, a trend not seen since the Great Depression.” And if you are ‘really wealthy’, you skip Italy and go straight to Monaco, which is a zero-tax nation. So that first chart is nice, but where they came from is more interesting, especially in the era 2026-2028. 
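The gap in those quoted figures is easy to put into numbers. A simplified sketch using the BBC's Italy figures; France's progressive sliding scale is reduced here to its 45% top rate, so the French number is an upper bound, not the actual tax:

```python
def italy_inheritance_tax(property_value: float) -> float:
    # The BBC figures: tax-free up to EUR 1,000,000, then 4% on the excess.
    return max(0.0, property_value - 1_000_000) * 0.04

def france_inheritance_tax(property_value: float) -> float:
    # Simplification: EUR 100,000 tax-free, then the 45% top rate applied
    # flat; the real schedule is a progressive sliding scale, so this is
    # an upper bound, not the actual French tax bill.
    return max(0.0, property_value - 100_000) * 0.45

# For a EUR 1.5m property: Italy comes to about EUR 20,000, while the
# French upper bound is about EUR 630,000, which is the gap the
# article's 'global rich' are chasing.
```

Even allowing for the progressive scale on the French side, the order-of-magnitude difference survives, which is why the comparison sells the story.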

We then get the second chart, which shows us where the youth is scientifically. Here we get the first issue. There is consideration that these numbers are flawed in some cases. As some give us: “There are approximately 1.2 billion young people aged 15 to 24 globally”, and I know enough of the failings of data to give you the fact that there are no data sets giving us 1.2 billion records. As such plenty of nations have worked with mean values and that is the first failing on that chart. Second, it is nice to see the USA in 17th position, but they have a population of 349 million and not all can afford to go to university; then we get foreign students at MIT, UCLA, Princeton, Harvard and Yale. So how are they counted and what is disregarded? Several questions on a chart, because the data is missing (and footnotes too). So whilst these numbers might be indicative that those scoring over 500 are in a ‘safe’ place, that is only if we accept this number. And the explanation of those scores, with added footnotes on what is regarded as ‘valid’, is up for grabs. 

And then we get the main event, the one that baffled me for a moment, because it gave my thoughts optional validity. But then I need to be wary of a few settings, because without data, a chart is merely a weighted result and without N (total responses) there are reliability issues. 
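The reliability point can be made concrete: without N, a chart's percentage carries no stated precision. A standard margin-of-error sketch for a reported proportion:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a reported proportion p out of n responses."""
    return z * math.sqrt(p * (1 - p) / n)

# The same reported "17%" is a very different claim at different N:
# n = 100     -> roughly +/- 7.4 percentage points
# n = 10,000  -> roughly +/- 0.7 percentage points
```

Which is exactly why a chart that hides N is merely a weighted result: the same bar could be rock-solid or little better than a guess.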

We now see the top countries by natural resource value. It gives me my validity, as the United States is shown to have $45T in value and that is the setting that makes them optionally almost insolvent. Their debt is growing faster and faster and it is now said to be $38.9 trillion, which amounts to exceeding 100% of their Gross Domestic Product (GDP), but as we see it, they have almost spent the total of their natural resources. I have an issue with that, because the rare metals are not in that list, all whilst Wyoming, Utah, Colorado, New Mexico, and Arizona have them; as such that number is off (by a lot) and other nations have more (or less) natural resources than the chart sets out, all whilst these numbers are not given either. As such it is a nice chart, but incomplete and as such redundant. If I was to hazard a guess, this was a chart to show how ‘good’ Russia is doing, but as I never saw data on it at all, I have my issues with it. All charts look pretty cool, but cool doesn’t pay the baker (or the butcher for that matter). As such we need to wonder what the chart was doing; not what they tell you, but what they aren’t telling you.

That was just my setting on this and there is a lot more to consider. So whilst the first chart gave us “The U.S. hosts 17% of the world’s migrants”, my initial question was “Based on what data?” And as people might give us the setting that the AI gave them the numbers, we know that AI doesn’t yet exist. We are given the thought that it is merely DML and that is done by a programmer, and that programmer might miss a few beats to be optimistic (many more beats are likely to have been missed), and all this on flawed data? 

So what was the designer of that chart trying to persuade you to consider; what was ‘their’ issue? Because when someone makes a chart, they want you to look into a specific area, or not look in an area that also mattered. Have a great day, another Monday parked in front of my door; Vancouver still has the bulk of Sunday to get through. Ah well.

Leave a comment

Filed under Finance, IT, Media, Science

Label negativity

That is the setting and I almost fell into this. I have lived by the fact that all AI is fake AI and I still believe this, just like some believe that Donald Trump cannot say an intelligent word ever. That is just the beginning, but it is all about me now. I do believe that all AI is fake AI and as such, I almost ignored news from IBM given to us on May 5th. The article ‘IBM and Aramco Explore Collaboration to Accelerate AI and Innovation Across Saudi Arabia’ (at https://newsroom.ibm.com/2026-05-05-ibm-and-aramco-explore-collaboration-to-accelerate-ai-and-innovation-across-saudi-arabia) sounds like a joke. But when you consider that AI is DML (deeper machine learning) and LLM, some say that Machine Learning (ML) is enough, but why settle for half-baked? And consider that IBM has been working with Aramco since 1947; as such they have data, decades of data. As such we might frown at the words by Sami Al Ajmi, Senior Vice President at Aramco: “Technology and innovation are central to Aramco’s long-term strategy. This collaboration with IBM enables us to assess how industrial AI and other mutually-agreed domains can further enhance operational excellence and resilience, while reinforcing our leadership in Industrial AI—particularly in reliability, safety, and mission-critical environments.” But when you think of it, it is a NIP methodology with near 98% data efficiency and upholstery error checking, and whatever you might think of NIP, the setting with reliable data gets to be close to actual AI, because that data is likely a lot more efficient than any other company (except IBM and Oracle) might have. As such that version of NIP will accelerate a lot all over the Aramco field. It will not have data of things it never faced before, but this setting might not cover a whole area, merely spots. And don’t take my word for it. A software package made by Systat Software Inc. called Systat worked on that premise long before people started digging into ML and DML; they set that parameter and whilst it is now Grafiti LLC (after SPSS had a go at it and became IBM) it seems that this setting is a seemingly pure win for IBM. 

A setting that should also make all others reexamine and consider that whilst AI is fake, the groundwork that is DML/LLM is a good field to examine, and whilst we might giggle at the people mentioning and holding onto AI, DML/LLM is an established behemoth of software solutions. As I see it, when a company has been involved with IBM from nearly its infancy, that data is likely almost 100% foolish-user proof and has the error setting close to absolute zero. There are people who will disagree and consider that there are likely ID10T errors (a WAN/LAN expression that has grown over TCP/IP), but I believe that the Aramco/IBM partnership is almost fused together; they have worked together for decades towards IT infrastructure cohesion and as I see it, the government of Saudi Arabia harnessing its golden goose laying black eggs is a fusion that both parties regard as essential: the KSA to protect the income of its nation and the welfare of its citizens, and IBM to keep their customer happy and content. Happy is almost easy, content is not that easy and IBM managed both for decades. As such I think that this setting is one that will work and pay off. 

So whilst I see the statement: “By collaborating with Aramco, we are exploring how emerging technologies are addressing some of the world’s most complex industrial challenges, while reinforcing our shared commitment to continuous investment in innovation” as a little performative, the truth is that they have been working together for decades and there is little doubt in my mind that whatever comes from this will get the small percentages of gain closer towards 100%. And don’t mock this setting, because Aramco is likely to gain $4.1 billion for every 1% gained; as such this is about serious money. Not some kind of Azure wizard you see in almost every grocery store making them a few dollars per year. How much will they gain? I have no idea, because the oil refinery is set to a lot more than one product, but in this setting a 3% clear in the beginning is to be expected and that is over $12 billion, a billion for every month. When did you ever get that much of an increase of revenue? I only know of one man who achieved that, making it a one in 8.3 billion chance (that individual is labeled Elon Musk, look him up).
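The arithmetic behind those figures, using the numbers stated above:

```python
gain_per_percent = 4.1e9   # the stated ~$4.1 billion per 1% efficiency gained
early_gain_pct = 3         # the "3% clear in the beginning" assumed above

annual_gain = gain_per_percent * early_gain_pct   # $12.3 billion a year
per_month = annual_gain / 12                      # about $1.02 billion a month
per_week = annual_gain / 52                       # about $237 million a week
```

Rounding the gain down to $12 billion is what yields the roughly $230 million a week quoted below; either way it is a billion-a-month order of magnitude.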

So whilst some say that this is splitting the margins of profits, I say that either you put up that $230 million a week or shut up. A clear setting of simple math and IBM can do math like no one else does. Have a great day.

Leave a comment

Filed under Finance, IT, Media, Science

The path we make

The path we make is often set; for one, you cannot walk the path of (fake) AI without considering the side-roads called Data Verification and Data Validation. They are intertwined. And whenever I get to Data Validation, NASA tends to be on my mind. They have been on the Data Validation path since as early as the 70’s; long before whoever runs IBM/Microsoft/Google now, they were already looking at ways to support their validation tracks. So when I see the combination of NASA and DATA I tend to look up and take notice. So when we get ‘NASA POWER’s PRUVE Tool Streamlines Data Validation’ (at https://www.earthdata.nasa.gov/news/blog/nasa-powers-pruve-tool-streamlines-data-validation) where we see “NASA’s archive of Earth observation and modeling datasets has an incredibly diverse range of uses, and assessing data uncertainty is a critical step toward ensuring the data and analyses are accurate, reliable, and trustworthy. Several factors, such as instrument calibration, atmospheric corrections, and land-surface albedo, can affect the quality of satellite data. For users working with solar and meteorological datasets, quantifying uncertainty is especially critical, as these data often inform decisions and policymaking at the community level.” And this introduction leads towards the two quotes “NASA’s Prediction of Worldwide Energy Resources (POWER) project, which provides datasets from NASA in support of energy, buildings, and agroclimatology decisions, developed a tool that enables users to assess data uncertainty for selected surface variables from POWER’s data catalog with corresponding surface measurements.” And “The cloud-based tool — the PaRameter Uncertainty ViEwer (PRUVE) — makes assessing data uncertainty more straightforward for users across disciplines and skill levels. 
PRUVE uses surface observed site meteorological data from the National Oceanic and Atmospheric Administration (NOAA) and surface radiation data from Baseline Surface Radiation Network (BSRN) to compare against POWER-provided surface meteorological and radiation data values. This user-friendly application gives users an opportunity to quickly confirm data validation through customizable queries.”
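The comparison PRUVE performs, model values against surface observations, boils down to bias and RMSE. A minimal sketch with made-up irradiance numbers, not real POWER or BSRN data:

```python
import math

def bias_and_rmse(model, observed):
    """Mean error (bias) and root-mean-square error of model vs. observations."""
    errors = [m - o for m, o in zip(model, observed)]
    bias = sum(errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return bias, rmse

# Hypothetical daily irradiance values (kWh/m^2), purely illustrative.
power_model = [5.1, 4.8, 6.0, 5.5]
surface_obs = [5.0, 5.0, 5.8, 5.6]
bias, rmse = bias_and_rmse(power_model, surface_obs)
```

A near-zero bias with a non-zero RMSE, as in this toy set, is exactly the distinction a user needs: the model is not systematically off, yet individual days still carry uncertainty.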

So when we see “By creating the free, easy-to-use PRUVE tool, the POWER team instills an additional layer of trust, empowering users to tackle some of the most important long-term weather challenges facing our planet.” I feel doubt and I do know that this is in me, not because of what is promised, but consider the settings in the example we see “a student wanting to install a small wind turbine for a study project at their college. They are limited by size and cost, so they need to make sure the predictions and analyses are reliable. As part of the study, they can use wind and other historical data parameters available through POWER to forecast how much energy will be produced from the wind turbine system. The student wants to limit the level of uncertainty in their prediction calculations as much as possible.” All whilst we also see:

So where is the doubt? You see, for the most part there is no doubt in the powers that ‘reside’ within NASA, but when you see these facts, why is this system not ‘coexisting’ in the Google, IBM or Microsoft clouds? This system should (read: optionally could) be adjustable to these fake AI systems to smooth over validation and reduce error in whatever data there is. And I do know that it is not that simple, but consider the settings that are lacking now; the transference of these options might also fill the coffers of NASA and there is no way they don’t need that. And as my skeptical self realizes, nearly all the data systems on the planet require additional layers of trust, but that might merely be me. 

So as I see it, nearly all data systems are set towards some setting where there is some side solution towards data validity, all whilst there is a direct need to make checking the validity of data a main priority. So what happens when this solution gets additional layers of data validation, in part through statistics, setting statistical boundaries to see whether the data behaves in some normal way? But that approach is limited when an outlier is found, so how can that be validated? Then there are multiple factors where a value should behave in certain ways, but it would not be easy. I reckon that NASA could pull it off and it would be a tool that everyone needs. I merely wonder why no-one has considered it before. Now, I do understand that it is a tall order and I might be incorrect (read: full of it), but consider how meteorological numbers are achieved; consider that there will be error, but also a setting that reduces error in validation. A system that reiterates the data given and considers whether validation passes or fails. A system like that could be made, but the issue is the outliers, so what makes an outlier valid? Because if one outlier is wrongfully ‘deleted’ the data set could become invalid. So is this possible? I think that only NASA with its expertise could make such a system a reality, making data validation more readily available. Because no matter what verification process follows, and whilst we await the coming of real AI, validation will still be a setting that is required in whatever data system comes to the surface of true AI. And perhaps the system will become a verification setting; both are required and neither system seems to be ‘correctly’ developed at present. It is a horrible conundrum, but it requires contemplating, as such a system is needed by the time Real AI comes to all our doorsteps. 

The additional issue I see is that in this case the PRUVE tool has all these connecting data segments, but what happens when it is a little more complex? We have all our minds set to ‘connected’ data, but it isn’t that simple at times. Consider the ludicrous setting of length and shoe size. Now we can understand the setting of a 4’8” person with 17” shoes (he wishes), but is it out of the realm of possibilities? There is a girl named Shae, who claims she knows one person with that description (Game of Thrones joke). So how would you be able to validate this? Perhaps other data is required to make the clear distinction valid, and how could such a system make validation reliable? As I see it, the biggest problem in validating data is being able to recognise the outliers. I see the deletion of outliers as a problem; the data loses reliability and verification becomes next to impossible. It’s like watching a dataset limited without data from the Interquartile Range (or 3-Sigma Rule) and as I see it, whatever data you remain with makes actions like fraud detection close to impossible (unless that transgressor is extraordinarily stupid). You see, there is the ‘old’ premise that “Outliers can bias statistical estimates, causing inaccurate results in predictive models or misrepresentations in descriptive statistics.” I am not saying it is incorrect, but the absence of outliers could make the validity of that data a lot more dubious, and finding this is a real challenge. So as far as I see it, that is a job for NASA (the keyword Superman was already taken by DC Comics). 
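The Interquartile Range rule mentioned above can flag outliers for inspection instead of deleting them, which keeps the dataset whole for verification. A small sketch:

```python
def iqr_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] instead of deleting them."""
    s = sorted(values)

    def quantile(q):
        pos = q * (len(s) - 1)            # linear interpolation between ranks
        lo = int(pos)
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + (s[hi] - s[lo]) * (pos - lo)

    q1, q3 = quantile(0.25), quantile(0.75)
    spread = k * (q3 - q1)
    return [v for v in values if v < q1 - spread or v > q3 + spread]

# The length-vs-shoe-size joke above: one 17" entry among ordinary sizes.
shoe_sizes = [8, 9, 9, 10, 10, 10, 11, 17]
```

The point is that the rule only flags the 17; whether that record is a fraud signal, a data-entry error or a genuinely large-footed person is exactly the validation question that deleting it would erase.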

So see this as a little trip on the brainstorming front, I definitely need a hobby and I am all out of licorice.

Leave a comment

Filed under IT, Science

I call it fake for a reason

I was battling what to write about and there was Elon Musk giving me a perfectly good reason right off the bat. Well, it wasn’t Elon who gave me the idea, it was his product Grok. I have always said that AI is not real because of the missing parts, and it comes with a few constraints by certain (so-called) captains of industry who are lacking in several ways. It is also connected to some other things I do. You see, no matter how you come at it, how much you innovate the idea, you will end up with a mere 0.1%-1% of the true value of the product. Today’s ‘captains’ are utterly set into the exploitation of everything they see. As such I put it on my blog. When my stuff is in the open they cannot really claim any innovation. You see, the IP is no longer protected by intellectual property laws, and the public is free to use, share, and build upon these works without seeking permission from the original creator. I might get something out of it, but for the most part I get the satisfaction that these ‘captains’ see the loss of an idea towards everyone. If I am unable to get something out of it, it will become Public Domain and perhaps it will spread my fame in that way. Some will smile at this and call me stupid (or a fool) but I am out of their reach for exploitation. As I see it, I gave the world over a dozen options for enrichment and in this way the Indie developers get a leg up without fear that a larger player will cut them out. Small comfort. But that is what is.

So, whilst I diverted, it was for a reason. You see, the AI of now is fake AI (at best), all of them are, because the two elements missing are evolved versions of shallow circuits; as stated (for as far as I know) IBM has the strongest version of this, but still another system is required, a trinary operating system. Binary will not do for AI; the setting of Null, False, True and both is required for a true AI to come and no-one has that yet. A Dutch physicist got the Epsilon particle made (or found); this was going to be instrumental in evolving this in an IT setting (most likely through yet-undetermined means), but I digress. What I believe to be a weakness doesn’t make it true. Alternative evidence is needed and I found it a few times over, but in this case I will revert to my last story ‘As oil burns’ which I published on May 4th, 2026 at 12:33. About an hour later I used Grok to look at my story. The first view after an hour was:

This is what AI does? Is that really a view on what I wrote on: https://lawlordtobe.com/2026/05/04/as-oil-burns/

A story containing 986 words, with more than 523 words (over 53%) on Russia, yet the top line gives zero consideration to Russia. It gave me another thought, but I’ll get to that later. The second view (on the same text) was after 6 hours and there we see:

So what AI requires 6 hours to give a better view of the same text? So, is my view of ‘Fake AI’ still wrong? As you can see, the first part also gives no mention of the BBC and a few other parts. I got to the thought that this DML/LLM engine is allegedly used to filter out certain parts, until it can no longer hide a few things. Don’t forget, whatever is done in DML/LLM is programmed by engineers, and whatever they say it is, that is what it becomes. People forget that and it is why they fall in the AI trap, even though some clearly see that it is a fake solution. Don’t get me wrong, DML and LLM are amazing inventions, but the courts will see through this and someone will blame the programmers and their bosses, this is why I saw the court cases come to blows in 2026. I particularly liked AI Misuse in Australian Courts (2026) where we see “over 73 cases identified where GenAI produced false citations.” So what AI produces false citations? That requires a programmer. In addition, related to that is Warner v. Gilbarco, Inc. (February 2026) where we see the quote “AI to assist in case preparation does not automatically waive attorney-client privilege, characterizing broad requests for AI-generated documentation as a “fishing expedition””. Does this imply the AI uses deception to give us a “fishing expedition”, or did (a massive perhaps) a programmer set this situation? As the evidence is added up, we get to see a different setting, a setting that gives notice that we should aim our attention at the programmers and their bosses. So at some point the influencers will be called into court and it is already happening: “legal battles surrounding AI influencers, digital replicas, and content generation have shifted toward establishing liability for harmful outputs and defining the limits of AI-generated content protection. 
Key developments in early 2026 include lawsuits over AI-generated sexual content and major court decisions regarding copyright of AI-driven work.” Where we see (at present):

And as these cases are resolved, the influencer drive of AI will dissipate and we get these bosses to ‘present’ their view, but they will be careful as they are decently unwilling (as I see it) to become liable. So whilst I will look to find a party to allocate $5M (post taxation) to my coffers, I will try to remain vigilant and see what other things some of these ‘Captains of industry’ have been overlooking. Apparently some say I need a hobby, time will tell. Have a great day.
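As a closing aside: the ‘Null, False, True and Both’ setting I mentioned at the top of this piece maps neatly onto Belnap’s four-valued logic. A minimal sketch in Python, purely my own illustration of the idea and not any vendor’s implementation:

```python
# Belnap-style four-valued logic: each value is a pair
# (evidence for, evidence against). This is an illustrative
# sketch of the Null/False/True/Both setting, nothing more.
from enum import Enum

class V(Enum):
    NONE = (False, False)   # Null: no evidence either way
    FALSE = (False, True)   # evidence against only
    TRUE = (True, False)    # evidence for only
    BOTH = (True, True)     # conflicting evidence

def v_and(a: V, b: V) -> V:
    # AND: 'for' needs both sides, 'against' needs either side
    t = a.value[0] and b.value[0]
    f = a.value[1] or b.value[1]
    return V((t, f))

def v_or(a: V, b: V) -> V:
    # OR: dual of AND
    t = a.value[0] or b.value[0]
    f = a.value[1] and b.value[1]
    return V((t, f))

def v_not(a: V) -> V:
    # NOT swaps the evidence for and against
    t, f = a.value
    return V((f, t))

print(v_and(V.BOTH, V.TRUE))  # V.BOTH
```

The point being: ‘Both’ captures conflicting evidence, which a plain true/false system has to throw away, and that is exactly the gap I keep pointing at.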

Leave a comment

Filed under IT, Media, Science

Battle lines

As per yesterday several things occupy my brain, even a new technology (which I will discuss at a later stage), but today is about OpenAI and Microsoft. I was ‘alerted’ to this yesterday through Seeking Alpha. I think I heard it before that, but I ignored it. Seeking Alpha (at https://seekingalpha.com/news/4579947-microsoft-falls-as-openai-partnership-evolves-says-it-will-no-longer-pay-revenue-share) gives us ‘Microsoft in focus as OpenAI partnership evolves, says it will no longer pay revenue share’ and we are given “Microsoft (MSFT) shares rose fractionally on Monday as the tech giant and OpenAI (OPENAI) said their partnership has continued to evolve, and OpenAI’s license will become non-exclusive. “Today, we are announcing an amended agreement to simplify our partnership and the way we work together, grounded in flexibility, certainty and a focus on delivering the benefits of AI broadly,” Microsoft wrote in a statement on its website. “The greater predictability in the amended agreement strengthens our joint ability to build and operate AI platforms at scale while providing both companies the flexibility to pursue new opportunities.”” In my mind I hear “Someone has figured out that this setting is based on shallow settings, the reality is dawning on them”, so whilst we are given “As part of the altered agreement, Microsoft will remain OpenAI’s primary cloud partner, and OpenAI products will ship on Azure first. However, there is now a tweak that says if Microsoft “cannot and chooses not to support the necessary capabilities,” OpenAI can go elsewhere. Julian Lin, Investing Group Leader for Best Of Breed Growth Stocks, said the deal is actually a “net positive” for Microsoft, despite the share price reaction.” I personally believe that OpenAI might present a hardcore liability for Microsoft and they are seeking to insulate themselves from that fallout.
And it might be merely my feelings in this and that is fine, but when you see the Anthropic setting, the DeepSeek setting, there are several other elements roaring in a near ugly herd, and that has to go somewhere, something has got to break and it seems the ‘staged’ setting of evolutionary contract agreements might be part of all that. In retrospect I have no idea how OpenAI and Musk will battle their settings (and I partially do not care either). But the elements are there and whilst we are all about OpenAI, this concept selling setting rubs me the wrong way. So whilst we ‘might’ see ‘OpenAI Misses Key Revenue, User Targets in High-Stakes Sprint Toward IPO’, all whilst some say “do you guys even use ChatGPT/OpenAI anymore? I find myself preferring Claude/Gemini to be honest”, I take a different turn, I don’t use any of them. Basically because they are all fake AI. Real AI is about a decade away, if not two decades. I might die before real AI is released, so I kinda do not care.

ComputerWorld, only today (a mere few hours ago), gave us (at https://www.computerworld.com/article/4163971/microsoft-openai-change-contract-terms-again.html) ‘Microsoft, OpenAI change contract terms–again’, which starts with “When the two firms announced a revised agreement on Monday, it reinforced the need for enterprise IT executives to work with as many major AI players as possible, given the constantly changing landscape.” I do not disagree, but remember that Microsoft went all out about 5 years ago and whilst we saw all kinds of ‘total wreck approaches’ the ‘partnership’ went on, and now that we see “the need for enterprise IT executives to work with as many major AI players as possible”, we might accept that, but we see no DeepSeek, do we? So whilst we see that Microsoft increased its stake and solidified its position as a major investor less than 6 months ago, these plans are now changing. So does Microsoft see something, or do they fear something? And then ComputerWorld gives us “One key component within earlier versions of the Microsoft-OpenAI deal was the change in the relationship if OpenAI ever achieved artificial general intelligence (AGI), a term that eludes a concrete definition but generally refers to AI that equals or exceeds human capabilities.” I find it funny because of all these definitions across the fake AI field. Do they really not see that it is about to fall apart? (Story to follow, likely tomorrow.) And when this war of the fakers is seen (OpenAI, Google, Anthropic) there is every chance that OpenAI ends up in last position (see another ‘winner’ chosen by Microsoft). This war setting is almost real, but until there is a real revenue stream coming in, there is unlikely to be a real winner.
So whilst ComputerWorld focuses on the market changes with “Analysts and consultants generally agreed that this altered agreement will reinforce, and should extend, the current enterprise IT trend of hedging bets by striking arrangements with a variety of AI providers, including the major hyperscalers. Beyond future-proofing enterprises’ AI efforts, some of those agreements are for practical issues, such as the need to work with global AI firms specializing in different languages that the enterprise needs.” And you already know where this goes next. So, when was the last time you saw these kinda bla bla settings in the last 45 years? I tend to go back to the early 90’s where they all tried to sign businesses up to concept selling, all whilst there was no revenue stream detectable. We see it now here. I get that analysts are not the most revenue sturdy people, but consultants need their revenue streams. It is their bread and butter. And what was that “for practical issues” about? You see, ComputerWorld writes a good story and revenue is mentioned four times; the third mention is shown next: “In addition, the company’s role as a major investor in OpenAI is driving a different revenue relationship, it said: “Microsoft will no longer pay a revenue share to OpenAI. Revenue share payments from OpenAI to Microsoft continue through 2030, independent of OpenAI’s technology progress, at the same percentage but subject to a total cap.”” Interesting how salespeople are not that fussed about revenue. It is their income and bonus setting. So what was this really about?

Wouldn’t we like to know this? Just a few settings for today’s stride in the coming week. And now I need to contemplate what I write about next: the bad news, or the new technology. My conundrum for the last 4 hours of the day.

Have a great one today.

Leave a comment

Filed under Finance, IT, Media, Science

Tomorrow came today

That is the setting and it is given to us by the Khaleej Times. There are two articles, the first one (at https://www.khaleejtimes.com/business/tech/carry-less-do-more-the-huawei-matepad-mini-advantage) gives us ‘Carry less, do more: The HUAWEI MatePad mini advantage’ and it shows us the new Huawei setting, all in Harmony Next. So while we might consider “The 8.8-inch OLED PaperMatte display is considerably larger than any other ebook reader of this size and offers incredibly vibrant colours. Saying this is the best ebook reader ever is not a hyperbolic statement. While that alone makes the tablet worth having, it is only the tip of what the MatePad Mini has to offer.” It is not the real power; that comes from the mindset of the consumer. You see, I’m what some call a brand bitch. I like my Sony TV (and my PlayStation more), I like my Apple devices (except that Apple phone thingamajig) and I love my Android phone. We are what we embrace, and now comes Huawei, in a world where the United States claims that China is evil, giving us the new settings. You see, that anti-China voice is kinda nice, but as the confidence in the United States is waning with 6 billion people, that anti-China rhetoric becomes stale and lacks credibility. And now Huawei, who awaited their time, is voicing into the Middle East that there is a non-United States alternative. And that comes with a few additional loopholes.

So whilst we are given “Beyond readability, the MatePad Mini supports a peak brightness of 1800 nits, a 120 Hz refresh rate, and a P3 wide colour gamut for rich, lifelike visuals. Easily pocketable and featuring a vibrant, high-resolution, paper-like display, the MatePad Mini is a strong alternative to traditional eBook readers.” as well as “Powering all of this is a 6400 mAh battery, capable of delivering up to 9.5 hours of usage under dynamic conditions, and it can be filled up from zero in just 60 minutes using Turbo mode. The HUAWEI MatePad Mini is compact enough to carry anywhere, yet powerful enough to handle everything from reading to serious productivity and creative work.” And that is beyond the additional apps that give us a rather large function area. This is the first time that Apple faces a competitor larger than they are, offering more, and all at a reduced price. So whilst I am Apple minded for my iPad, Huawei now has an alternative and it is loaded with functionality. Is it enough? I am not certain, but as the anti-United States feelings emerge (due to the current administration) and the feeling of resentment grows, Huawei now has a clear path into Europe and people are fed up with the anti-China sentiment. Especially as it lacked evidence for the longest time, and now the United States is told to stay in its place. The sentiment against American corporations grows too and there are two settings that fuel this.
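As a quick sanity check on those quoted numbers: 6400 mAh from zero in 60 minutes works out to roughly a 1C average charge rate. Note that the nominal cell voltage below is my own assumption (a typical Li-ion figure), not a published Huawei specification:

```python
# Back-of-envelope check on the quoted battery specs.
# Assumption: ~3.85 V nominal Li-ion cell voltage (typical, not published).

capacity_mah = 6400      # quoted battery capacity
charge_time_h = 1.0      # "zero in just 60 minutes"
nominal_v = 3.85         # assumed nominal cell voltage

avg_current_a = capacity_mah / 1000 / charge_time_h   # average charge current
avg_power_w = avg_current_a * nominal_v               # average charge power
c_rate = avg_current_a / (capacity_mah / 1000)        # charge rate in C

print(f"average current: {avg_current_a:.1f} A")   # 6.4 A
print(f"average power:   {avg_power_w:.1f} W")     # 24.6 W
print(f"charge rate:     {c_rate:.1f} C")          # 1.0 C
```

In other words, Turbo mode has to sustain roughly 25 W into the cell on average, so the wall charger is presumably rated well above that.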

The second setting is given to us (at https://www.khaleejtimes.com/business/tech/ai-without-the-hype-the-new-honor-600-redefines-the-smartest-smartphone-experience) where we see ‘AI without the hype: The new HONOR 600 redefines the smartest smartphone experience’ and that is the missing element, ‘without the hype’. It redefines the setting of DML and ML, because that is the setting of these Fake AI worlds. Fake AI is hyped by the United States and some resent it (like me) because it is stupid. DML and ML are great tools and they come with LLM settings, which is also a great tool, but it is no AI, so as we are given this, we are more easily in acceptance of this. So whilst we see “In a market flooded with overpromised AI features, the HONOR 600 stands apart, pairing a stunning 200MP camera, intuitive AI tools, and marathon battery life into a device that feels as premium as it performs” we see a delivery well beyond any phone out there today: the 200MP camera. So whilst we are given “I’ve spent a little time with the new HONOR 600 these last few days, and from the moment I picked it up, it felt like I was holding something far more premium than its category suggests. The design immediately stands out. It’s slim, sleek, and beautifully balanced in the hand. The finish of our test mule in the “Golden White” colourway (there are two other colours available: Black and Orange) catches the light in a subtle but striking way, and the overall build feels refined without being flashy. It’s the kind of phone you instinctively want to show off, not because it’s loud, but because it’s quietly elegant.” We see the next device in HarmonyOS and it will be a threat to Android and iOS. Their 200 MP made it so, and whilst we see the stages where some will debate (the ‘but this’ and ‘but that’ people), we see a setting that is mouth-watering for people and influencers alike (influencers are considered to be non-people).

What we have is the setting for the new stages. We see that Huawei is more readily accepted, and that comes with the optional Huawei data centers, and that is where the United States will truly be shown the door. And as Huawei gains traction via the Middle East, there is every indication that the larger stages in Bangladesh, India and Indonesia will embrace that setting, as these places together hold well over one and a half billion people, and Huawei will gain traction to over 2 billion people in this year alone. That is the setting everyone missed and that is what is likely to happen. And this is the stage that the United States fears, because their ‘big beautiful whatever’ depends on an audience, and one third of the global stage went somewhere else. I reckon that Germany is the first to gain Huawei powers in the EU, followed by some of the other members. My money is on the Scandinavian members, driven by Denmark (because of Greenland) and Norway (because of Microsoft), and that will merely be more and more movement towards China. And whilst some will debate the bad things that are China, you forget about the 8 billion people; they are driven by consumerism and quality stuff and Huawei is showing quality. As I see it, it is the first time they are outdoing Apple, and that is before you consider the Huawei MateBook Fold. So when the new applications hit these solutions and when (perhaps they already are) we see interaction between the three, you know that Apple is outdone and Google will be in a tough spot. It was never their ambition to be in this situation, but some idiot in the American administration made China develop their own OS, because Android was no longer available to them. Who was that again?

So we now get a new setting and I reckon it will come to blows in 2027, even as Huawei is already ready in 2026. It is a stage that is now up for grabs, and when these four factors (tablet, phone, laptop and data center) become available, the United States will be pricing itself out of all the above. So we are likely to see Gulf States, India, Bangladesh, Indonesia and Europe all switching, and whilst the United States sees its influence shrinking from 6.5 to 6.2 to 4.9 to 4.5 to 4.1 to 3.8 billion in audience, panic will hit, because that implies an expected growth in Huawei data centers, and even if it does not all go to a Huawei data center, the premise that it all remains with American data centers is absolutely ludicrous. So whilst the United States depletes its weapons even further on Iranian soil, it is merely fueling disgust in the rest of the global population. A setting that was almost clear from the start. So where do you think this audience goes when it is reduced to a mere 4.1 billion? You might think that it is clear, but the Muslim population is almost 2 billion, so do you think that Iran will entice them to stay? Or will they merely fuel the drive towards Huawei?

Have a great day this day.

Leave a comment

Filed under IT, Media, Politics, Science

Accusation without evidence

That is the path I saw today on the BBC (at https://www.bbc.com/news/articles/cpqxgxx9nrqo), now hear me out. Even as we are being told ‘White House memo claims mass AI theft by Chinese firms’, we have to acknowledge that it comes from that same place that gave us “‘someone’ claimed “$18 trillion” in new investments”, “prices are down” and “Ukraine for starting the war with Russia, suggesting they should have surrendered territory to avoid it”, as such I am willing to disbelieve this. Also, China has DeepSeek and it does so (it’s speculation) at a fraction of the cost.

And whilst we are getting “The White House has said it will work more closely with US artificial intelligence (AI) firms to combat “industrial-scale campaigns” by foreign actors to steal advances in the technology. Michael Kratsios, Director of Science and Technology Policy, wrote in an internal memo that the administration had new information indicating “foreign entities, principally based in China” were exploiting American firms.” my mind goes in different directions. The first being:

My mind is racing towards a different setting. You see, OpenAI and its ‘co-conspirators’ are not delivering on their premise; the people who gave them well over half a trillion dollars want to see a return on investment, none is coming, and now (not unlike the concept sellers in the 90’s) they need a blamable party. So what is easier than to blame China? Now, I am not saying that China is innocent, but in all this one might need evidence to make a case and none of it seems to be coming. As such we are given ““foreign entities, principally based in China” were exploiting American firms. Through a process called “distilling”, such firms are essentially copying AI technology developed by US companies, he said.” OK, I’ll bite, so where is the evidence? Why, if this distilling is a problem, are these outputs not better protected, so there is no ‘distilling’? Simple question; perhaps when Oracle was needed, the cheapskates decided to rely on Azure? I have no idea, I am merely offering options as the evidence is clearly lacking.
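For readers wondering what ‘distilling’ technically means here: a student model is trained to imitate a teacher model’s softened output probabilities. A minimal NumPy sketch with toy numbers of my own, not any real system’s code:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Softmax with a temperature that softens the distribution."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student outputs,
    scaled by T^2 as in the standard distillation recipe."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    kl = (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=-1)
    return float(kl.mean()) * temperature ** 2

# Toy check: a student close to the teacher scores a lower loss
# than one that disagrees with it.
teacher = np.array([[2.0, 0.5, -1.0]])
close = np.array([[1.9, 0.6, -0.9]])
far = np.array([[-1.0, 0.5, 2.0]])
print(distillation_loss(close, teacher) < distillation_loss(far, teacher))  # True
```

The point for this story: distillation only needs query access to a model’s outputs, which is exactly why output protection and rate limiting (the cyber security part) matter so much.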

So whilst the article ends with “While Kratsios did not name any foreign entities, leading AI companies like OpenAI and Anthropic have said they are dealing with such distillation activity.” I reckon that the distillation culprits like House Spirits Distillery and Angostura Distillery were made exempt?

You think that I am making a funny, and I was, but this has been going on for months and these so-called high-priced (fake) AI corporations have been absent in their cyber security? How does this distilling happen? All things missing from the BBC article and unlikely to be on the mind of the White House, as the article seems to imply from the very beginning, where we saw “it will work more closely with US artificial intelligence (AI) firms to combat “industrial-scale campaigns” by foreign actors to steal advances in the technology”. You see, the first part would be ‘How did they achieve this?’, which we do not see, and the state of cyber security we don’t see either; both seem rather obvious in that setting.

So as I said, China might not be innocent, but in that same setting we see that the United States and their (fake) AI firms are apparently clueless. Don’t take my word for it, just look at the scraps on this table and see where the crumbs aren’t dealt with; I see no part in all this that shouts ‘China is guilty’, that would require actual evidence. So if evidence is seemingly not required, counter this with the idea that the AI scheme is part of a scam to wipe out trillions on the exchange, which might be the case, but the setting of ‘no evidence’ is apparently in effect and that goes both ways. As I see it, someone wants to see evidence of AI, and whilst they invested billions, there is a greed driven setting that the profits all go to China as they stole the plans, but is that really so? Even distilled plans need refinement and the source data is missing. So, how would they proceed? The setting does not make complete sense to me. Any innovation requires a foundation, even DeepSeek would like to have one, or it is simply a sifting solution and the power remains with these innovative wannabes (sorry, a paraphrased term).

So wonder why the accusation was made, because that setting is likely to be in dollar numbers, and where is that money now? Have a great day.

Leave a comment

Filed under Finance, IT, Science