Tag Archives: OpenAI

SYSMIS(plenty)

Yes, this is sort of a hidden setting, but if you know the program you will be ahead of the rest (for now). Less than an hour ago I saw a picture with Larry Ellison (he must be an intelligent person, as we have the same first two letters in our first name). But the story is not really that, perhaps it is, but I’ll get to that later.

I will agree with the generic setting that most of the most valuable data will be seen in Oracle. It is the second part I have an issue with (even though it sounds correct): yes, AI demand is skyrocketing. But as I personally see it, AI does not exist. There is generative AI, there are AI agents and there are a dozen settings under the sun advocating a non-existing realm of existence. I am not going into this, as I have done that several times before. You see, what is called AI is, as I see it, mere NIP (Near Intelligent Parsing) and that does need a little explaining.

You see, like the old chess computers (of the ’90s), they weren’t intelligent; they merely had in memory every chess game ever played above a certain level, and all those moves were in these computers. As such there was every chance that the chess computer came into a setting where that board position had been encountered before, and it tried to play from that point onwards. It is a little more advanced than that, but that was the setting we faced. And wouldn’t you know it, some greed-driven salesperson will push the boundary towards the setting where he (or she) will claim that the data you have will result in better sales. But (a massive ‘but’ comes along) that is assuming all the data is there, and mostly that is never the case. So if we see the next image:

You see that some cells are red; there we have no data, and data that isn’t there cannot be created (sort of). In market research it is called System Missing data. They know what to do in those cases, but the bulk of all the people trying to run and hide behind their data will be in the knowing-nothing pool of people. And this data set has a few hidden issues. Responses 6 and 7 are missing. So were they never there? Is there another reason? All things that these AI systems are unaware of, and until they are taught what to do, your data will create a mess you never saw before.

Salespeople (for the most part) do not see it that way, because they were sold an AI system. Yet until someone teaches these systems what to do they aren’t anything of the sort, and even after they are taught there are still gaps in their knowledge, because these systems will not assume until told so. They will not even know what to do when it goes wrong until someone tells them that, and the salespeople using these systems will revert to ‘easy’ fixes, which are not fixes at all; they merely see the larger setting become less and less accurate in record time. They will rely on predictive analytics, but that solution can only work with data that is there, and when there is no data, there is merely no data to rely on.

And that is the trap I foresaw in the case of [a censored software company] and the UAE and oil. There are too many unknowns, and I reckon that the oil industry will have a lot more data and bigger data, but with human elements in play we will see missing data. And the better the data is, the more accurate the results. But as I saw it, errors start creeping in, more and more inaccuracies are fed into the predictive data set, and that is where the problems start. It is not speculative, it is a dead certainty. This will happen. No matter how good you are, these systems are built too fast, with too little training and too little error seeking. This will go wrong.
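To make the ‘no data is merely no data’ point tangible, here is a minimal sketch (my own toy numbers, not the dataset in the image) of what System Missing entries do to even a simple average:

```python
# Toy survey column: nine respondents, responses 6 and 7 are system missing
responses = {1: 4, 2: 5, 3: 3, 4: 4, 5: 5, 6: None, 7: None, 8: 2, 9: 4}

present = [v for v in responses.values() if v is not None]

# The 'easy' fix: treat missing as zero -- it silently drags the average down
naive_mean = sum(v or 0 for v in responses.values()) / len(responses)

# Listwise deletion: honest, but the effective sample shrinks
valid_mean = sum(present) / len(present)
coverage = len(present) / len(responses)

print(f"naive mean: {naive_mean:.2f}")   # missing counted as zero
print(f"valid mean: {valid_mean:.2f}")   # missing excluded
print(f"coverage:   {coverage:.0%}")     # how much data actually exists
```

The ‘easy’ fix quietly changes the answer, and honest deletion leaves only 78% of the sample; neither outcome is something these so-called AI systems will flag until someone teaches them to.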
Still, Larry is right: “Most of the world’s valuable data is in some system”.

The problem is that no dataset is 100% complete; it never was, and that is the miscalculation the CEOs of tomorrow are making. And the assumption modes of the salesperson selling and the salesperson buying are in a dwindling setting, as they are all on the AI mountain whilst there is every chance that several people will use AI as a gimmick sale without a clue what they are buying, all whilst these people sign an ‘as is’ software solution. So when this comes to blows, the impact will be massive. We recently saw Microsoft standing behind Builder.ai, and it went broke. It seems that no one saw the 700 engineers programming it all (in this case I am not blaming Microsoft), but it leaves me with questions. And the setting of “Stargate is a $500 billion joint venture between OpenAI, SoftBank, Oracle, and investment firm MGX to build a massive AI infrastructure in the United States. The project, announced by Donald Trump, aims to establish the US as a leader in AI by constructing large-scale data centers and advancing AI research. Initial construction is underway in Texas, with plans for 20 data centers, each 500,000 square feet, within the next five years” leaves me with more questions. I do not doubt that OpenAI, SoftBank and Oracle all have the best intentions. But I have two questions on this. The first is how to align and verify the data, because that will be an adamant and also an essential step in this. Then we get to the larger setting that the data needs to align within itself. Are all the phrases exact? I don’t know, this is why I ask, and before you say that it makes sense that they do, reality gives us the ‘square-windowed airplanes’ of 1954, when two planes broke apart in mid-flight because metal fatigue was causing small cracks to form at the edges of the windows, and the pressurized cabins exploded. Then we have the Mars Climate Orbiter, where two sets of engineers, one working in metric and the other working in the U.S.
imperial system, failed to communicate at crucial moments in constructing the $125 million spacecraft. We tend to learn when we stumble, that is a given, so what happens when issues are found in the 11th hour of a 500 billion dollar setting? It is not unheard of, and I saw one particularly speculative setting: how is this powered? A system on 500,000 square feet needs power, and 20 of them a hell of a lot more. So how many nuclear reactors are planned? I actually have an interesting idea (keeping this to myself for now). But any computer that loses power will go down immediately, and all that training time is lost. How often does that need to happen for it to go wrong? You can train and test systems individually, but 20 data centers need power, even one needs power, and how certain is that power grid? I actually saw nothing of that in any literature (it might be that only a few have seen it), but the drastic setting from salespeople tends to be: let’s put in more power. But where from? Power is finite until created in advance, and that is something I haven’t seen. And then the time setting ‘within the next 5 years’. As I see it, this is a disaster waiting to happen. And as this starts in Texas, we have the quote “According to Texas native, Co-Founder and CFO of Atma Energy, Jaro Nummikoski, one of the main reasons Texas struggles with chronic power outages is the way our grid was originally designed—centralized power plants feeding energy over long distances through aging infrastructure.” Now I am certain that the power grid of a data centre will be top notch, but where does that power come from? And 500,000 sqft needs a lot of power; I honestly do not know how much. One source gave me “The facilities need at least 50 Megawatts (MW) of power supply, but some installations surpass this capacity. The energy requirements of the project will increase to 15 Gigawatts (GW) because of the ten data centers currently under construction, which equals the electricity usage of a small nation.” As such the call for a nuclear reactor comes to mind, yet the call for 15 GW is insane, and no single reactor at present exists to handle that. 50 MW per data center implies that where there is a data centre a reactor will be needed (OK, this is an exaggeration), but where there is more than one (up to 4) a reactor will be needed. So who was aware of this? I reckon that the first centre in Texas will get a reactor, as Texas has plenty of power shortages and the increase in people and systems warrants such a move. But as far as I know those things require a little more than 5 years to build, and depending on the provider there are different timelines. As such I have reasons to doubt the 5 year setting (even more when we consider data).
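For what it is worth, a quick back-of-the-envelope sketch (my own arithmetic; none of the derived figures come from any Stargate publication) shows why the quoted numbers raise eyebrows:

```python
# Back-of-the-envelope check on the quoted figures (my own arithmetic,
# not from any Stargate documentation)
mw_per_centre_min = 50          # quoted minimum per data centre, in MW
centres_planned = 20
project_gw = 15                 # quoted project requirement, in GW

floor_demand_gw = mw_per_centre_min * centres_planned / 1000
per_centre_gw = project_gw / 10  # the 15 GW quote covered ten centres

print(f"floor at 50 MW each: {floor_demand_gw} GW")   # 1.0 GW
print(f"implied per centre:  {per_centre_gw} GW")     # 1.5 GW
# A large nuclear reactor delivers roughly 1 GW(e), so 15 GW would take
# the output of an entire fleet, not a single reactor.
```

Even at the bare 50 MW minimum, the 20 centres need a full gigawatt, and the 15 GW quote implies 1.5 GW per centre, more than a typical large reactor delivers; either way the power question is not a footnote.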

As such I wonder when the media will actually look at the settings and what will be achievable as well as implemented, and that is before we get to the training data of these capers. As I personally (and speculatively) see it: will these data centers come with a warning light telling us SYSMIS(plenty), or a ‘too many holes in data’ error? Just a thought to have this Tuesday.

Have a great day and when your chest glows in the dark you might be close to one of those nuclear reactors. 


Filed under Finance, IT, Media, Science

The call for investors

That is at present the larger setting: everyone wants investors and they all tend to promise the calf with golden horns. As I see it, investing in gold mining, oil mining and a few others is a near dead-certain return on investment. The larger group will seemingly want to invest in AI, the new hype word. Still, considering that Builder.ai went from a billion plus to zilch, it is a nice example of what Microsoft-backed solutions tend to give. You see, the larger picture that everyone is ignoring is that it was backed by Microsoft. Now, this might be OK, because Microsoft is a tech company. But consider that Builder.ai (previously known as Engineer.ai) was supposed to be all ‘good’, yet the media now reports ‘Builder.ai Collapsed After Finding Sales ‘Inflated By 300 Percent’’. This leads me to believe that there was a larger problem with this DML/LLM solution. Another source gives us ‘Builder.ai’s Collapse Exposes Deceptive AI Claims, Shocking Major Investors’ and yet another gives us ‘Builder.ai collapse exposes dangers of ‘FOMO investing’ in AI’, yet that is nothing compared to what I said on November 16th 2024 in ‘Is it a public service’ (at https://lawlordtobe.com/2024/11/16/is-it-a-public-service/) where I stated “a US strategy to prevent a Chinese military tech grab in the Gulf region” and “it is my insight that this is a clicking clock. One tick, one tock leading to one mishap and Microsoft pretty much gives the store to China. And with that Aramco laughingly watches from the sidelines. There is no if in question. This becomes a mere shifting timeline and with every day that timeline becomes a lot more worrying.” With the added “But several sources state ‘There are several reasons why General AI is not yet a reality. However, there are various theories as to why: The required processing power doesn’t exist yet. As soon as we have more powerful machines (or quantum computing), our current algorithms will help us create a General AI’ or to some extent. Marketing the spin of AI does not make it so.” You see, the entire DML/LLM is not AI, as we can see from the Builder.ai setting (a little presumptuous of me), but there is the setting that we get inflated sales, and then the Register ended their article with “The fact that it wasn’t able to convince enough customers to pay it enough money to stay solvent should give pause to those who see generative AI as a replacement for junior developers. As the experience of the unfortunate Microsoft staffers having to deal with the GitHub Copilot Agent shows, the technology still has some way to go. One day it might surpass a mediocre intern able to work a search engine, but that day is not today.” That is perhaps merely part of the problem. The “the technology still has some way to go” is astute and to the point, but it is not the larger problem. It reminded me of the old market research setting: take a bucket of data and let MANOVA sort it out. The idea that a layman can sort it out is hilarious. Over the last half a century I have met less than a dozen people who knew what they were doing there. These people are extremely rare. So whenever I heard a student tell me that they had a good solution with MANOVA, my eyes were tearing with howls of derisive laughter. And now we see a similar setting. But the larger setting is not merely the coded setting of DML and LLM. It is the stage where data is either not verified or verified in the most shallow of situations. And now consider that stage with a $500 billion solution. Data is everything there and verification is one part of that key, a key too many are setting aside because it is not sexy enough.
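The ‘bucket of data’ problem is easy to show. Here is a minimal sketch with synthetic data (my own illustration; the missingness rate is made up) of what listwise deletion, the default in many MANOVA-style procedures, does to a sample:

```python
import random

random.seed(42)

# Toy 'bucket of data': 200 respondents, 6 variables, each value
# independently missing 15% of the time (a mild, realistic rate)
n_rows, n_vars, p_missing = 200, 6, 0.15
rows = [[None if random.random() < p_missing else random.gauss(0, 1)
         for _ in range(n_vars)] for _ in range(n_rows)]

# Listwise deletion: a row with ANY missing cell is dropped entirely
complete = [r for r in rows if all(v is not None for v in r)]

print(f"rows fed in:       {n_rows}")
print(f"complete cases:    {len(complete)}")
print(f"expected survival: {(1 - p_missing) ** n_vars:.0%}")  # ~38%
```

Well over half the bucket simply vanishes before the analysis even starts, and the layman running it will never notice.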

And now we get to the investors who are in “Fear Of Missing Out”; for them I have a consolation prize. You see, RigZone gave me (at https://www.rigzone.com/news/adnoc_suppliers_pledge_817mm_investment_for_uae_manufacturing-27-may-2025-180646-article/) hours ago ‘ADNOC Suppliers Pledge $817MM Investment for UAE Manufacturing’, and as I see it oil is a near certainty of achieving ROI. And as everyone is chasing the AI dream (which of course does not exist yet), those greedy, hungry money people are looking away from the certainty piggybank (as I personally see it), and that kind of investment in manufacturing will bring products, sellable products, and in the petrochemical industry that is like butter with the fish: a near certainty on investment. I prefer the expression ‘near certainty’ as there is always some risk, yet as I see it ARAMCO and ADNOC are setting the bar of achievement high enough to get that done. And as I see it, “ADNOC said the facilities are situated throughout the Industrial City of Abu Dhabi (ICAD), Khalifa Economic Zones Abu Dhabi (KEZAD), Dubai Industrial Park, Jebel Ali Free Zone (JAFZA), Sharjah Airport International Free Zone (SAIF Zone), and Umm Al Quwain. They will generate over 3,500 high-skilled jobs in the private sector and produce a diverse array of industrial goods such as pressure vessels, pipe coatings, and fasteners.” As such the only danger is that ADNOC will not be able to fill the positions, and that is at present the easiest score to settle.

So as we see the call for investors coming from the sound of a dozen bugles, remember the old premise that a call from a setting that works beats the golden horns that some promise; the investors will need another setting (or so I figure). And in the end, the larger question is why Builder.ai was backed in the first place. Microsoft has a setting with OpenAI, and as one source gives me “Microsoft and OpenAI have a significant partnership, where Microsoft is a major investor and supports OpenAI’s advancements, and OpenAI provides access to powerful language models through Microsoft’s Azure platform. This partnership enables Azure OpenAI Service, which provides access to OpenAI’s models for businesses, and it also includes a revenue-sharing agreement.” I cannot vouch for the source, but the idea is: when this is going on, why go to it with Builder.ai? And was Builder.ai vetted? The entire setting is raising more questions than I normally would have (sellers have their own agenda, and including Microsoft in this is ‘to them’ a normal setting). I do not oppose that, but when we see this interaction, I wonder how dangerous that Stargate will be, and $500,000,000,000 ain’t hay.

And going back to ADNOC we see “ADNOC’s commercial agreements under the In-Country Value (ICV) program have enabled facilities that allow businesses to benefit from diverse commercial opportunities, the company said. The ICV program aims to manufacture AED90 billion ($24.5 billion) worth of products locally in its procurement pipeline by 2030.” More impressive is the quote “ADNOC’s ICV program has contributed AED242 billion ($65.8 billion) to the UAE economy and created 17,000 jobs for UAE nationals since 2018, according to the company.” You see, such a move makes sense, as the UAE produces 3.22 million barrels per day; that has been achieved from 2024 onward, and some say that they exceeded their quota (by how much is unknown to me). But that makes sense as an investment; the entire fictive AI setting does not, and ever since the Builder.ai setting it makes a lot less sense, if only for the simple reason that no one can clearly state where that billion plus went. Oh, and how many investments collapsed, and who were those investors? Simple questions really.

Have a great day and try not to chase too many Edsels with your investment portfolio.


Filed under Finance, IT, Media, Science

And the bubble said ‘Bang’

This is what we usually see, or at times hear as well. Now, I am not an AI expert, not even a journeyman in the ways of AI. But the father of AI, Alan Turing, set the foundation of AI in the ’50s, half a century before we were able to get a handle on this. He was that good. Oh, and in case you forgot what he looks like, he has been immortalised on the £50 note.

And as such I feel certain that there is no AI (at present), and now this bubble comes banging on the doors of big tech as they just lost a trillion dollars in market value. Are you interested in seeing what that looks like? Well, see below and scratch the back of your heads.

We start with Business Insider (at https://markets.businessinsider.com/news/stocks/tech-stock-sell-off-deepseek-ai-chatgpt-china-nvidia-chips-2025-1) where we are given ‘DeepSeek tech wipeout erases more than $1 trillion in market cap as AI panic grips Wall Street’, and I find it slightly hilarious as we see “AI panic”; you see, bubbles have that effect on markets. This takes me back to 2012, when the Australian Telstra had no recourse at that point to let the waves of 4G work for them (they had 3.5G at best), so what did they do? They called the product 4G; problem solved. I think they took some damage over time, but they prevented others taking the lead as they were lagging to some extent. Here, in this case, we are given “US stocks plummeted on Monday as traders fled the tech sector and erased more than $1 trillion in market cap amid panic over a new artificial intelligence app from a Chinese startup.” Now let me be clear: there is no AI. Not in America and not in China. What both do have is Deeper Machine Learning and LLMs, and these parts would in the end be part of a real AI, just not the primary part (see my earlier works). What has happened (me being speculative) is that China had an innovative idea of Deeper Machine Learning and packaged this innovatively with LLM modules so that the end result would be a much more efficient system.
The Economic Times (at https://economictimes.indiatimes.com/markets/stocks/news/worlds-richest-people-lose-108-billion-after-deepseek-selloff/articleshow/117615451.cms) gives us ‘World’s richest people lose $108 billion after DeepSeek selloff’. What is more pertinent is “DeepSeek’s dark-horse entry into the AI race, which it says cost just $5.6 million to develop, is a challenge to Silicon Valley’s narrative that massive capital spending is essential to developing the strongest models.” So all these ‘vendors’, and especially President Trump, now face “Emergence of cheaper Chinese rival has wiped $1tn off the value of leading US tech companies” (source: the Guardian). And with the Stargate investment on the mark for about 500 billion dollars, it comes as a lightning strike. I wonder what the world makes of this. In all honesty I do not know what to believe, and with the setting of DeepSeek the game will change. In the first place there are dozens of programmers who need to figure out how the cost cutting was possible. Then there is the setting of what DeepSeek can actually do, and here is the kicker: DeepSeek is free, as such there will be a lot of people digging into that. What I wonder is what data is being collected by the Chinese artificial intelligence company Hangzhou DeepSeek Artificial Intelligence Co., Ltd. That would be my take on the matter. When something is too cheap to be true, you better believe that there is a snag in the road making you look precisely in the wrong direction. I admit it is the cynic in me speaking, but in the stage where they made a solution for 6 million (not Lee Majors) against ChatGPT coming in at 100 million, the difference is just too big and I don’t like the difference. I know I might be all wrong here, but that is the initial take I have on the matter.

If it all works out, there is a massive change in the so-called AI field. A Chinese party basically sunk the American opposition. In other news, there is possibly reason to giggle here. You see, Microsoft invested nearly $14 billion in OpenAI, and that was merely months ago, and now we see that someone else did it at 43% of the investment. And after all the hassles they had (Xbox) they shouldn’t be spending recklessly; I get it, they merely all had that price picture, and now we see another Chinese firm playing the super innovator. It is making me giggle. In opposition to this, we see all kinds of players (Google, IBM, Meta, Oracle, Palantir) playing a similar game of what some call AI, and they have set the bar really high. As such I wonder how they will continue the game if it turns out that DeepSeek really is the ‘bomb’ of Deeper Machine Learning. I reckon there will be a few interesting weeks coming up.

Have fun, I need to lie still for 6 hours until breakfast (my life sucks).


Filed under Finance, IT, Media, Politics, Science

A changed setting

That is where I found myself a few days ago: the realisation that things weren’t what they were supposed to be. Now, it is not really new. Settings change, but for the most part it is up to the makers to herald a certain stage of doing business. This is a strange telling, because I believe in the RoboCop setting that Kurtwood Smith handed to us, “Good business is where you find it”, and for the most part I believe this is true. The stage was handed to us by Satya Nadella when on December 26th 2024 he gave us “the era of SaaS as we know it is coming to an end, giving way to integrated platforms where AI becomes the central driver. This transformation is poised to disrupt traditional tools and workflows, paving the way for a new generation of applications.” Not only do I not believe him at present, he is paving the way for people to set doubt in place and push them all towards Azure (I’ll get to this later). Still, this is a weird statement from Microsoft when we got, on July 22nd 2024, ‘Microsoft joins forces with Austrade to help its Australian SaaS partners go global’ (at https://news.microsoft.com/en-au/features/microsoft-joins-forces-with-austrade-to-help-its-australian-saas-partners-go-global/); it seems like a strange setting. And with the statement “Microsoft has today announced a new program in collaboration with the Australian Trade and Investment Commission (Austrade) to help local partners that offer software-as-a-service (SaaS) solutions accelerate their international growth” it almost sounds like the Asian joke “Two Wongs don’t make a Write” (or something like that).

You see, as I personally see it, Microsoft is in trouble. It hatched its eggs too widely and too many of them are not paying off. There are only so many losses you can book and not take a massive hit. And as long as people are ‘dependent’ on Microsoft, Nadella can sing whatever he wants. And that is where the shoe becomes a tight fit (and not in a good way). There is a cluster of people reposting, optionally with their ‘own’ insights, on why it is such a stellar move. But there are issues. You see, the first is that SaaS is a good solution for a lot of people, but as the Indian indie developers are gaining in that field, Microsoft needs to haul itself exceedingly into another field where it is just them and their ‘agents’. And Microsoft will get a percentage for EVERY deployment we face.

The second setting is that SaaS goes together with IaaS and PaaS, but with the Microsoft setup all PaaS becomes Azure. It was the Microsoft solution to get away from the statement “It is very possible to link single services of IaaS, PaaS and SaaS on 3 different cloud providers.” We got this answer three years ago and that setting never worked for Microsoft. You see, Microsoft wants it all. They failed too many times (in several fields). They need it all to survive, and if enough are connected, Microsoft (as I see it) prevents collapse. As I see it, AWS (Amazon) and Oracle’s Platform as a Service are vastly superior to Microsoft’s. As such Microsoft is dwindled down to size and they do not like it. I also think that Google’s PaaS is better than Microsoft’s, but that is a personal view more than an evidence-driven one. As such Microsoft needs to change speed, and I reckon that the impending death proclamation of Software as a Service was Microsoft’s way to go, and that is what Satya Nadella went with. The issue in this is an additional stage. In the 5 days of Christmas it is all that LinkedIn went with. I was torpedoed with these ‘news casts’ and opinionated settings from hundreds of sources (not only on LinkedIn), and these millennial sales screw-ups all wanted a piece of that pie. They wanted it all whilst the getting was good, and it was Christmas, wasn’t it?

It is at this point that I wonder what Huawei has in store with their cloud solutions. It is the media appeasement of Microsoft that makes me wonder what the ‘enemy’ will bring us, and that is where the setting stalls. The attack on our senses is almost infinite, and some are deciding where we are able (or allowed) to look. And we are all in the setting that we want to know where we can go, and places like LinkedIn will not give us the full news, making them propaganda channels for people like Microsoft. So when will we get the real deal on how to avoid Microsoft? I wonder what Oracle and/or AWS will bring to the table; them and Google would make a good replacement for Microsoft. But will we see that given to us, or is the influencer scene of Microsoft drowning it all out?

I cannot say for sure because the others are seemingly staying silent. Have a great day you all.


Filed under Finance, IT, Media, Science

Reengineering an old solution

I was bending my mind over backwards to stay creative. And as I was mulling over something I read a year ago, my mind started to race towards an optional solution. You see, the idea is not novel, but it has been forgotten. So if Tandon never renewed their patent, you get the exclusive option to rule there. If they have, you could file for an innovation patent, still giving you a decent payment for your trouble.

Going back 34 years
Yes, it was the height of the IT innovation time, and that age had plenty of failures, but it also had decent blockbusters, and whilst they all wanted to rule the world, they clamped down on their IP innovations. Tandon was one of those.

As you can see in this image, the drives (both of them) look like space hogs; it was the age of Seagate with their 20MB or 30MB drives. The nice part was that these drives could be ejected. It was a novel idea where the CFO could put the drive with the books in the vault.

Why is this an issue?
Well, last year I saw an article that cloud intrusions were up by well over 70%. To see this we need to look (at https://www.cybersecuritydive.com/news/cloud-intrusions-spike-crowdstrike/708315/) where we see ‘Cloud intrusions spiked 75% in 2023, CrowdStrike says’; it comes with the text “Organisations with weak cloud security controls and gaps in cross-domain visibility are getting outmanoeuvred by threat actors and struck by intrusions”. And this is not all. Captains of industry lacking IT knowledge will happily accept that free 1TB USB drive at a trade show, not realising that it also creates a backdoor on their servers. They shouldn’t be too upset, it happened to a few people at the Pentagon as well (and they are supposed to know what they are doing). So the cloud is a failing setting of security. Consider that, as well as Samsung staff putting their stuff online because they didn’t realise how OpenAI handles what you feed it. Just a few examples. So what is to stop their research or revenue results from being placed on a drive like in the pre-cloud days?

You think I would put my IP in the cloud? Actually I did, but I have a rather nasty defence system based on a trick I learned in 1988, and no one has a clue where to look (and I never put it with the usual suspects). But this is me, and I will not give you that trick, because all kinds of people read my blog.

So back to Tandon. Instead of this big drive, consider a normal drive space, and instead of that big box, consider a tray with enough space to fit an SSD, with the connector inside the tray going to a plug on the outside of the tray, and a simple kit that can be purchased if more than one drive is used. Now see the Tandon solution as it could be: an ejectable drive solution for many. Yes, you can connect just a wire and use an external SSD, but it becomes messy and these wires can also malfunction. There is even the option of adding AES-256 on the drive side, so even if they steal the drive (optionally with the computer), the thieves lose out, as a dongle could be required. It merely depends on how secure you want the data to be. A CFO might rely on his safe for the books. An IP research post might need more security. So consider whether you want to be the next victim staged in that 75%, or whether you need your data to be secure.
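To sketch the dongle part (and only that part; this is my own illustration, and the actual AES-256 encryption would come from a vetted crypto library, which I deliberately leave out), the drive’s key can be derived from both a passphrase and a secret that lives only on the dongle:

```python
import hashlib
import hmac
import os

# Sketch of the dongle idea: the encryption key only exists when BOTH the
# user passphrase AND the dongle secret are present.

def derive_key(passphrase: bytes, dongle_secret: bytes, salt: bytes) -> bytes:
    # 256-bit key via PBKDF2-HMAC-SHA256 over passphrase + dongle secret
    return hashlib.pbkdf2_hmac("sha256", passphrase + dongle_secret,
                               salt, 200_000, dklen=32)

salt = os.urandom(16)
dongle = os.urandom(32)          # stored on the dongle only, never the drive
key = derive_key(b"cfo passphrase", dongle, salt)

# A key-check value stored on the drive lets it verify the dongle is present
check = hmac.new(key, b"key-check", hashlib.sha256).digest()

# A thief holds the drive (salt + check value) but not the dongle:
wrong = derive_key(b"cfo passphrase", os.urandom(32), salt)
unlock_ok = hmac.compare_digest(
    check, hmac.new(key, b"key-check", hashlib.sha256).digest())
thief_ok = hmac.compare_digest(
    check, hmac.new(wrong, b"key-check", hashlib.sha256).digest())
print(f"dongle present: {unlock_ok}, dongle absent: {thief_ok}")
```

Steal the drive and you hold the salt and the check value, but without the dongle secret the key cannot be rebuilt; that is the whole point of the setting.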

So whoever takes the idea and reengineers it (with optional extras), you are welcome, and have a nice day. I just completed 12.5% of Monday; time to snore like a lumberjack.


Filed under IT, Science

Not changing sides

It was a setting I found myself in. You see, there is nothing wrong with bashing Microsoft. The question at times is how long until the bashing is no longer a civic duty, but personal pleasure. As such I started reading the article (at https://www.cbc.ca/news/business/new-york-times-openai-lawsuit-copyright-1.70697010) where we see ‘New York Times sues OpenAI, Microsoft for copyright infringement’, and it is there where we are given a few parts. The first that caught my eye was ““Defendants seek to free-ride on the Times’s massive investment in its journalism by using it to build substitutive products without permission or payment,” according to the complaint filed Wednesday in Manhattan Federal Court.” To see why I am (to some extent) siding with Microsoft on this: a newspaper only has value until it is printed. At that point it becomes public domain, as I see it. Now, the paper has a case when you consider the situation where someone is copying THEIR result for personal gain. Yet this is not the case here. They are teaching a machine learning model to create new work. Consider that this is not an easy part. First the machine needs to learn ALL the articles that a certain writer has written. So not all the articles of the New York Times, but separately the articles from every writer. Now we could (operative word) get to a setting where something alike is created on new properties, on events that are the now. So that is no longer a copy; that is an original created article in the style of a certain writer.

As such we see the delusional statement from the New York Times giving us “The Times is not seeking a specific amount of damages, but said it believes OpenAI and Microsoft have caused “billions of dollars” in damages by illegally copying and using its works.” Delusional for valuing itself at billions of dollars whilst their revenue was a lot less than a billion dollars. Then there is the other setting. Is learning from the public domain a crime? Even if it includes the articles of tomorrow, is it a crime then? You see, the law is not ready for machine learning algorithms. It isn’t even ready for the concept of machine learning at present.

Now, this doesn’t apply to everything. Newspapers are the vocalisation of fact (or at least they used to be). The issue of skating towards design patents is a whole other mess.

As such OpenAI and Microsoft are facing an uphill battle, yet in the case of the New York Times, and perhaps the Washington Post and the Guardian, I am not so sure. You see, as I see it, it hangs on one simple setting: is a published newspaper to be regarded as public domain? The paper is owned, as such these articles cannot be resold, but there is the grinding cog. It was never used as such. It was a learning model to create new original work and that is a setting newspapers were never ready for. None of these media laws will give coverage on that setting. This is probably why the NY Times is crying foul by the billions. 

The law in these settings is complex, but overall, as a learning model, I do not believe the NY Times has a case. And I could be wrong. My setting is that published articles become public domain to some degree. At worst OpenAI (Microsoft too) would need to own one copy of every newspaper used, but that is as far as I can go. 

The danger here is not merely that this is done, it is that the material is “often taken from the internet” and this becomes an exercise in ‘trust but verify’. There is so much fake and edited material on the internet. One slip-up and the machine learning routines fail. So we see not merely the writer. We see writer, publication, time of release, path of release, connected issues, connected articles; all these elements hurt the machine learning algorithm. One slip-up and it is back to the drawing board, often teaching the system from scratch.

And all that is before we consider that editors also change stories and adjust for length, as such it is a slightly bigger mess than you consider from the start. To see that we need to return to June this year when we were given “The FTC is demanding documents from Open AI, ChatGPT’s creator, about data security and whether its chatbot generates false information.” If we consider the impact we need to realise that the chatbot does not generate false information out of nothing; it was handed wrong and false information from the start and the model merely did what it was given. That is the danger: operators and programmers not properly vetting information.
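As a minimal sketch of what ‘properly vetting information’ before it reaches a learning model could look like: the `Article` record, the allowlist and the checks below are all illustrative assumptions, not taken from any real training pipeline.

```python
from dataclasses import dataclass
from hashlib import sha256

# Hypothetical record for one scraped article; the field names are illustrative.
@dataclass
class Article:
    source: str
    published: str  # ISO date string, e.g. "2023-11-20"
    text: str

# Illustrative allowlist of sources whose provenance we accept.
TRUSTED_SOURCES = {"nytimes.com", "theguardian.com"}

def vet(articles):
    """Return only articles that pass basic provenance and duplicate checks."""
    seen, clean = set(), []
    for article in articles:
        digest = sha256(article.text.encode("utf-8")).hexdigest()
        if article.source not in TRUSTED_SOURCES:
            continue  # unknown provenance: 'trust but verify' fails at trust
        if digest in seen:
            continue  # verbatim duplicate: would skew the learning model
        if not article.text.strip():
            continue  # empty body, nothing to learn from
        seen.add(digest)
        clean.append(article)
    return clean
```

A real corpus check would go much further (edit history, fact-checking, timestamps), but even this level strips out the verbatim duplicates and unknown sources the paragraph above warns about.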

Almost the end of the year, enjoy.

Leave a comment

Filed under IT, Law, Media, Science

Presentations by media jokes

It happens at times. Whilst we think that corporations are playing us, we are all being played by the media. The media and corporations, hand in hand, deceiving us all for a simple percentage. That is the feeling I have had plenty of times, but this one (my speculated view) is just too opportune to ignore. So let’s show you what I have and you can decide for yourself.

Part one
The first part is the story we have seen over the last 2-3 days. This version (at https://www.forbes.com/sites/alexkonrad/2023/11/20/sam-altman-will-not-return-as-ceo-of-openai/) is used as the other version I wanted to use (AFR) is behind a paywall. We see here ‘Sam Altman Will Not Return As CEO Of OpenAI’ with the added text “Supporters of Altman led by Microsoft and including investors and key employees had pressured OpenAI’s board of directors to take back Altman, or face the widespread resignation of OpenAI’s researchers and withdrawal of Microsoft’s support”. At this point three questions come to mind but I will hold off until a little later, it makes things a lot clearer. As such we see one corporation ‘cleaning’ its management setting, but ponder on those settings a little longer.

Part two
The second part came hours later, but now we have a very strong defining place with ‘Microsoft hires former OpenAI CEO Sam Altman’ (at https://www.theguardian.com/technology/2023/nov/20/sam-altman-openai-ceo-wont-return-chatgpt-talks-fail-emmett-shear-twitch) with the added “Microsoft has hired Sam Altman as head of a new advanced artificial intelligence team after attempts to reinstate him as chief executive of OpenAI failed.” At this point a few questions should emerge, but we are about to go into that part. 

Part three
This comes when we consider “At the end of a dramatic weekend of boardroom drama, the non-profit board of the San Francisco-based OpenAI has installed Emmett Shear, the co-founder of video streaming site Twitch, as the company’s third CEO in three days”.

Part four
The questions that should come to mind are:

  1. Why would OpenAI ruffle feathers when it is on a high in several directions?
  2. Does Sam Altman not have a non-compete clause?
  3. So, who is Emmett Shear and what is his expertise in presumed AI?

These three questions should have been on the mind of ALL media. OpenAI is on a high note, on a hyped route towards whatever they present. But none of them asked; I checked a dozen articles and they ALL overlooked the issues here, so when does the media ‘overlook’ issues? We see all the emotional articles about staff resigning, about ‘demands’ in a stage where they (for now) have the upper hand. Oh, and on a sideline, when you have such hyped IP, which corporation of this size does not have non-compete clauses in play? 

That is beside the point on WHO became the replacement.

Part five
This is the kicker, this is the coup de grâce of the entire equation. It is seen with Microsoft hiring Sam Altman. Microsoft now has a larger stake in a solution it wanted all along and through this media drama it now gets it a lot cheaper. So why would any player, in this case OpenAI, shoot itself in the foot to this degree? We now see ‘Weekend of OpenAI drama ends in a Microsoft coup’, ‘Microsoft Emerges as the Winner in OpenAI Chaos’ and ‘OpenAI’s leadership moves to Microsoft, propelling its stock up’; yes, presentations by the media. The media used as the bitch of Microsoft and it shows through questions that were clearly out in the open. Microsoft stock up and OpenAI becomes part of Microsoft for billions less. One could say (and I would not disagree) that this was a lovely play to reduce billions in tax payments and the media let it happen. All solutions that were clearly on the papers wherever you looked, when you decided to seek the right answers. As I personally see it, the media is simply the bitch of corporations and they all let it happen, all pushing the tax offices down the river in a canoe without a paddle. Well played, Microsoft.

So consider what played out over a weekend, consider what any corporation would do to protect its multi-billion dollar value. I think that OpenAI was part of this stage from the very beginning, but that is my speculated view.

Enjoy your Monday, it’s Tuesday here.

Leave a comment

Filed under Finance, IT, Media

Eric Winter is a god

Yup, we are going there. It might not be correct, but that is where the evidence is leading us. You see, I got hooked on The Rookie and watched seasons one through four in a week. Yet the name Eric Winter was bugging me and I did not know why. The reason was simple. He also starred in the PS4 game ‘Beyond: Two Souls’, which I played in 2013. I liked that game and his name stuck somehow. Yet when I looked for his name I got

This got me curious; two of the movies I had seen and Eric would have been too young to be in them, and there is the evidence, presented by Google. Eric Winter, born on July 17th 1976, played alongside Barbra Streisand 4 years before he was born; evidence of godhood. 

And when we look at the character list, there he is. 

Yet when we look at a real movie reference like IMDB.com we will get 

Yes, that is the real person who was in the movie. We can write this up as a simple error, but that is not the path we are treading. You see, people are all about AI and ChatGPT, but the real part is that AI does not exist (not yet anyway). This is machine learning and deeper machine learning and it is prone to HUMAN error. If there is only a 1% error rate and we are looking at about 500,000 movies made, that implies that the movie reference alone will contain 5,000 errors. Now consider this on data of all kinds and you might start to see the picture take shape. When it comes to financial data and your advisor is not Sam Bankman-Fried, but Samual Brokeman-Fries (a fast-food employee), how secure are your funds then? To be honest, whenever I see some AI reference I get a little pissed off. AI does not exist and it was called into existence by salespeople too cheap and too lazy to do their job and explain Deeper Machine Learning to people (my view on the matter) and things do not end here. One source gives us “The primary problem is that while the answers that ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce,” another source gives us issues with capacity, plagiarism and cheating, racism, sexism, and bias, as well as accuracy problems and the shady way it was trained. That is the kicker. An AI would not need to be trained this way; it would compare the actor’s date of birth with the release of the movie, making The Changeling and What’s Up, Doc? fall into the net of inaccuracy. This is not happening and the people behind ChatGPT are happy to point at you for handing them inaccurate data, but that is the point of an AI and its shallow circuits: to find the inaccuracies and determine the proper result (like a movie list without these two mentions). 
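The date-of-birth comparison described above takes only a few lines. A minimal sketch: the film list mirrors the Google error discussed here, and the age threshold is my own illustrative assumption (The Changeling postdates his birth, so a simple release-before-birth test alone would not catch a four-year-old lead).

```python
BIRTH_YEAR = 1976          # Eric Winter's year of birth
MIN_PLAUSIBLE_AGE = 10     # assumption: younger than this flags the credit

# Illustrative filmography, including the two wrongly attributed titles.
filmography = [
    ("What's Up, Doc?", 1972),    # release predates his birth entirely
    ("The Changeling", 1980),     # he would have been four years old
    ("Beyond: Two Souls", 2013),  # plausible credit
]

def implausible_credits(credits, birth_year, min_age=MIN_PLAUSIBLE_AGE):
    """Return titles whose release year implies an implausibly young actor."""
    return [title for title, year in credits if year - birth_year < min_age]

# → ["What's Up, Doc?", "The Changeling"]
print(implausible_credits(filmography, BIRTH_YEAR))
```

This is exactly the kind of shallow-circuit sanity check the paragraph argues a real AI would run before publishing a filmography.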

And now we get the source Digital Trends (at https://www.digitaltrends.com/computing/the-6-biggest-problems-with-chatgpt-right-now/) who gave us “ChatGPT is based on a constantly learning algorithm that not only scrapes information from the internet but also gathers corrections based on user interaction. However, a Time investigative report uncovered that OpenAI utilised a team in Kenya in order to train the chatbot against disturbing content, including child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest. According to the report, OpenAI worked with the San Francisco firm, Sama, which outsourced the task to its four-person team in Kenya to label various content as offensive. For their efforts, the employees were paid $2 per hour.” I have done data cleaning for years and I can tell you that I cost a lot more than $2 per hour. Accuracy and cutting costs; give me one real stage where that combination actually worked. Now, the error at Google was a funny one, like the stage of Melissa O’Neil, a real Canadian, telling Eric Winter that she had feelings for him (punking him in an awesome way). We can see that this is a simple error, but these are the errors that places like ChatGPT are facing too, and as such the people employing systems like ChatGPT, especially as Microsoft is staging this in Azure (it already seems to be), will get themselves into a massive amount of trouble. It might be speculative, but consider the evidence out there. Consider the errors that you face on a regular basis and consider how highly paid accountants and marketeers lose their jobs over rounding errors. You really want to rely on a $2 per hour person to keep your data clean? For this merely look at the ABC article of June 9th 2023 where we were given ‘Lawyers in the United States blame ChatGPT for tricking them into citing fake court cases’. Accuracy anyone? 
Consider that: the lawyers cited court cases presented as real, but in reality they were court cases invented by the artificial intelligence-powered chatbot. 

In the end I liked my version better: Eric Winter is a god. Equally not as accurate as reality, but more easily swallowed by all who read it; it is the funny event that gets you through the week. 

Have a fun day.

2 Comments

Filed under Finance, IT, Science

And the lesson is?

That is at times the issue and it does at times get help from people, managers mainly, who believe that the need for speed rectifies everything, which of course is delusional to say the least. So, last week there was a news flash speeding across my retinas and I initially ignored it, mainly because it was Samsung and we do not get along. But then Tom’s Guide (at https://www.tomsguide.com/news/samsung-accidentally-leaked-its-secrets-to-chatgpt-three-times) came along and I took a closer look. The headline ‘Samsung accidentally leaked its secrets to ChatGPT — three times!’ was decently satisfying. The rest, “Samsung is impressed by ChatGPT but the Korean hardware giant trusted the chatbot with much more important information than the average user and has now been burned three times”, seemed icing on the cake, but I took another look at the information. You see, to all, ChatGPT is seen as an artificial-intelligence (AI) chatbot developed by OpenAI. But I think it is something else. You see, AI does not exist, as such I see it as an ‘Intuitive advanced Deeper Learning Machine response system’. This is not me dissing OpenAI; this system, when it works, is what some would call the bee’s knees (and I would be agreeing), but it is data driven and that is where the issues become slightly overbearing. First you need to learn and test the responses on the data offered. It seems to me that this is where speed-driven Samsung went wrong. And Tom’s Guide partially agrees by giving us “unless users explicitly opt out, it uses their prompts to train its models. The chatbot’s owner OpenAI urges users not to share secret information with ChatGPT in conversations as it’s “not able to delete specific prompts from your history.” The only way to get rid of personally identifying information on ChatGPT is to delete your account — a process that can take up to four weeks” and this response gives me another thought. 
Whoever owns OpenAI is setting a data driven stage where data could optionally be captured. More importantly, the NSA and likewise tailored organisations (DGSE, DCD et al) could find the logistics of these accounts, hack the cloud and end up with terabytes of data, if not petabytes, and here we see the first failing and it is not a small one. Samsung has been driving innovation for the better part of a decade and as such all that data could be of immense value to both Russia and China, and do not for one moment think that they are not all over the stage of trying to hack those cloud locations. 

Of course that is speculation on my side, but that is what most would do and we don’t need an egg timer to await actions on that front. The final quote that matters is “after learning about the security slip-ups, Samsung attempted to limit the extent of future faux pas by restricting the length of employees’ ChatGPT prompts to a kilobyte, or 1024 characters of text. The company is also said to be investigating the three employees in question and building its own chatbot to prevent similar mishaps. Engadget has contacted Samsung for comment” and it might be merely three employees. Yet in that case the party line failed, management oversight failed and Common Cyber Sense was nowhere to be seen. As such there is a failing and I am fairly certain that these transgressions go way beyond Samsung, how far? No one can tell. 
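Samsung’s reported fix — capping prompts at a kilobyte — can be paired with a crude scan for confidential material before anything leaves the building. A minimal sketch under stated assumptions: the secret patterns below are illustrative and nowhere near complete, and nothing here comes from Samsung’s actual tooling.

```python
import re

MAX_PROMPT_BYTES = 1024  # the limit Samsung reportedly imposed

# Illustrative patterns for obvious secrets; a real deployment needs far more.
SECRET_PATTERNS = [
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    re.compile(r"(?i)\b(api[_-]?key|password|secret)\s*[:=]\s*\S+"),
]

def gate_prompt(prompt: str) -> str:
    """Reject prompts that are too long or appear to contain secrets."""
    if len(prompt.encode("utf-8")) > MAX_PROMPT_BYTES:
        raise ValueError("prompt exceeds the 1024-byte limit")
    for pattern in SECRET_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("prompt appears to contain confidential material")
    return prompt
```

A gate like this is the Common Cyber Sense layer that was missing: it does not make the chatbot safe, it merely stops the most obvious leaks before they become training data in someone else’s cloud.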

Yet one thing is certain. Anyone racing to the ChatGPT tally will take shortcuts to get there first and as such companies will need to assure themselves that proper mechanics, checks and balances are in place. The fact that deleting an account takes four weeks implies that this is not a simple cloud setting and as such whoever gets access to it will end up with a lot more than they bargained for.

I see it as a lesson for all those who want to be at the starting signal of new technology on day one, all whilst most of the company has no idea what the technology involves and what was set to a larger stage like the cloud, especially when you consider (one source) “45% of breaches are cloud-based. According to a recent survey, 80% of companies have experienced at least one cloud security incident in the last year, and 27% of organisations have experienced a public cloud security incident—up 10% from last year” and in that situation you are willing to set your data, your information and your business intelligence to a cloud account? Brave, stupid but brave.

Enjoy the day

Leave a comment

Filed under IT, Science