Tag Archives: OpenAI

Reengineering an old solution

I was bending my mind over backwards to stay creative. And as I was mulling over something I read a year ago, my mind started to race towards an alternative solution. You see, the idea is not novel, it has simply been forgotten. So if Tandon never renewed their patent, you get the exclusive option to rule there. If they have, you could file for an innovation patent, which would still give you a decent payment for your trouble.

Going back 34 years
Yes, it was the height of the IT innovation era, and that age had plenty of failures, but it also had decent blockbusters, and whilst they all wanted to rule the world, they clamped down on their IP innovations. Tandon was one of those.

As you can see in this image, the drives (both of them) look like space hoarders; it was the age of Seagate with their 20MB or 30MB drives. The nice part was that these drives could be ejected. It was a novel idea where the CFO could put his drive with the books in the vault.

Why is this an issue?
Well, last year I saw an article stating that cloud intrusions had spiked sharply. To see this we need to look (at https://www.cybersecuritydive.com/news/cloud-intrusions-spike-crowdstrike/708315/) where we see ‘Cloud intrusions spiked 75% in 2023, CrowdStrike says’, and it comes with the text “Organisations with weak cloud security controls and gaps in cross-domain visibility are getting outmanoeuvred by threat actors and struck by intrusions”. And this is not all. Captains of industry lacking IT knowledge will happily accept that free 1TB USB drive at a trade show, not realising that it also creates a backdoor on their servers. They shouldn’t be too upset, it happened to a few people at the Pentagon as well (and they are supposed to know what they are doing). So the cloud is a failing setting of security. Consider that, as well as Samsung putting their stuff online because they didn’t realise how to operate OpenAI. Just a few examples. So what is to stop their research or revenue results from being placed on a drive, like in the pre-cloud days?

You think I would put my IP in the cloud? Actually I did, but I have a rather nasty defence system that is a repeated action I learned in 1988 and no one has a clue where to look (and I never put it with the usual suspects), but this is me and I will not give you that trick because all kinds of people read my blog. 

So back to Tandon. Instead of this big drive, consider a normal drive space, and instead of that big box, consider a tray with enough space to fit an SSD, with the connector inside the tray going to a plug on the outside of the tray, and a simple kit that can be purchased if more than one drive is used. Now see the Tandon solution as it could be: an ejectable drive solution for many. Yes, you can connect just a wire and use an external SSD, but it becomes messy and these wires can also malfunction. There is even the option of adding AES-256 encryption in the drive, so even if thieves steal the drive (optionally with the computer) they lose out, as a dongle could be required. It merely depends on how secure you want the data to be. A CFO might rely on his safe for the books. An IP research post might need more security. So consider whether you want to be the optional victim staged in that 75%, or whether you need your data to be secure.
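The dongle idea above can be sketched in a few lines. This is a minimal illustration of the key-handling part only, assuming the dongle holds a secret and the drive stores a public salt; the actual AES-256 encryption would be done by the drive controller or a proper crypto library, and all names here are my own for illustration.

```python
import hashlib

def derive_drive_key(dongle_secret: bytes, salt: bytes) -> bytes:
    # Derive a 256-bit key from the secret held on the dongle.
    # Without the dongle, the key cannot be reconstructed from
    # the stolen drive alone; the salt on the drive is public.
    return hashlib.pbkdf2_hmac("sha256", dongle_secret, salt, 200_000, dklen=32)

salt = b"stored-on-the-drive"          # public, kept next to the encrypted data
key = derive_drive_key(b"secret-on-dongle", salt)
print(len(key))  # 32 bytes, i.e. a 256-bit key ready for AES-256
```

The design point is that the drive and the dongle are only valuable together, which is exactly why stealing the drive (or the whole computer) gets the thief nothing.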

So whoever takes the idea and reengineers it (with optional extras), you are welcome and have a nice day. I just completed 12.5% of Monday, time to snore like a lumberjack.

Leave a comment

Filed under IT, Science

Not changing sides

It was a setting I found myself in. You see, there is nothing wrong with bashing Microsoft. The question at times is how long until the bashing is no longer a civic duty, but personal pleasure. As such I started reading the article (at https://www.cbc.ca/news/business/new-york-times-openai-lawsuit-copyright-1.70697010) where we see ‘New York Times sues OpenAI, Microsoft for copyright infringement’, and it is there where we are given a few parts. The first that caught my eye was ““Defendants seek to free-ride on the Times’s massive investment in its journalism by using it to build substitutive products without permission or payment,” according to the complaint filed Wednesday in Manhattan Federal Court.” The reason why I am (to some extent) siding with Microsoft on this is that a newspaper only has value until it is printed. At that point it becomes public domain. Now, the paper has a case when you consider the situation where someone is copying THEIR result for personal gain. Yet that is not the case here. They are teaching a machine learning model to create new work. Consider that this is not an easy part. First the machine needs to learn ALL the articles that a certain writer has written. So not all the articles of the New York Times, but separately the articles from every writer. Now we could (operative word) get to a setting where something alike is created on new properties, events that are the now. So that is no longer a copy, that is an original article created in the style of a certain writer.

As such, consider the delusional statement from the New York Times giving us “The Times is not seeking a specific amount of damages, but said it believes OpenAI and Microsoft have caused “billions of dollars” in damages by illegally copying and using its works.” Delusional for valuing itself at billions of dollars whilst its annual revenue is nowhere near those billions. Then there is the other setting. Is learning from the public domain a crime? Even if it includes the articles of tomorrow, is it a crime then? You see, the law is not ready for machine learning algorithms. It isn’t even ready for the concept of machine learning at present.

Now, this doesn’t apply to everything. Newspapers are the vocalisations of fact (or at least used to be). The issues on skating towards design patents is a whole other mess. 

As such, OpenAI and Microsoft are facing an uphill battle, yet in the case of the New York Times, and perhaps the Washington Post and the Guardian, I am not so sure. You see, as I see it, it hangs on one simple setting: is a published newspaper to be regarded as public domain? The paper is owned, as such these articles cannot be resold, but there is the grinding cog. It was never used as such. It was used in a learning model to create new original work, and that is a setting newspapers were never ready for. None of these media laws will give coverage on that setting. This is probably why the NY Times is crying foul by the billions.

The law in these settings is complex, but overall, as a learning model, I do not believe the NY Times has a case. And I could be wrong. My setting is that published articles become public domain to some degree. At worst, OpenAI (Microsoft too) would need to own one copy of every newspaper used, but that is as far as I can go.

The danger here is not merely that this is done, it is that the material is “often taken from the internet”; this becomes an exercise in ‘trust but verify’. There is so much fake and edited material on the internet. One slip-up and the machine learning routines fail. So we see not merely the writer. We see writer, publication, time of release, path of release, connected issues, connected articles; all these elements can hurt the machine learning algorithm. One slip-up and it is back to the drawing board, teaching the system often from scratch.

And all that is before we consider that editors also change stories and adjust for length, as such it is a slightly bigger mess than you would consider from the start. To see that we need to return to June this year, when we were given “The FTC is demanding documents from Open AI, ChatGPT’s creator, about data security and whether its chatbot generates false information.” If we consider the impact, we need to realise that the chatbot does not generate false information out of nowhere; it was handed wrong and false information from the start, and the model merely did what it was given. That is the danger: the operators and programmers not properly vetting information.
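The ‘trust but verify’ vetting of writer, publication and time of release can be sketched as a simple provenance gate before anything reaches a learning run. This is only an illustration of the idea; the field names are my own assumptions, not anyone’s actual pipeline.

```python
# Reject any article missing provenance before it enters training.
# One unverifiable record can poison a learning run and force
# retraining, so incomplete records are dropped, not guessed at.
REQUIRED_FIELDS = ("writer", "publication", "published", "source_url")

def vet_article(article: dict) -> bool:
    return all(article.get(field) for field in REQUIRED_FIELDS)

good = {"writer": "A. Writer", "publication": "Example Times",
        "published": "2023-11-20", "source_url": "https://example.com/a"}
incomplete = {"writer": "A. Writer"}  # no publication, date or source

print(vet_article(good))        # True: full provenance, may be used
print(vet_article(incomplete))  # False: dropped before training
```

A real system would verify the values (does the URL resolve, does the byline match the publication’s archive), but even this trivial gate shows where the human vetting responsibility sits.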

Almost the end of the year, enjoy.

Leave a comment

Filed under IT, Law, Media, Science

Presentations by media jokes

It happens at times. Whilst we think that corporations are playing us, we are all being played by the media. The media and corporations, hand in hand, deceiving us all for a simple percentage. That is the feeling I have had plenty of times, but this one (my speculated view) is just too opportune to ignore. So let’s show you what I have and you can decide for yourself.

Part one
The first part is the story we have seen over the last 2-3 days. This version (at https://www.forbes.com/sites/alexkonrad/2023/11/20/sam-altman-will-not-return-as-ceo-of-openai/) is used as the other version I wanted to use (AFR) is behind a paywall. We see here ‘Sam Altman Will Not Return As CEO Of OpenAI’ with the added text “Supporters of Altman led by Microsoft and including investors and key employees had pressured OpenAI’s board of directors to take back Altman, or face the widespread resignation of OpenAI’s researchers and withdrawal of Microsoft’s support”. At this point three questions come to mind, but I will hold off until a little later; it makes things a lot clearer. As such we see one corporation ‘cleaning’ its management setting, but ponder on those settings a little longer.

Part two
The second part came hours later, but now we have a very strong defining place with ‘Microsoft hires former OpenAI CEO Sam Altman’ (at https://www.theguardian.com/technology/2023/nov/20/sam-altman-openai-ceo-wont-return-chatgpt-talks-fail-emmett-shear-twitch) with the added “Microsoft has hired Sam Altman as head of a new advanced artificial intelligence team after attempts to reinstate him as chief executive of OpenAI failed.” At this point a few questions should emerge, but we are about to go into that part. 

Part three
This comes when we consider “At the end of a dramatic weekend of boardroom drama, the non-profit board of the San Francisco-based OpenAI has installed Emmett Shear, the co-founder of video streaming site Twitch, as the company’s third CEO in three days”.

Part four
The questions that should come to mind are

  1. Why would OpenAI ruffle feathers when it is on a high in several directions?
  2. Sam Altman doesn’t have a non-compete clause?
  3. So, who is Emmett Shear, and what is his expertise in presumed AI?

These three questions should have been on the mind of ALL media. OpenAI is on a high note, on a hyped route towards whatever they present. But none of them asked; I checked a dozen articles and they ALL overlooked the issues here, so when does the media ‘overlook’ issues? We see all the emotional articles about staff resigning, about ‘demands’ in a stage where they (for now) have the upper hand. Oh, and on a sideline: when you have such hyped IP, when was the last time a corporation of this size did not have non-compete clauses in play?

That is beside the point on WHO became the replacement.

Part five
This is the kicker, this is the coup de grâce of the entire equation. It is seen with Microsoft hiring Sam Altman. Microsoft now has a larger stake in a solution they wanted all along, and through this media drama they now get it a lot cheaper. So why would any player, in this case OpenAI, shoot itself in the foot to this degree? We now see ‘Weekend of OpenAI drama ends in a Microsoft coup’, ‘Microsoft Emerges as the Winner in OpenAI Chaos’ and ‘OpenAI’s leadership moves to Microsoft, propelling its stock up’; yes, presentations by the media. The media used as the bitch of Microsoft, and it is shown through questions that were clearly out in the open. Microsoft stock up and OpenAI becomes part of Microsoft for billions less. One could say (and I would not disagree) that this was a lovely play to reduce billions in tax payments and the media let it happen. All answers that were clearly on the table wherever you looked, had you decided to seek the right questions. As I personally see it, the media is simply the bitch of corporations and they all let it happen, all pushing the tax offices down the river in a canoe without a paddle. Well played, Microsoft.

So consider what played out over a weekend, and consider what any corporation would do to protect its multi-billion-dollar value. I think that OpenAI was part of this stage from the very beginning, but that is my speculated view.

Enjoy your Monday, it’s Tuesday here.

Leave a comment

Filed under Finance, IT, Media

Eric Winter is a god

Yup, we are going there. It might not be correct, but that is where the evidence is leading us. You see, I got hooked on The Rookie and watched seasons one through four in a week. Yet the name Eric Winter was bugging me and I did not know why. The reason was simple. He also starred in the PS4 game ‘Beyond: Two Souls’, which I played in 2013. I liked that game and his name stuck somehow. Yet when I looked for his name I got

This got me curious: two of the movies listed I had seen, and Eric would have been too young to be in them, and there is the evidence, presented by Google. Eric Winter, born on July 17th 1976, played alongside Barbra Streisand four years before he was born. Evidence of godhood.

And when we look at the character list, there he is. 

Yet when we look at a real movie reference like IMDB.com we will get 

Yes, that is the real person who was in the movie. We can write this up as a simple error, but that is not the path we are treading. You see, people are all about AI and ChatGPT, but the real part is that AI does not exist (not yet anyway). This is machine learning and deeper machine learning, and this is prone to HUMAN error. If there is only a 1% error rate and we are looking at about 500,000 movies made, that implies that the movie reference alone will contain 5,000 errors. Now consider this on data of all kinds and you might start to see the picture take shape. When it comes to financial data and your advisor is not Sam Bankman-Fried, but Samual Brokeman-Fries (a fast-food employee), how secure are your funds then? To be honest, whenever I see some AI reference I get a little pissed off. AI does not exist; it was called into existence by salespeople too cheap and too lazy to do their job and explain deeper machine learning to people (my view on the matter), and things do not end here. One source gives us “The primary problem is that while the answers that ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce,” another source gives us issues with capacity, plagiarism and cheating, racism, sexism, and bias, as well as accuracy problems and the shady way it was trained. That is the kicker. An AI would not need to be trained this way; it would compare the actor’s date of birth with the release of the movie, flagging The Changeling and What’s Up, Doc? as falling into the net of inaccuracy. This is not happening, and the people behind ChatGPT are happy to point at you for handing them inaccurate data, but that is the point of an AI and its shallow circuits: to find the inaccuracies and determine the proper result (like a movie list without these two mentions).
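The consistency check described above, and the back-of-envelope arithmetic, fit in a few lines. The filmography entries here are illustrative stand-ins, not real reference data.

```python
# Hypothetical filmography entries for illustration; the real
# reference data would come from a source like IMDb.
filmography = [
    {"title": "What's Up, Doc?", "year": 1972},   # before the actor was born
    {"title": "The Rookie (TV)", "year": 2018},
]
BIRTH_YEAR = 1976  # Eric Winter, born July 17th 1976

def impossible_credits(credits, birth_year):
    # A credit released before the actor was born cannot be real.
    return [c["title"] for c in credits if c["year"] < birth_year]

print(impossible_credits(filmography, BIRTH_YEAR))  # ["What's Up, Doc?"]

# The error arithmetic from the text: 1% of ~500,000 movies.
print(int(500_000 * 0.01))  # 5000 expected bad entries
```

A system doing this trivially cheap cross-check would have caught the Google error on its own; the point is that nothing in the current pipeline does.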

And now we get the source Digital Trends (at https://www.digitaltrends.com/computing/the-6-biggest-problems-with-chatgpt-right-now/) who gave us “ChatGPT is based on a constantly learning algorithm that not only scrapes information from the internet but also gathers corrections based on user interaction. However, a Time investigative report uncovered that OpenAI utilised a team in Kenya in order to train the chatbot against disturbing content, including child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest. According to the report, OpenAI worked with the San Francisco firm, Sama, which outsourced the task to its four-person team in Kenya to label various content as offensive. For their efforts, the employees were paid $2 per hour.” I have done data cleaning for years and I can tell you that I cost a lot more than $2 per hour. Accuracy and cutting costs: give me one real stage where that actually worked. Now, the error at Google was a funny one, and with Melissa O’Neil (a real Canadian) telling Eric Winter that she had feelings for him (punking him in an awesome way), we can see that this is a simple error. But these are the errors that places like ChatGPT are facing too, and as systems like ChatGPT scale up over time, with Microsoft staging this in Azure (it already seems to be), this stage will get you all in a massive amount of trouble. It might be speculative, but consider the evidence out there. Consider the errors that you face on a regular basis, and consider how highly paid accountants and marketeers lose their jobs over rounding errors. You really want to rely on a $2-per-hour person to keep your data clean? For this, merely look at the ABC article of June 9th 2023, where we were given ‘Lawyers in the United States blame ChatGPT for tricking them into citing fake court cases’. Accuracy anyone?
Consider that: not merely a fake court case, but court cases that were actually invented by the artificial intelligence-powered chatbot.

In the end I liked my version better: Eric Winter is a god. Not as accurate as reality, perhaps, but more easily swallowed by all who read it; it was the funny event that gets you through the week.

Have a fun day.

2 Comments

Filed under Finance, IT, Science

And the lesson is?

That is at times the issue, and it does at times get help from people, managers mainly, who believe that the need for speed rectifies everything, which of course is delusional to say the least. So, last week there was a news flash speeding across my retinas and I initially ignored it, mainly because it was Samsung and we do not get along. But then Tom’s Guide weighed in (at https://www.tomsguide.com/news/samsung-accidentally-leaked-its-secrets-to-chatgpt-three-times) and I took a closer look. The headline ‘Samsung accidentally leaked its secrets to ChatGPT — three times!’ was decently satisfying. The rest, “Samsung is impressed by ChatGPT but the Korean hardware giant trusted the chatbot with much more important information than the average user and has now been burned three times”, seemed icing on the cake, but I took another look at the information. You see, to most, ChatGPT is seen as an artificial-intelligence (AI) chatbot developed by OpenAI. But I think it is something else. You see, AI does not exist; as such I see it as an ‘Intuitive advanced Deeper Learning Machine response system’. This is not me dissing OpenAI; this system, when it works, is what some would call the bees’ knees (and I would be agreeing), but it is data driven and that is where the issues become slightly overbearing. First you need to learn and test the responses on the data offered. It seems to me that this is where speed-driven Samsung went wrong. And Tom’s Guide partially agrees by giving us “unless users explicitly opt out, it uses their prompts to train its models. The chatbot’s owner OpenAI urges users not to share secret information with ChatGPT in conversations as it’s “not able to delete specific prompts from your history.” The only way to get rid of personally identifying information on ChatGPT is to delete your account — a process that can take up to four weeks” and this response gives me another thought.
Whoever owns OpenAI is setting a data-driven stage where data could optionally be captured. More importantly, the NSA and likewise tailored organisations (DGSE, DCD et al) could find the logistics of these accounts, hack the cloud and end up with terabytes of data, if not petabytes, and here we see the first failing, and it is not a small one. Samsung has been driving innovation for the better part of a decade, and as such all that data could be of immense value to both Russia and China; do not for one moment think that they are not all over the stage of trying to hack those cloud locations.

Of course that is speculation on my side, but it is what most would do, and we don’t need an egg timer to await actions on that front. The final quote that matters is “after learning about the security slip-ups, Samsung attempted to limit the extent of future faux pas by restricting the length of employees’ ChatGPT prompts to a kilobyte, or 1024 characters of text. The company is also said to be investigating the three employees in question and building its own chatbot to prevent similar mishaps. Engadget has contacted Samsung for comment” and it might be merely three employees. Yet in that case the party line failed, management oversight failed and Common Cyber Sense was nowhere to be seen. As such there is a failing, and I am fairly certain that these transgressions go way beyond Samsung. How far? No one can tell.
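Samsung’s reported kilobyte cap can be sketched as a simple guard in front of the chatbot. This is only an illustration of the reported policy, interpreting the kilobyte as a byte limit on the encoded text, and the function name is my own; it is not Samsung’s actual mechanism.

```python
MAX_PROMPT_BYTES = 1024  # the reported limit: one kilobyte of text

def vet_prompt(prompt: str) -> str:
    # Reject oversized prompts outright rather than silently truncating,
    # forcing the employee to rethink what they are about to send out.
    data = prompt.encode("utf-8")
    if len(data) > MAX_PROMPT_BYTES:
        raise ValueError(
            f"prompt is {len(data)} bytes, limit is {MAX_PROMPT_BYTES}"
        )
    return prompt

print(vet_prompt("Summarise this public press release."))  # fine, well under the cap
```

Of course a length cap does not stop a secret that fits in a kilobyte, which is why the management-oversight failing matters more than the technical one.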

Yet one thing is certain. Anyone racing to the ChatGPT tally will take shortcuts to get there first, and as such companies will need to assure themselves that proper mechanics, checks and balances are in place. The fact that deleting an account takes four weeks implies that this is not a simple cloud setting, and as such whoever gets access to it will end up with a lot more than they bargained for.

I see it as a lesson for all those who want to be at the starting signal of new technology on day one, all whilst most of that company has no idea what the technology involves and what was set to a larger stage like the cloud, especially when you consider (one source) “45% of breaches are cloud-based. According to a recent survey, 80% of companies have experienced at least one cloud security incident in the last year, and 27% of organisations have experienced a public cloud security incident—up 10% from last year”. And in that situation you are willing to commit your data, your information and your business intelligence to a cloud account? Brave, stupid but brave.

Enjoy the day

Leave a comment

Filed under IT, Science