
Presentations by media jokes

It happens at times. While we think corporations are playing us, we are all being played by the media. The media and corporations, hand in hand, deceiving us all for a simple percentage. That is the feeling I have had plenty of times, but this one (my speculated view) is just too opportune to ignore. So let me show you what I have and you can decide for yourself.

Part one
The first part is the story we have seen over the last 2-3 days. This version (at https://www.forbes.com/sites/alexkonrad/2023/11/20/sam-altman-will-not-return-as-ceo-of-openai/) is used because the other version I wanted to use (AFR) is behind a paywall. We see here ‘Sam Altman Will Not Return As CEO Of OpenAI’ with the added text “Supporters of Altman led by Microsoft and including investors and key employees had pressured OpenAI’s board of directors to take back Altman, or face the widespread resignation of OpenAI’s researchers and withdrawal of Microsoft’s support”. At this point three questions come to mind, but I will hold off on them until a little later; it makes things a lot clearer. As such we see one corporation ‘cleaning’ its management setting, but ponder on those settings a little longer.

Part two
The second part came hours later, but now we have a very strong defining place with ‘Microsoft hires former OpenAI CEO Sam Altman’ (at https://www.theguardian.com/technology/2023/nov/20/sam-altman-openai-ceo-wont-return-chatgpt-talks-fail-emmett-shear-twitch) with the added “Microsoft has hired Sam Altman as head of a new advanced artificial intelligence team after attempts to reinstate him as chief executive of OpenAI failed.” At this point a few questions should emerge, but we are about to go into that part. 

Part three
This comes when we consider “At the end of a dramatic weekend of boardroom drama, the non-profit board of the San Francisco-based OpenAI has installed Emmett Shear, the co-founder of video streaming site Twitch, as the company’s third CEO in three days”.

Part four
The questions that should come to mind are

  1. Why would OpenAI ruffle feathers when it is on a high in several directions?
  2. Does Sam Altman not have a non-compete clause?
  3. Who is Emmett Shear, and what is his expertise in (presumed) AI?

These three questions should have been on the mind of ALL media. OpenAI is on a high note, on a hyped route towards whatever it presents. But none of them asked; I checked a dozen articles and they ALL overlooked the issues here. So when does the media ‘overlook’ issues? We see all the emotional articles about staff resigning, about ‘demands’ at a stage where they (for now) have the upper hand. Oh, and as a sideline: when a company has IP this hyped, when was the last time a corporation this size did not have non-compete clauses in play?

That is beside the point on WHO became the replacement.

Part five
This is the kicker, the coup de grâce of the entire equation. It is seen with Microsoft hiring Sam Altman. Microsoft now has a larger stake in a solution it wanted all along, and through this media drama it gets it a lot cheaper. So why would any player, in this case OpenAI, shoot itself in the foot to this degree? We now see ‘Weekend of OpenAI drama ends in a Microsoft coup’, ‘Microsoft Emerges as the Winner in OpenAI Chaos’ and ‘OpenAI’s leadership moves to Microsoft, propelling its stock up’. Yes, presentations by the media. The media used as the bitch of Microsoft, and it is shown through questions that were clearly out in the open. Microsoft stock goes up and OpenAI becomes part of Microsoft for billions less. One could say (and I would not disagree) that this was a lovely play to reduce billions in tax payments, and the media let it happen. All the answers were clearly on paper wherever you looked, if you decided to seek them. As I personally see it, the media is simply the bitch of corporations, and they all let it happen, all pushing the tax offices down the river in a canoe without a paddle. Well played, Microsoft.

So consider what played out over a weekend, and consider what any corporation would do to protect its multi-billion-dollar value. I think that OpenAI was part of this stage from the very beginning, but that is my speculated view.

Enjoy your Monday, it’s Tuesday here.


Filed under Finance, IT, Media

Eric Winter is a god

Yup, we are going there. It might not be correct, but that is where the evidence is leading us. You see, I got hooked on The Rookie and watched seasons one through four in a week. Yet the name Eric Winter was bugging me and I did not know why. The reason was simple: he also starred in the PS4 game ‘Beyond: Two Souls’, which I played in 2013. I liked that game and his name stuck somehow. Yet when I looked up his name, this is what Google gave me:

This got me curious. Two of the movies I had seen, and Eric would have been too young to be in them, and there is the evidence, presented by Google. Eric Winter, born on July 17th 1976, played alongside Barbra Streisand four years before he was born; evidence of godhood.

And when we look at the character list, there he is. 

Yet when we look at a real movie reference like IMDb.com, we get:

Yes, that is the real person who was in the movie. We could write this up as a simple error, but that is not the path we are treading. You see, people are all about AI and ChatGPT, but the real part is that AI does not exist (not yet, anyway). This is machine learning and deeper machine learning, and it is prone to HUMAN error. If there is only a 1% error rate and we are looking at roughly 500,000 movies made, that implies that the movie reference alone will contain 5,000 errors. Now consider this on data of all kinds and you might start to see the picture take shape. When it comes to financial data and your advisor is not Sam Bankman-Fried, but Samual Brokeman-Fries (a fast-food employee), how secure are your funds then?

To be honest, whenever I see some AI reference I get a little pissed off. AI does not exist; it was called into existence by salespeople too cheap and too lazy to do their job and explain deeper machine learning to people (my view on the matter), and things do not end here. One source gives us “The primary problem is that while the answers that ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce”. Another source gives us issues with capacity, plagiarism and cheating, racism, sexism, and bias, as well as accuracy problems and the shady way it was trained. That is the kicker. An AI would not need to be trained this way; it would compare the actor’s date of birth with the release date of the movie, making The Changeling and What’s Up, Doc? fall into the net of inaccuracy. This is not happening, and the people behind ChatGPT are happy to point at you for handing them inaccurate data. But that is the point of an AI and its shallow circuits: to find the inaccuracies and determine the proper result (like a movie list without these two mentions).
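The consistency check I describe is a few lines of code. What follows is a hypothetical sketch, not anything Google or any movie database actually runs: the filmography entries mirror the error above, and the minimum-age threshold is my own assumption.

```python
from datetime import date

# Eric Winter's real date of birth, plus the credits Google showed.
BIRTH_DATE = date(1976, 7, 17)

filmography = [
    {"title": "What's Up, Doc?", "year": 1972},
    {"title": "The Changeling", "year": 1980},
    {"title": "Beyond: Two Souls", "year": 2013},
]

def plausible(entry, birth_date=BIRTH_DATE, min_age=5):
    """An actor cannot appear in a film released before their birth;
    the min_age cutoff (an assumed threshold) also flags credits that
    would make them an implausibly young child."""
    return entry["year"] >= birth_date.year + min_age

flagged = [e["title"] for e in filmography if not plausible(e)]
# flagged now holds both pre-1981 credits, exactly the two bogus entries
```

A crude rule like this would have caught both bogus credits automatically, which is the author's point: the check is trivial, yet nothing in the pipeline performs it.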

And now we get to the source Digital Trends (at https://www.digitaltrends.com/computing/the-6-biggest-problems-with-chatgpt-right-now/), which gave us “ChatGPT is based on a constantly learning algorithm that not only scrapes information from the internet but also gathers corrections based on user interaction. However, a Time investigative report uncovered that OpenAI utilised a team in Kenya in order to train the chatbot against disturbing content, including child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest. According to the report, OpenAI worked with the San Francisco firm, Sama, which outsourced the task to its four-person team in Kenya to label various content as offensive. For their efforts, the employees were paid $2 per hour.”

I have done data cleaning for years and I can tell you that I cost a lot more than $2 per hour. Accuracy and cutting costs: give me one real stage where that combination actually worked. Now, the error at Google was a funny one, much like the stage of Melissa O’Neil, a real Canadian, telling Eric Winter that she had feelings for him (punking him in an awesome way). We can see that this is a simple error, but these are the errors that places like ChatGPT are facing too, and as such the people employing systems like ChatGPT. As Microsoft is staging this in Azure (it already seems to be), over time this stage will get you all in a massive amount of trouble. It might be speculative, but consider the evidence out there. Consider the errors that you face on a regular basis, and consider how highly paid accountants and marketeers lose their jobs over rounding errors. You really want to rely on a $2-per-hour person to keep your data clean? For this, merely look at the ABC article of June 9th 2023, where we were given ‘Lawyers in the United States blame ChatGPT for tricking them into citing fake court cases’. Accuracy anyone?
Consider that against court cases that looked real but were in reality invented by the artificial intelligence-powered chatbot.

In the end I liked my version better: Eric Winter is a god. Not as accurate as reality, but more easily swallowed by all who read it; it was the funny event that gets you through the week.

Have a fun day.


Filed under Finance, IT, Science

And the lesson is?

That is at times the issue, and it does at times get help from people, managers mainly, who believe that the need for speed justifies everything, which of course is delusional to say the least. So, last week there was a news flash speeding across my retinas and I initially ignored it, mainly because it was Samsung and we do not get along. But then Tom’s Guide (at https://www.tomsguide.com/news/samsung-accidentally-leaked-its-secrets-to-chatgpt-three-times) caught my eye and I took a closer look. The headline ‘Samsung accidentally leaked its secrets to ChatGPT — three times!’ was decently satisfying. The rest, “Samsung is impressed by ChatGPT but the Korean hardware giant trusted the chatbot with much more important information than the average user and has now been burned three times”, seemed icing on the cake, but I took another look at the information.

You see, to most, ChatGPT is an artificial-intelligence (AI) chatbot developed by OpenAI. But I think it is something else. You see, AI does not exist; as such I see it as an ‘intuitive advanced deeper learning machine response system’. This is not me dissing OpenAI; this system, when it works, is what some would call the bees knees (and I would agree), but it is data driven, and that is where the issues become slightly overbearing. First you need to train and test the responses on the data offered, and it seems to me that this is where speed-driven Samsung went wrong. And Tom’s Guide partially agrees by giving us “unless users explicitly opt out, it uses their prompts to train its models. The chatbot’s owner OpenAI urges users not to share secret information with ChatGPT in conversations as it’s “not able to delete specific prompts from your history.” The only way to get rid of personally identifying information on ChatGPT is to delete your account — a process that can take up to four weeks”, and this response gives me another thought.
Whoever owns OpenAI is setting a data-driven stage where data could optionally be captured. More importantly, the NSA and similarly tailored organisations (DGSE, DCD et al) could find the logistics of these accounts, hack the cloud and end up with terabytes of data, if not petabytes, and here we see the first failing, and it is not a small one. Samsung has been driving innovation for the better part of a decade, and as such all that data could be of immense value to both Russia and China; do not for one moment think that they are not all over the stage of trying to hack those cloud locations.

Of course that is speculation on my side, but that is what most would do, and we don’t need an egg timer to await actions on that front. The final quote that matters is “after learning about the security slip-ups, Samsung attempted to limit the extent of future faux pas by restricting the length of employees’ ChatGPT prompts to a kilobyte, or 1024 characters of text. The company is also said to be investigating the three employees in question and building its own chatbot to prevent similar mishaps. Engadget has contacted Samsung for comment”, and it might be merely three employees. Yet in that case the party line failed, management oversight failed, and Common Cyber Sense was nowhere to be seen. As such there is a failing, and I am fairly certain that these transgressions go way beyond Samsung. How far? No one can tell.
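A guard of the kind Tom’s Guide describes is trivial to build. This is a hypothetical sketch under my own assumptions, not Samsung’s actual implementation; the class and function names are invented for illustration.

```python
MAX_PROMPT_BYTES = 1024  # the 1 KB cap Tom's Guide reports Samsung imposed

class PromptTooLong(ValueError):
    """Raised when a prompt exceeds the corporate size limit."""

def check_prompt(prompt: str, limit: int = MAX_PROMPT_BYTES) -> str:
    """Reject over-long prompts before they leave the network.
    Measures UTF-8 bytes, so the cap also holds for non-ASCII text."""
    size = len(prompt.encode("utf-8"))
    if size > limit:
        raise PromptTooLong(f"prompt is {size} bytes, limit is {limit}")
    return prompt
```

Note that a length cap only limits how much can leak per prompt; it does not detect secrets, so it is damage control rather than prevention, which is rather the point of the failing described above.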

Yet one thing is certain. Anyone racing to the ChatGPT tally will take shortcuts to get there first, and as such companies will need to reassure themselves that proper mechanics, checks and balances are in place. The fact that deleting an account takes four weeks implies that this is not a simple cloud setting, and as such whoever gets access to it will end up with a lot more than they bargained for.

I see it as a lesson for all those who want to be at the starting signal of new technology on day one, all whilst most of their company has no idea what the technology involves and what was set to a larger stage like the cloud. Especially when you consider (one source) “45% of breaches are cloud-based. According to a recent survey, 80% of companies have experienced at least one cloud security incident in the last year, and 27% of organisations have experienced a public cloud security incident—up 10% from last year”, are you in that situation willing to commit your data, your information and your business intelligence to a cloud account? Brave, stupid but brave.

Enjoy the day


Filed under IT, Science