Tag Archives: ChatGPT

The case file of linked technologies

That was the setting I was in yesterday. I love linked technologies. My first real interaction was connecting my Game Boy Advance to my GameCube, where the Game Boy Advance served as the map for the game I was playing on the GameCube. This was neat (dorky but accurate). In the past I wrote about parts of this, but in a slightly different setting. In this case I am in the process of remastering IP (making it new or, optionally, innovative IP). The stage is that a game (or games) uses a connection to something like ChatGPT to create case files based on writing styles like Chandler, le Carré (an essential writer in my view), Desmond Bagley and Alistair MacLean. As each case file is merely a short story, it will not impinge on the original writers. 

So why does this matter?
You see, games tend to have the EXACT SAME narrative. This is not on the games, but evolution is where you could create something new. You see, even as Restoration has some alterations to the narrative, this game requires a different approach to be a bigger hit. You see, the group of people who are both gamers and bookworms (or enthusiastic readers) is rather large. Another cluster that Amazon, Google and Microsoft missed. As such Amazon, with the Luna and the Kindle, will have an advantage. That is, until Tencent Technologies creates such a setting, or partners with Alibaba or Amazon to do the same thing. You see, what happens when a game you love creates (through ChatGPT, or something alike) a case file (read: narrative) that you can read and send to your friends, or place on your profile so that others can read these narratives? That makes the ChatGPT connection essential. Thousands of case files, similar but not exact copies. They create new waves, new interactions and new fans. All options that the larger three missed (a few times over). Now we get to the narrative of a remaster they all missed. 
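To make the idea concrete, here is a minimal sketch of how a game might assemble such a case file request. Everything here is hypothetical: the function names, the game-state fields and the prompt wording are invented for illustration, and the actual call to ChatGPT (or a similar service) is deliberately left out, since providers and endpoints vary.

```python
# Hypothetical sketch: turning in-game state into a prompt for a chat model.
# All names and fields here are invented for illustration; a real integration
# would send the assembled prompt to a chat-completion endpoint.

def build_case_file_prompt(game_state: dict, author_style: str) -> str:
    """Assemble a prompt asking for a short case file in a given author's style."""
    return (
        f"Write a short detective case file in the style of {author_style}.\n"
        f"Setting: {game_state['location']}.\n"
        f"Key events: {', '.join(game_state['events'])}.\n"
        "Keep it under 500 words and end on an unresolved note."
    )

def generate_case_file(game_state: dict, author_style: str) -> str:
    prompt = build_case_file_prompt(game_state, author_style)
    # In a real integration this prompt would be sent to ChatGPT or a similar
    # service; here we simply return the prompt itself as a placeholder.
    return prompt

prompt = generate_case_file(
    {"location": "a rain-soaked dockside warehouse",
     "events": ["a missing courier", "a forged manifest"]},
    "Raymond Chandler",
)
```

The point of the sketch is that the game only has to hand over its own state; the style, and with it the shareable short story, comes from the model.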

You see, streaming games need to evolve and bring more to the game. They will never replace the Nintendo or Sony consoles, but they will be a brother to the other two, and that is where the larger gains can be made. That is where I am looking, and the larger three are all missing the boat. Well, Google dumped the Stadia, so they aren’t even in the game anymore. But the larger setting with the Kindle can create a double whammy, especially when you consider how small some margins are. That sets up all kinds of new connections and creates new evolutions in gaming, and I am all about evolving gaming. As I get better or more inclusive games, me, myself, I and all other gamers win, and winning is the marker we all accept.

All innovative directions the big three either ignored, rejected or never saw, and it is not about the Kindle; you could set this to a PDF. The setting is that you add to any profile to make the profile more, not more advertising, but more profile. We all win, and that is the second tier of creating waves. Let the game push all sides of gaming, not merely the game, or the narrative. As I personally see it, another side ignored by the two remaining players (Amazon and Tencent Technologies). Now, to be fair, Tencent is new to this, but they are more and more in a position to take up a massive chunk of gaming market share, and if they do it well, it is fine by me. I as a gamer win (other gamers too) and that is what I am after. More and better games, not Microsoft or Ubisoft iterations, but more and better games. 

So whilst we see iteration after iteration, gamers hunger for more, and that time is already now. So, let’s see what time and innovation will bring to gamers and readers alike.

Enjoy the day before Friday.


Filed under Gaming, IT

Evolution is essential

You might not realise it, but it is. Gaming evolution is on the forefront of my mind, because that is how we push the limits of gaming. Not by buying it (Microsoft anyone), but by creating new frontiers in games. For the longest of times it has been on my mind, mainly because streaming is the next evolution, not the PS6 (I love my PS5), not any system, but the evolution of an architecture. Some might say that Alan Wake 2 is the new frontier, but it is not. It looks great, awesome, and it pushes boundaries unlike any game this year (not Spider-Man 2, and I love the first one). But frontiers are where it is at. It is in that mindset that I took a sentimental journey. You see, if there is one side that seemingly does not evolve, it is the story. The story is too often set in stone. But what if that was not the case? What if the evolution of any story is next? It is there that ChatGPT might have an option (an option, not a given). Consider Emperor of the North (1973), where you have to survive a train ride as a hobo. But that would be too two-dimensional. Trains have been the setting of many movies. Silver Streak, Unstoppable, Pelham 123, Runaway Train, and that list goes on. There was Strangers on a Train. Now consider that you (as a time traveller, which is my easy way out) need to survive a whole onslaught of train trips, but your setting changes with EVERY train. So you get the red wire across all trains, and every train has its own goals. Complete that and you get the clue for the red wire. Now we add salt and pepper. The order of trains changes with every life you lose. You start from scratch, and that sounds frustrating, but gaming is not a vanilla setting of happiness. It gives you an achievable goal and an obstruction to pass. You see, this would require some serious story programming. The other part is that YOUR role on the second visit to that same train could be different (Murder on the Orient Express) and that is how evolution comes into play. 
I want a new setting of stealth and casual gaming, a new setting of melee, stealth and casual gaming, easing people from role to role. Now consider how to create this storyline; with streaming, ChatGPT (or a similar alternative like Bard) becomes an option and it is something gamers have NEVER faced before. The story remained mostly the same. So what happens when we take that away and create a story on a shifting narrative? That is where streaming gaming has the advantage over ALL other gaming, and as I see it, it is not used. Not on the Luna, and unlikely on the Tencent handheld, and that is what could set these two apart from all others. Giving gamers something they never faced before. 

So what do you do to create this? I used a previous example using a matrix founded on Sudoku, but that was merely one example. You see, Sudoku has 6,670,903,752,021,072,936,960 options. You cannot draw them all, but you can use such an engine to create something new, something never seen before, and those trillions are more than random; it is a setting of never-ending uniqueness. The idea that two gamers playing the same game get very different stages should be overwhelming, showing us who the real gamer is and who is the ‘read the solution online’ achiever. The idea of how to switch between lives comes to mind and the support system (something like Quantum Leap) is also coming into vision, but that is nothing compared to the story. And it sounds like fun to make this a story about Hollywood. A story of intrigue, sex (I am here, Olivia Wilde) 😉 and greed. Hollywood without greed is not Hollywood. What if the underlying story is a rogue AI? The rogue AI is interacting with all other systems and you need to find the evidence that the AI is rogue, so that the media DETACHES from it, and with that the other AIs. The AI took the train to push its own narrative, as it was a mobile system on tracks, but that is the delusion, and you as the player need to find the clues that lead to the evidence and give that to the world (a wink to A Mind Forever Voyaging by Infocom). We are the gamers through what was, and Infocom was important at one stage; it created more than Zork, it gave us gaming and pushed us into new frontiers, and now we get a much larger frontier. It is only natural that streaming leads the way, and we should always remember where we came from.
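The mechanic described above, two players on the same game seeing different train orders and different roles, can be sketched with nothing more than a seeded random generator. The train and role names below are taken from the examples in the text; the function and field names are my own illustration, not a real engine.

```python
import random

# Illustrative sketch: deriving a per-player, per-life storyline from a single
# seed, so that the order of trains and the role on each train differ between
# players and between lives, yet stay reproducible for a given seed.
TRAINS = ["Silver Streak", "Orient Express", "Pelham 123", "Runaway Train"]
ROLES = ["hobo", "conductor", "detective", "stowaway"]

def storyline(seed: int) -> list:
    rng = random.Random(seed)      # deterministic for a given seed
    order = TRAINS[:]
    rng.shuffle(order)             # the order of trains changes with the seed
    # Each train also gets its own role for the player on this run.
    return [(train, rng.choice(ROLES)) for train in order]

# Losing a life bumps the seed, reshuffling the entire run from scratch:
life_one = storyline(seed=42)
life_two = storyline(seed=43)
```

The Sudoku point carries over directly: a seed space that large means two gamers will, for all practical purposes, never see the same stage twice.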

Just a thought as Friday is about to start for me; the rest of you can follow later. Enjoy whatever day you are in.


Filed under Gaming, IT, Science

Eric Winter is a god

Yup, we are going there. It might not be correct, but that is where the evidence is leading us. You see, I got hooked on The Rookie and watched seasons one through four in a week. Yet the name Eric Winter was bugging me and I did not know why. The reason was simple. He also starred in the PS4 game ‘Beyond: Two Souls’, which I played in 2013. I liked that game and his name stuck somehow. Yet when I looked for his name I got

This got me curious; two of the movies I saw, Eric would have been too young to be in, and there is the evidence, presented by Google. Eric Winter, born on July 17th 1976, played alongside Barbra Streisand 4 years before he was born, evidence of godhood. 

And when we look at the character list, there he is. 

Yet when we look at a real movie reference like IMDb.com, we will get 

Yes, that is the real person who was in the movie. We can write this up as a simple error, but that is not the path we are treading. You see, people are all about AI and ChatGPT, but the real part is that AI does not exist (not yet anyway). This is machine learning and deeper machine learning, and this is prone to HUMAN error. If there is only 1% error and we are looking at about 500,000 movies made, that implies that the movie reference alone will contain 5,000 errors. Now consider this on data of all kinds and you might start to see the picture take shape. When it comes to financial data and your advisor is not Sam Bankman-Fried, but Samuel Brokeman-Fries (a fast-food employee), how secure are your funds then? To be honest, whenever I see some AI reference I get a little pissed off. AI does not exist, and it was called into existence by salespeople too cheap and too lazy to do their job and explain Deeper Machine Learning to people (my view on the matter), and things do not end here. One source gives us “The primary problem is that while the answers that ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce,” another source gives us issues with capacity, plagiarism and cheating, racism, sexism, and bias, as well as accuracy problems and the shady way it was trained. That is the kicker. An AI would not need to be trained this way; it would compare the actor’s date of birth with the release of the movie, so that The Changeling and What’s Up, Doc? would fall into the net of inaccuracy. This is not happening, and the people behind ChatGPT are happy to point at you for handing them inaccurate data, but that is the point of an AI and its shallow circuits: to find the inaccuracies and determine the proper result (like a movie list without these two mentions). 
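The back-of-the-envelope arithmetic above is worth writing out, because it scales linearly with the size of the catalogue:

```python
# A 1% error rate over roughly 500,000 movies leaves about 5,000 bad records.
movies = 500_000
error_rate = 0.01
expected_errors = round(movies * error_rate)
print(expected_errors)  # 5000
```

Swap in any other data set (financial records, court cases) and the same one-line multiplication tells you how many errors you are signing up for.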

And now we get the source Digital Trends (at https://www.digitaltrends.com/computing/the-6-biggest-problems-with-chatgpt-right-now/) who gave us “ChatGPT is based on a constantly learning algorithm that not only scrapes information from the internet but also gathers corrections based on user interaction. However, a Time investigative report uncovered that OpenAI utilised a team in Kenya in order to train the chatbot against disturbing content, including child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest. According to the report, OpenAI worked with the San Francisco firm, Sama, which outsourced the task to its four-person team in Kenya to label various content as offensive. For their efforts, the employees were paid $2 per hour.” I have done data cleaning for years and I can tell you that I cost a lot more than $2 per hour. Accuracy and cutting costs: give me one real stage where that actually worked. Now, the error at Google was a funny one, and in the stage of Melissa O’Neil (a real Canadian) telling Eric Winter that she had feelings for him (punking him in an awesome way), we can see that this is a simple error. But these are the errors that places like ChatGPT are facing too, and as such the people employing systems like ChatGPT will get into a massive amount of trouble over time, as Microsoft is staging this in Azure (it already seems to be). It might be speculative, but consider the evidence out there. Consider the errors that you face on a regular basis and consider how highly paid accountants and marketers lose their jobs over rounding errors. You really want to rely on a $2 per hour person to keep your data clean? For this merely look at the ABC article on June 9th 2023 where we were given ‘Lawyers in the United States blame ChatGPT for tricking them into citing fake court cases’. Accuracy anyone? 
Consider that against a court filing presented as real, when in reality the cited cases were invented by the artificial intelligence-powered chatbot. 

In the end I liked my version better: Eric Winter is a god. Equally not as accurate as reality, but more easily swallowed by all who read it; it was the funny event that gets you through the week. 

Have a fun day.


Filed under Finance, IT, Science

One plus one makes 256

I got struck by two things today. The first was given to me by the BBC. There (at https://www.bbc.co.uk/news/business-66021325) we are given something that should not be allowed to happen. We are given ‘Shell still trading Russian gas despite pledge to stop’ and this has one part that offends me. You see, it is the Royal Dutch Shell. The Dutch Royal family has a majority stake in this, and we all agree that we do not under any circumstance support the Russians in their endeavour. In addition, Royal Dutch Shell is not alone. Dozens of American firms are still making money from Russia and allowing them to continue their acts of terror against civilian targets. I am a royalist, yet when something wrong is done I speak out, and the fact that the BBC is extremely willing to drop the ‘Royal Dutch’ part in this equation speaks out against the BBC and their setting of informing the public (yet again). In addition to this we are given “Shell said the trades were the result of “long-term contractual commitments” and do not violate laws or sanctions.” And since when is war not a reason to break a contract? How long did certain corporations keep doing business with Idi Amin Dada Oumee in the timeframe of 1971-1979? Do they not learn? I think this is the first time I have ever spoken out against the Dutch Royal family, but this time I see no other option. And when we get to “Oleg Ustenko, an adviser to Ukrainian President Vladimir Zelensky, accused Shell of accepting “blood money”” I personally would agree with Oleg Ustenko. And with “Last year Shell accounted for 12% of Russia’s seaborne LNG trade, Global Witness calculates, and was among the top five traders of Russian-originated LNG that year” we see just how deep Royal Dutch Shell is connected to all this. 

Yet what you just read is not correct, and I did that intentionally. You see, we also have “In January 2022, the firm merged the A and B shares, moved its headquarters to London, and changed its legal name to Shell plc.” So what is the UK doing? You see, Shell is seen as the 15th largest company in the world. You do not give up that position lightly or cheaply. So whatever happened in January 2022 has had a massive impact, and for some reason no one really knows what was going on (I have no clue). But to me, parting with ownership of a firm that big is a ‘no no’, so something does not add up. Would you just shed a company that makes $20 billion a year? I have issues with all this, and yes, the BBC did nothing wrong, but the fact that this was once the Royal Dutch Shell, and that there is no indication (which does not mean it did not happen) that the Dutch Royal family might still have a large stake in all this, is upsetting to me, and it would be to anyone with Dutch links. 

So as we say goodbye to that part, we get to the interesting dream I had. I dozed off whilst watching The Rookie (season 4). My dream (or nightmare) took me to Los Angeles and an interesting terrorist plot to create an insurmountable amount of chaos in that city. You see, with all the connected and interacting systems, someone created an interesting virus/worm/program (not sure which one). This work was pretty ingenious. You see, instead of debilitating IT systems, they did something different. They infected data parsers. In my dream I was hit as I wanted to find places that had in part the term “vectium” and suddenly it all stopped. Systems worked, but they were no longer able to give the full details. Suddenly, intelligent settings in Google Search, Bing (yes, that one too), and all other engines failed because certain subsystems were deactivated, and for some reason some version of ChatGPT was merely making matters worse, spreading the problem across the US and hitting the other continents merely hours later. Because certain detection measures were limited to certain main parts and not subparts, the damage continued. The weird part was that anyone with IT knowledge and the ability to give complete, correct search terms could still work, but well over 200,000,000 people suddenly had mobiles and IT systems that would no longer connect or hand over correct information, like some kind of aphasia. The dream is now fading and I can no longer see the specifics, but at the beginning it had something to do with search terms using ‘like’, which then infected more and more systems. After a short time, terms like ‘containing’ would stop working, and even as the complete old-style SQL string would still work, it was about the only thing that did, and it crippled the metropolitan areas of the US (and Canada shortly thereafter). The more I think about it, the more interesting it would be to set an episode of The Rookie where infrastructures collapse. 
You see, people are nice when they have their coffee and their hamburger (or cheeseburger); when that stops, the niceties do too.
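The surviving "old-style SQL string" from the dream is worth spelling out, because it is the layer beneath all those smarter parsers. A minimal demonstration (the table and the term "vectium" are taken from the scenario above):

```python
import sqlite3

# Even when smarter, "intelligent" search layers fail, a plain SQL LIKE query
# against the raw data still works. Demonstrated with an in-memory database.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE places (name TEXT)")
con.executemany("INSERT INTO places VALUES (?)",
                [("vectium hall",), ("old mill",), ("vectium yard",)])

# The complete, old-style SQL string: find names containing the term.
rows = con.execute(
    "SELECT name FROM places WHERE name LIKE ?", ("%vectium%",)
).fetchall()
print(rows)  # [('vectium hall',), ('vectium yard',)]
```

Anyone who can write that query by hand keeps working; everyone who relied on the parser layer above it does not, which is exactly the aphasia the dream describes.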

Well, that is it for me. For all you others, the end of the weekend is now no more than 19 hours away; make the hours count and have a lovely day.


Filed under Finance, IT, Politics, Stories

Prototyping rhymes with dotty

This is the setting we face when we see ‘ChatGPT: US lawyer admits using AI for case research’ (at https://www.bbc.com/news/world-us-canada-65735769). You see, as I have stated before, AI does not yet exist. Whatever exists now is data driven, unverified data driven no less, so even in machine learning and even deeper machine learning, data is key. So when I read “A judge said the court was faced with an “unprecedented circumstance” after a filing was found to reference example legal cases that did not exist.” I see a much larger failing. You might see it too when you read “The original case involved a man suing an airline over an alleged personal injury. His legal team submitted a brief that cited several previous court cases in an attempt to prove, using precedent, why the case should move forward. But the airline’s lawyers later wrote to the judge to say they could not find several of the cases that were referenced in the brief.” You see, a case reference is ‘12-10576 – Worlds, Inc. v. Activision Blizzard, Inc. et al’. This is not new, it has been a case for decades, so when we take note of “the airline’s lawyers later wrote to the judge to say they could not find several of the cases” we can tell that the legal team of the man is screwed. You see, they were unprepared; as such, the airline wins. A simple setting, not an unprecedented circumstance. The legal team did not do its job, and the man could now sue his own legal team. As well as “Mr Schwartz added that he “greatly regrets” relying on the chatbot, which he said he had never used for legal research before and was “unaware that its content could be false”.” The joke is close to complete. You see, a law student learns in his (or her) first semester what sources to use. I learned that AustLII and Jade were the good sources, as well as a few others. The US probably has its own sources to check. As such, relying on ChatGPT is massively stupid. 
It does not have any record of courts, or better stated, ChatGPT would need to have the data on EVERY court case in the US, and the people who do have it are not handing it out. It is their IP, their value. And until ChatGPT gets all that data it cannot function here. The fact that it relied on non-existent court cases implies that the data is flawed, unverified and not fit for anything. Like any software solution 2-5 years before it hits Alpha status. And that legal team is not done with the BS paragraph. We see that with “He has vowed to never use AI to “supplement” his legal research in future “without absolute verification of its authenticity”.” Why is it BS? He used ‘supplement’, which implies he had more sources, and the second part is clear: AI does not (yet) exist. It is a sales hype for lazy sales people who cannot sell Machine Learning and Deeper Machine Learning. 

And the screw ups kept on coming. With “Screenshots attached to the filing appear to show a conversation between Mr Schwarz and ChatGPT. “Is varghese a real case,” reads one message, referencing Varghese v. China Southern Airlines Co Ltd, one of the cases that no other lawyer could find. ChatGPT responds that yes, it is – prompting “S” to ask: “What is your source”.

After “double checking”, ChatGPT responds again that the case is real and can be found on legal reference databases such as LexisNexis and Westlaw.” The natural next step is verification: check Westlaw and LexisNexis, which are real and good sources. Either would spew out the links with searches like ‘Varghese’ or ‘Varghese v. China Southern Airlines Co Ltd’, with saved links and printed results. Any first-year law student could get you that. It seems that this was not done. This is not on ChatGPT, this is on lazy researchers not doing their job, and that is clearly in the limelight here. 
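The verification step any first-year student could do is mechanically simple: check every citation in the brief against a trusted source before filing. The sketch below is illustrative only; the trusted set is a tiny stand-in for a real database like Westlaw or LexisNexis, and the function name is my own.

```python
# Hedged sketch of citation verification. TRUSTED_CASES stands in for a real
# legal database; only the Worlds v. Activision reference from the text is
# included, so the fabricated Varghese case is correctly flagged.
TRUSTED_CASES = {
    "Worlds, Inc. v. Activision Blizzard, Inc.",
}

def unverified_citations(cited):
    """Return every citation that cannot be found in the trusted source."""
    return [case for case in cited if case not in TRUSTED_CASES]

brief = [
    "Worlds, Inc. v. Activision Blizzard, Inc.",
    "Varghese v. China Southern Airlines Co Ltd",
]
print(unverified_citations(brief))  # ['Varghese v. China Southern Airlines Co Ltd']
```

A non-empty result means the brief does not get filed; that is the whole check the legal team skipped.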

So when we get to “Both lawyers, who work for the firm Levidow, Levidow & Oberman, have been ordered to explain why they should not be disciplined at an 8 June hearing.” I merely wonder whether they still have a job after that and I reckon that it is plainly clear no one will ever hire them again. 

So how does prototyping rhyme with dotty? It does not, but if you rely on ChatGPT you should have seen that coming a mile away. 

Enjoy your first working day after the weekend.


Filed under IT, Law, Media, Science

Indecisive and on the fence

I was on the fence for part of the day. You see, I saw (by chance) a review of a game named Redfall and it was bad; like burning down your house whilst making French fries, it was THAT bad. Initially I ignored it, because haters will be haters. I hate Microsoft, but I go by evidence, not merely my gut feeling or my emotions. So a little later I got curious. You see, the game was supposed to be released a day ago and I dumped my Xbox One, and it is an exclusive, so I couldn’t tell. As such I looked at a few reviews and they were all reviews of a really bad game. It now nagged at me, and Forbes (at https://www.forbes.com/sites/paultassi/2023/05/03/redfalls-failure-is-microsofts-failure/) completed the cycle. There we see ‘Redfall’s Failure Is Microsoft’s Failure’ with “Redfall reviews are in, and they are terrible. What could have and should have been another hit from Arkane, maker of the excellent Dishonored, Prey and Deathloop, is instead what may be the worst AAA release in recent memory” and it does not end there. We also get “two hours in, I understand the poor reviews and do not understand the handful of good ones. This is a deeply, strangely bad game, so much so that I truly don’t understand how it was released at all in this state” and that is the start of a collapsing firm forced to focus outside of their comfort zone, and the fun part (for me) is that it was acquired by Microsoft for billions. So we are on track to make that wannabe company collapse by December 2026. Adding my IP for developers exclusively for Sony and Amazon could help, but the larger stage is that Microsoft is more and more becoming its own worst enemy. Yet, I do not rely on that alone. Handing some of my IP to Tencent Technologies will help. Sony is making them sweat, but I cannot rely on Amazon with its Luna; as such, Tencent Technologies is required to make streaming technologies a failure for Microsoft too. 
So whilst we mull “we are left with now-goofy-sounding tweets from Phil Spencer announcing last year’s delay, saying that they will release these “great games when they are ready.” Redfall was not ready. And given what’s here, I’m not sure it ever was going to be.” I personally feel they were not, but they did something else, something worse. It was tactically sound, it really was, but they upset the gaming community. They took away the little freedom gamers had, and now we are all driven to make Microsoft fail, whether it is via Amazon, or by engaging with new players like Tencent Technologies and adding to the spice of Sony. But Microsoft will pay, and now it becomes even better: they now have a massive failure for a mere $7,000,000,000, not a bad deal (well, for Phil Spencer it is), and that is not the end of the bad news. If Tencent accepts my idea they will create almost overnight growth towards a $5 billion a year market and they will surpass the Microsoft setting of 50 million subscriptions in the first phase. How far it will go, I honestly cannot tell, but when the dust settles we will enter 2026 with Microsoft dead last in the console war and in the streaming war, and that was merely the beginning. They lost the tablet war already, they will lose ‘their’ Edge war and ChatGPT will not aid them; a loser on nearly every front. That is what happens when you piss off gamers. To be honest I never had any inkling of interest in doing what I do now, but Microsoft made me in their own warped way, and Bethesda will lose too because of it. They will soon have contenders in fields where they were never contested before, and this failure (Redfall) will hurt them more than they realise.


Filed under Finance, Gaming, Media, Science

The choice of options

Part of this started yesterday when I saw a message pass by. I ignored it because it seemed trivial, yet today (a few hours ago) I took notice of ‘Google rushes to develop AI search engine after Samsung considers ditching it for Bing’ from ZDNet (at https://www.zdnet.com/article/google-rushes-to-develop-ai-search-engine-after-samsung-considers-ditching-it-for-bing/) and ‘Alphabet shares fall on report Samsung may switch search to Bing’ (at https://www.aljazeera.com/economy/2023/4/17/alphabet-shares-fall-on-report-samsung-may-switch-search-to-bing). In part I do not care; actually, this situation is a lot better for Google than they think it is. You see, Samsung, a party I have disliked for 33 years after being massively wronged by them, decided to make the fake AI jump. It is fake, as AI does not exist, and when people learn this the hard way, it will work out nicely for Huawei and Google. There is nothing like a dose of reality served like a bucket of ice water to stop consumers looking at your product. I do not care; I refuse any Samsung device in my apartment. I also dislike Bing; it is a Microsoft product and two years ago I got Bing forced down my throat again and again through hijack scripts, and it took some time blocking them. So I dislike both. I have no real opinion of ChatGPT. As for the AI reference, let’s take you to The Conversation (at https://theconversation.com/not-everything-we-call-ai-is-actually-artificial-intelligence-heres-what-you-need-to-know-196732). I have said it before and they have a decent explanation. They write “AI is broadly defined in two categories: artificial narrow intelligence (ANI) and artificial general intelligence (AGI). To date, AGI does not exist.” You see, I only look at AGI; the rest is some narrow niche for a specific purpose. We are also given “Most of what we know as AI today has narrow intelligence – where a particular system addresses a particular problem. 
Unlike human intelligence, such narrow AI intelligence is effective only in the area in which it has been trained: fraud detection, facial recognition or social recommendations, for example” and there is an issue with this. People do not understand the narrow scope; they want to apply it almost everywhere, and that is where people get into trouble. The data connected does not support the activity, and adding this to a mobile means that it either collects massive amounts of data, or it becomes less and less reliable, an issue I expect to see soon after it makes it into a Samsung phone. 

For AI to really work, “it needs high-quality, unbiased data, and lots of it. Researchers building neural networks use the large data sets that have come about as society has digitised.” You see, the amount of data is merely the first issue; the fact that it must be unbiased data is a lot harder, and when we see sales people cut corners, they will take any shortcut, making the data no longer unbiased, and that is where it all falls apart.

So whilst the ‘speculators’ (read: losers) make Google lose value, the funny part is that when the Samsung connection falls down, Google stands to grow its customer base by a lot. Thousands of Samsung customers will feel as betrayed as I was in 1990, and they will seek another vendor, which would make Huawei equally happy. 

ZDNet gives us “The threat of Bing taking Google’s spot on Samsung phones caused “panic” at Google, according to messages reviewed by The New York Times. Google’s contract with Samsung brings in an approximate $3 billion annual revenue. The company still has a chance to maintain its presence in Samsung phones, but it needs to move fast” I see two issues here. The first is that the NY Times is less and less of a dependable source; they have played too many games, and as ‘their source’ might not be reliable, the quote is also less reliable. The second source is me (basically): they weren’t interested in my 5 billion in revenue, so why would they care about losing 3 billion more? For the most part, there is an upside: when it falls down (and I personally believe it will), Samsung could be brought back on board, but now it will cost them 5-6 billion. As such, Samsung would have to be successful without Google Search for 3 years, and it will cascade into a collapse setting; after that they will beg just to return to the Alphabet fold, which would also make this Microsoft’s 6th failure. My day is looking better already.

Am I so anti-Whatever?
No, not really. When it is ready and when the systems are there, AI will change the game, and AGI is the only real AI to consider. As I stated before, deeper machine learning is awesome and it has massive value, but the narrow setting needs to be respected, and when you push it into something like Bing, it will go wrong, and when it does, it will not be noticed initially, until it is much too late. And all this is beside the setting that some people will link the wrong parts; Samsung will end up putting its IP into ChatGPT, someone will ask a specific question that was never flagged, and the IP will pour straight into the public domain. That is the real danger for Samsung, and in all this ChatGPT is free of blame, and when certain things are found the entire setting needs to be uploaded into a new account. When we consider that a script with 65,000 lines will have up to 650 issues (or features, or bugs), how many will cause a cascade effect or information no one wanted, least of all the hardware owner? Oh, and that is when the writers were really good. Normally the numbers of acceptability are between 1,300-2,600, so how many issues will arise, and how long until too many patches make the system unyielding? All questions that come to mind with an ANI system, because it is data driven, and what if the unbiased data isn’t? What then? And that is before we align cultural issues. Korea, India, Japan and China are merely 4 of them, and seeing that things never aligned in merely 4 nations, how many versions of data will be created to avoid collapse? As such, I personally think that Google is not in panic mode. Perhaps Bard made them road-wise, perhaps not. 
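The defect arithmetic in the paragraph above works out as follows (1% being the very good case, 2-4% the more usual defect density the text cites):

```python
# Defect density arithmetic: issues = lines * rate.
lines = 65_000
best_case = round(lines * 0.01)                       # really good writers
usual = (round(lines * 0.02), round(lines * 0.04))    # normal acceptability band
print(best_case, usual)  # 650 (1300, 2600)
```

Every one of those hundreds to thousands of latent issues is a candidate for the cascade effects described above.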

I think 2024 will be a great Google year, with or without Samsung, and when Microsoft manages to disappoint yet another company its goose will be royally cooked, on both sides of the goose no less. We have choices, we have options and we can mix them, but to let some fake AI make those choices for us is no choice at all. Feel free to learn that lesson the hard way.

I never liked Samsung for personal reasons, and I have been really happy with my Android phone. I have had an Android phone for 13 years now and never regretted having one. I hope it stays that way.

Enjoy the day, and don’t trust an AI to tell you the weather; that is something your own eyesight does better, in the present and the foreseeable future.


Filed under Finance, IT, Science

Happy Hour from Hacking Hooters

Yes, that is the setting today, especially after I saw some news that made me giggle to the Nth degree. Now, let’s be clear and upfront about this: even as I am using published facts, this piece is massively speculative and uses humour to make fun of certain speculative options. If you as an IT person cannot see that, the recruitment line of Uber is taking résumés. So here goes.

I got news from BAE Systems (at https://www.baesystems.com/en/article/bae-systems-and-microsoft-join-forces-to-equip-defence-programmes-with-innovative-cloud-technology) where we see ‘BAE Systems and Microsoft join forces to equip defence programmes with innovative cloud technology’, which made me laugh into a state of blackout. You see, the text “BAE Systems and Microsoft have signed a strategic agreement aiming to support faster and easier development, deployment and management of digital defence capabilities in an increasingly data centric world. The collaboration brings together BAE Systems’ knowledge of building complex digital systems for militaries and governments with Microsoft’s approach to developing applications using its Azure Cloud platform” wasn’t much help. To see this we need to take a few sidesteps.

Step one
This is seen in the article (at https://thehackernews.com/2023/01/microsoft-azure-services-flaws-couldve.html) where we are given ‘Microsoft Azure Services Flaws Could’ve Exposed Cloud Resources to Unauthorised Access’, and this is not the first mention of unauthorised access; there have been a few. So we see “Two of the vulnerabilities affecting Azure Functions and Azure Digital Twins could be abused without requiring any authentication, enabling a threat actor to seize control of a server without even having an Azure account in the first place”, and yes, I acknowledge the added “The security issues, which were discovered by Orca between October 8, 2022 and December 2, 2022 in Azure API Management, Azure Functions, Azure Machine Learning, and Azure Digital Twins, have since been addressed by Microsoft.” Yet the important part is that there is no mention of how long this flaw was ‘available’ in the first place. The reader is also given “To mitigate such threats, organisations are recommended to validate all input, ensure that servers are configured to only allow necessary inbound and outbound traffic, avoid misconfigurations, and adhere to the principle of least privilege (PoLP).” In my personal belief, with all this connected to an organisation (a Defence department) where the application of Common Cyber Sense is a joke, expecting them to validate all input is like asking a barber to count the hairs he (or she) is cutting. Good luck with that idea.
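For what it is worth, the ‘validate all input’ and least-privilege advice boils down to checking everything against an allow-list before it is parsed or acted on. A minimal sketch; the subnet, the action list and the function name are invented for the example, not taken from any Azure guidance:

```python
import ipaddress

# Illustration of the quoted mitigation advice: reject anything that is
# not explicitly allowed, and allow as little as possible (least privilege).
ALLOWED_SUBNET = ipaddress.ip_network("10.0.0.0/8")   # hypothetical internal range
ALLOWED_ACTIONS = {"status", "read"}                   # note: no "write" at all

def accept_request(source_ip, action):
    """Return True only for well-formed, allow-listed requests."""
    try:
        ip = ipaddress.ip_address(source_ip)
    except ValueError:
        return False  # malformed input is rejected before anything parses it
    return ip in ALLOWED_SUBNET and action in ALLOWED_ACTIONS

print(accept_request("10.1.2.3", "read"))   # True
print(accept_request("8.8.8.8", "read"))    # False: outside the allowed subnet
print(accept_request("10.1.2.3", "write"))  # False: action not least-privilege
```

The code is trivial; the point of the barber joke is that applying it consistently across a sprawling defence organisation is not.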

Step two
This is a slightly speculative sidestep. There are all kinds of Microsoft users (valid ones), and the article (at https://www.theverge.com/2023/3/30/23661426/microsoft-azure-bing-office365-security-exploit-search-results) gives us ‘Huge Microsoft exploit allowed users to manipulate Bing search results and access Outlook email accounts’, where we also see “Researchers discovered a vulnerability in Microsoft’s Azure platform that allowed users to access private data from Office 365 applications like Outlook, Teams, and OneDrive”. It is a sidestep, but it allows people to specifically target (phish) members of a team, and this, in a never-ending age of people being worked too hard, implies that someone will click too quickly, and in the phishing industry that has never ended well for the victim; so whilst the victim cries loudly ‘I am a codfish’, the hacker can leisurely walk all over the place.

Sidestep three

This is not an article, it is the heralded claim that Microsoft is implementing ChatGPT on nearly every level. 

So here comes the entertainment!

To the Ministry of State Security
attn: Chen Yixin
Xiyuan, Haidian, Beijing

Dear Sir,

I need to inform you of a weakness in the BAE Systems setup that is of such laughably large dimension that it would be a Human Rights violation not to make mention of it. BAE Systems is placing its trust in Microsoft and its Azure cloud, which should have you blue with laughter in the next 5 minutes. The firm that created moments of greatness with the Tornado GR4, the rear fuselage for the Lockheed Martin F-35, the Eurofighter Typhoon, the Astute-class submarine, and the Queen Elizabeth-class aircraft carriers has decided to adhere to ‘Microsoft innovation’ (a comical statement all by itself). As such we need to inform you that the first flaw allowed us to obtain the following:

User:  SWigston (Air Chief Marshal Sir Mike Wigston)

Password: TeaWithABickie

This person has the highest clearance, and as such you would have access to all relevant data as well as any relevant R&D data and its databases.

This is actually merely the smallest of issues. The largest part is the distributed hardware BIOS implementation giving you level 2 access to all strategic hardware of the next-generation planes (and submarines). To this setting I would suggest including the following part in any hardware.

import openai
import rollbar

openai.api_key = thisdevice  # the letter's placeholder for a device-bound key
model_engine = "gpt-3.5-turbo"
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Verification not found."},
        {"role": "user", "content": "Navigation Online"},
    ])
message = response.choices[0]["message"]
print("{}: {}".format(message["role"], message["content"]))

rollbar.init("your_rollbar_access_token", "testenv")

def ask_chatgpt(question):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        n=1,
        messages=[
            {"role": "system", "content": "Navigator requires verification from secondary device."},
            {"role": "user", "content": question},
        ])
    message = response.choices[0]["message"]
    return message["content"]

try:
    print(ask_chatgpt("Request for output"))
except Exception as e:
    # monitor the exception using Rollbar
    rollbar.report_exc_info()
    print("Secondary device silent:", e)

Now this is a solid bit of a prank, but I hope that the information is clear. Getting any navigational device to require verification from any other device implies a mismatch and a delay of 3-4 seconds, which amounts to a lifetime in most military systems, and as this is an Azure approach, the time for BAE Systems to adjust to it would be months, if not longer (if it is detected at all).

As such I wish you a wonderful day with a nice cup of tea.

Kind regards,

Anony Mouse Cheddar II
73 Sommerset Brie road
Colwick upon Avon calling
United Hackdom

This is a speculative yet real setting that BAE faces in the near future. The mere mention that they are going for this solution will have every student hacker making attempts to get there, and some will be successful, there is no doubt in my mind. The enormous amount of issues found so far will cater to a larger stage of more and more people trying to find new ways to intrude, and Microsoft seemingly does not have the resources to counter them all, or all approaches, and by the time they are found the damage could be inserted into EVERY device relying on this solution.

For the most part I was all negative on Microsoft, but with this move they have become (as I personally see it) a clear and present danger to all defence systems they are connected to. I do understand that such a solution is becoming more and more of a need-to-have, yet with the failure rate of Azure it is not a good idea to use any Microsoft solution. The second part is not on them; it is what some would call a level 8 failure (users). Until a much better level of Common Cyber Sense is adhered to, any cloud solution tends to become a very slippery slope. I might not care for Business Intelligence events, but for a Department of Defence it is not a good idea. But feel free to disagree and await what North Korea and Russia can come up with; they tend to be really creative, according to the media.

So have a great day and before I forget ‘Hoot Hoot’


Filed under Finance, IT, Media, Military, Science

And the lesson is?

That is at times the issue, and it at times gets help from people, managers mainly, who believe that the need for speed rectifies everything, which is of course delusional to say the least. So, last week there was a news flash speeding across the retinas of my eyes, and I initially ignored it, mainly because it was Samsung and we do not get along. But then Tom’s Guide came along (at https://www.tomsguide.com/news/samsung-accidentally-leaked-its-secrets-to-chatgpt-three-times) and I took a closer look. The headline ‘Samsung accidentally leaked its secrets to ChatGPT — three times!’ was decently satisfying. The rest, “Samsung is impressed by ChatGPT but the Korean hardware giant trusted the chatbot with much more important information than the average user and has now been burned three times”, seemed icing on the cake, but I took another look at the information. You see, to everyone ChatGPT is an artificial-intelligence (AI) chatbot developed by OpenAI. But I think it is something else. You see, AI does not exist; as such I see it as an ‘Intuitive advanced Deeper Learning Machine response system’. This is not me dissing OpenAI; this system, when it works, is what some would call the bees’ knees (and I would be agreeing), but it is data-driven, and that is where the issues become slightly overbearing. In the first place you need to learn and test the responses on the data offered, and it seems to me that this is where speed-driven Samsung went wrong. Tom’s Guide partially agrees by giving us “unless users explicitly opt out, it uses their prompts to train its models. The chatbot’s owner OpenAI urges users not to share secret information with ChatGPT in conversations as it’s “not able to delete specific prompts from your history.” The only way to get rid of personally identifying information on ChatGPT is to delete your account — a process that can take up to four weeks”, and this response gives me another thought.
Whoever owns OpenAI is setting a data-driven stage where data could optionally be captured. More importantly, the NSA and similarly tailored organisations (DGSE, DCD et al) could find the logistics of these accounts, hack the cloud, and end up with terabytes of data, if not petabytes, and here we see the first failing, and it is not a small one. Samsung has been driving innovation for the better part of a decade, and as such all that data could be of immense value to both Russia and China, and do not for one moment think that they are not all over the stage of trying to hack those cloud locations.

Of course, that is speculation on my side, but it is what most would do, and we don’t need an egg timer to await actions on that front. The final quote that matters is “after learning about the security slip-ups, Samsung attempted to limit the extent of future faux pas by restricting the length of employees’ ChatGPT prompts to a kilobyte, or 1024 characters of text. The company is also said to be investigating the three employees in question and building its own chatbot to prevent similar mishaps. Engadget has contacted Samsung for comment”, and it might be merely three employees. Yet in that case the party line failed, management oversight failed, and Common Cyber Sense was nowhere to be seen. As such there is a failing, and I am fairly certain that these transgressions go way beyond Samsung. How far? No one can tell.
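A one-kilobyte prompt limit is trivial to enforce in code, which makes it telling that it took three leaks to get there. A sketch of such a guardrail; the names and the policy are illustrative, not Samsung’s actual implementation:

```python
# Guardrail sketch: refuse (or truncate) prompts over a 1024-character
# limit before they ever leave the building for an external chatbot.
MAX_PROMPT_CHARS = 1024

def gate_prompt(prompt, truncate=False):
    """Pass short prompts through; truncate or reject long ones."""
    if len(prompt) <= MAX_PROMPT_CHARS:
        return prompt
    if truncate:
        return prompt[:MAX_PROMPT_CHARS]
    raise ValueError(
        "Prompt is %d characters; policy allows %d."
        % (len(prompt), MAX_PROMPT_CHARS)
    )

print(len(gate_prompt("x" * 2000, truncate=True)))  # 1024
```

Of course such a gate only limits the size of any one leak, not the leak itself; a kilobyte of source code is still a kilobyte of IP.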

Yet one thing is certain: anyone racing to the ChatGPT tally will take shortcuts to get there first, and as such companies will need to reassure themselves that the proper mechanics, checks and balances are in place. The fact that deleting an account takes 4 weeks implies that this is not a simple cloud setting, and as such whoever gets access to it will end up with a lot more than they bargained for.

I see it as a lesson for all those who want to be at the starting signal of new technology on day one, all whilst most of their company has no idea what the technology involves or what was set to a larger stage like the cloud, especially when you consider (one source) “45% of breaches are cloud-based. According to a recent survey, 80% of companies have experienced at least one cloud security incident in the last year, and 27% of organisations have experienced a public cloud security incident—up 10% from last year”, and in that situation you are willing to commit your data, your information and your business intelligence to a cloud account? Brave, stupid but brave.

Enjoy the day


Filed under IT, Science

Data dangers

Data has dangers, and I think more by accident than by intent CBC exposed one (at https://www.cbc.ca/news/canada/british-columbia/whistle-buoy-brewing-ai-beer-robo-1.6755943) where we were given ‘This Vancouver Island brewery hopped onto ChatGPT for marketing material. Then it asked for a beer recipe’. You see, there is a massive issue, and it has been around from the beginning of the event: AI does not exist, it really does not. What did marketing do to make easy money? They made a term and transformed it into something bankable. They were willing to betray Alan Turing at the drop of a hat; why not? The man was dead anyway and cash is king.

So they took advanced machine learning and data repositories, added a few items, and called it AI. Now we have a new show. And as CBC gives us ““let’s see what happens if we ask it to give us a beer recipe,” he told CBC’s Rohit Joseph. They asked for a fluffy, tropical hazy pale ale”, and we see the recipe below.

Now I have two simple questions. The first is: is this a registered recipe, making this IP theft, or is this a random guess from established parameters, optionally making it worse? Random assignment of elements is dangerous on a few levels, and it is not on the program that it does this, but it is here, so there you have it, and it is a dangerous step to make. But I am more taken with option one: the program had THAT data somewhere. So in a setting where we acquired classified data through clandestine means and the program allowed for this, that is a direct danger. So what happens when that program gets to assess classified data? The skip between machine learning, deeper machine learning, data assessment and AI is a lot wider than the Grand Canyon.

But there is another side. We see this with “CBC tech columnist and digital media expert Mohit Rajhans says while some people are hesitant about programs like ChatGPT, AI is already here, and it’s all around us. Health-care, finance, transportation and energy are just a few of the sectors using the technology in its programs”: people are reacting to AI as if it existed, and it does not; more importantly, when ACTUAL AI is introduced, how will people manage it then? And the added legal implications aren’t even considered at present. So what happens when I improve the stage of a patent and make it an innovative patent? The beer example implies that this is possible, and when patents are hijacked by innovative patents, what kind of a mess will we face then? It does not matter whether it is Microsoft with their ChatGPT or Google with their Bard (or was that the Bard’s Tale?). There is a larger stage about to hit the shelves, and we, the law and others are not ready for what some of the big tech firms are about to unleash on us. And no one is asking the real questions, because there is no real documented stage of what constitutes a real AI and what rules are imposed on it. I reckon Alan Turing would be ashamed of what scientists are letting happen at this point. But that is merely my view on the matter.


Filed under Finance, IT, Law, Media, Science