Tag Archives: deeper machine learning

Folly and opportunity

Yup, a setting that has both. You see, yesterday I offered the quote “I made mention of Deeper Machine Learning. This is awesome, it is not AI (AI does not yet exist) but it got me thinking. You see, we now see mention of AI in construction. This is about to go bad, really bad and trusting these buildings will become folly soon enough. I will try to explain that soon enough” and that soon is now. To see this we need to make a few sidesteps, but it will be clear soon enough. For this I selected ‘Building a smarter future: The impact of big data and AI in construction’ (at https://www.pbctoday.co.uk/news/digital-construction-news/big-data-and-ai-in-construction-trimble/132005/). There are several sources, but this one got a few things really right and that matters to me. They give you “Because computers can be programmed to analyse questions and situations using thousands of parameters in the time it takes most of us to type them in, they’re an incredible tool that we can use to do complex calculations in a fraction of the time it takes any human, and because they approach every situation with logic, they can make the most rational decisions even when we can’t. Artificial intelligence in construction simply takes that to the next level, applying machine learning, which allows those same computers to learn from situations they’ve encountered before and to adjust their results accordingly.” I do not fully agree, but they give a better explanation than most others, and they got the big one right by giving us ‘applying machine learning’; this is correct. 

Why is this what?
That is the setting. To see this I will need to take you on a little time travel, that is, after you realise that machine learning depends on data, loads of it. But in all this the right category is also important. We are about to overlap best practice and best results onto the cheaper way, the corner-cutting way. We might recall a movie like The Towering Inferno (1974), which was based on two books, The Glass Inferno and The Tower. In the movie we see the dastardly electrical engineer who cut corners (played by Richard Chamberlain) and the architect played by Paul Newman. There we see the little exchange in which the electrical engineer Roger Simmons insists he kept to building codes and that the demands of the architect Doug Roberts were outlandish and cost-driving, and fair enough, the building burns down on opening night.

Children of Mediocrates
The previous one was a story, fiction. But reality is not. In the 90’s captains of industry shook hands with politicians and a lack of drive was introduced, almost like the philosopher Mediocrates who introduced a new life lesson: ‘Meh, good enough’. I was actually in some of those meetings where we were told: “What if the bar for excellence is not 100%, but 80%? How bad is that when it is still really good? How much easier is it to build your bonus when we expect an 80% line?” I was there, I heard it all and I was told to adhere to it all. And yes, the bonus came easier for me, and I was merely in customer service, but it felt wrong. 

Nowadays
So back to today, when we look at the application of what some call AI (a wrong term). The data it relies on cannot tell the difference, because best practice and cutting corners look like the same thing to it, so it will produce flawed recommendations, and the larger folly is that the people in control of that data will not distinguish between the two fronts either. They are too young to tell, or they cannot tell the difference, because those who filled their pockets are no longer around. It is a recipe for disaster, and when was the last time a construction disaster went without casualties? 
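To make that concrete, here is a minimal, purely hypothetical sketch (the features, the numbers and the use of scikit-learn are my assumptions, not a real construction dataset): when the records never encode whether corners were cut, a model scoring those records cannot separate best practice from the cheap way.

```python
# Hypothetical sketch: a model scoring construction options from historical
# records. The records never encode whether corners were cut, so a best
# practice build and a corner-cut build collapse into near-identical rows.
from sklearn.tree import DecisionTreeRegressor

# Features: [material_cost, build_time_weeks, floor_area_m2]
X = [
    [100, 52, 5000],   # built to best practice
    [ 70, 40, 5000],   # corners cut (think RAAC), but the record looks fine
    [110, 55, 6000],   # built to best practice
    [ 75, 41, 6000],   # corners cut
]
# Target: short-term "project success" taken from the record; all look fine.
y = [0.95, 0.94, 0.96, 0.93]

model = DecisionTreeRegressor().fit(X, y)

# Both builds score essentially the same, so any cost-aware selection on top
# of this will recommend the cheaper, corner-cut option every time.
print(model.predict([[70, 40, 5000], [100, 52, 5000]]))
```

That is the recommendation engine some will call AI, and it will happily point at the cheap path because nothing in its data says otherwise.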

This is the setting I see coming, and there is also an opportunity. You see, those cutting corners did not protect the original path. As such, these patents and IP points are now open and unprotected, and these options are there for the clever people to create new innovation patents based on the open original patents, the ones the corner-cutting people let be, and there should be a fair amount of them all over the field. This is merely because best practice was too expensive for them, and now those options are open. An example here might be Reinforced Autoclaved Aerated Concrete (RAAC). We are now seeing all the issues and the hundreds of buildings that have it. It was used widely from the 1950s into the 1990s, making the timeline fit. And now we see “Concerns were amplified in 2023 following reports of an earlier roofing collapse at a British primary school, which fell without warning in 2018”. Now, one does not mean the other, but there is a premise that fits, and as such we see the larger danger. Consider that this all gained popularity in the 50’s. So how many new patents were created based on this idea, and what was left behind and unprotected? I will let you do the math, but whoever has those innovation patents will have the option to fill their pockets with the best-practice approach whilst too many are merely in it to make a buck. As such, the folly of hiding behind AI is about to hit a lot of people squarely in the face, all whilst the clever people will be able to turn a coin, as they have the patents and they will be the only player to be considered soon enough.

Hiding behind hype words suddenly gives others a chance to become serious players where the big boys never wanted them. How is that for poetic justice?

Enjoy the day, most of the week is still in front of you.

 


Filed under Finance, IT, Politics, Science

Eric Winter is a god

Yup, we are going there. It might not be correct, but that is where the evidence is leading us. You see, I got hooked on The Rookie and watched seasons one through four in a week. Yet the name Eric Winter was bugging me and I did not know why. The reason was simple. He also starred in the PlayStation game ‘Beyond: Two Souls’, which I played in 2013. I liked that game and his name stuck somehow. Yet when I looked up his name, Google gave me a filmography that did not add up.

This got me curious: two of the movies listed I had seen, and Eric would have been too young to be in them, and there is the evidence, presented by Google. Eric Winter, born on July 17th 1976, played alongside Barbra Streisand four years before he was born. Evidence of godhood. 

And when we look at the character list, there he is. 

Yet when we look at a real movie reference like IMDb.com, we get a different name entirely. 

Yes, that is the real person who was in the movie. We can write this up as a simple error, but that is not the path we are treading. You see, people are all about AI and ChatGPT, but the real part is that AI does not exist (not yet anyway). This is machine learning and deeper machine learning, and it is prone to HUMAN error. If there is only a 1% error rate and we are looking at about 500,000 movies made, that implies that the movie reference alone will contain 5,000 errors. Now consider this on data of all kinds and you might start to see the picture take shape. When it comes to financial data and your advisor is not Sam Bankman-Fried but Samuel Brokeman-Fries (a fast-food employee), how secure are your funds then? To be honest, whenever I see some AI reference I get a little pissed off. AI does not exist; it was called into existence by salespeople too cheap and too lazy to do their job and explain Deeper Machine Learning to people (my view on the matter), and things do not end here. One source gives us “The primary problem is that while the answers that ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce,” another source gives us issues with capacity, plagiarism and cheating, racism, sexism, and bias, as well as accuracy problems and the shady way it was trained. That is the kicker. An AI would not need to be trained; it would compare the actor’s date of birth with the release date of the movie, making The Changeling and What’s Up, Doc? fall into the net of inaccuracy. This is not happening, and the people behind ChatGPT are happy to point at you for handing them inaccurate data, but that is the point of an AI and its shallow circuits: to find the inaccuracies and determine the proper result (like a movie list without these two mentions). 
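That kind of check is trivial to write. Here is a minimal sketch of it (the credits list and the minimum working age are illustrative assumptions, not real reference data):

```python
from datetime import date

# Illustrative only: flag filmography entries whose release year makes the
# credit impossible (or implausible) for this actor.
ERIC_WINTER_BORN = date(1976, 7, 17)

credits = [
    ("What's Up, Doc?", 1972),   # released four years before he was born
    ("The Changeling", 1980),    # he would have been about four years old
    ("The Rookie", 2018),        # plausible
]

def implausible(year: int, born: date, min_age: int = 5) -> bool:
    # Assumed rule of thumb: an on-screen credit before roughly age five
    # deserves a human look.
    return year < born.year + min_age

for title, year in credits:
    if implausible(year, ERIC_WINTER_BORN):
        print(f"Flag for review: {title} ({year})")
```

A system that cannot even run this kind of cross-check on its own data should not be sold as intelligence.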

And now we get the source Digital Trends (at https://www.digitaltrends.com/computing/the-6-biggest-problems-with-chatgpt-right-now/) who gave us “ChatGPT is based on a constantly learning algorithm that not only scrapes information from the internet but also gathers corrections based on user interaction. However, a Time investigative report uncovered that OpenAI utilised a team in Kenya in order to train the chatbot against disturbing content, including child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest. According to the report, OpenAI worked with the San Francisco firm, Sama, which outsourced the task to its four-person team in Kenya to label various content as offensive. For their efforts, the employees were paid $2 per hour.” I have done data cleaning for years and I can tell you that I cost a lot more than $2 per hour. Accuracy and cutting costs: give me one real case where that combination actually worked. Now, the error at Google was a funny one, a bit like Melissa O’Neil (a real Canadian) telling Eric Winter that she had feelings for him, punking him in an awesome way. We can see that this is a simple error, but these are the errors that systems like ChatGPT face too, and as Microsoft stages this in Azure over time (it already seems to be), the people employing systems like ChatGPT will get into a massive amount of trouble. It might be speculative, but consider the evidence out there. Consider the errors that you face on a regular basis and consider how highly paid accountants and marketeers lose their jobs over rounding errors. You really want to rely on a $2-per-hour person to keep your data clean? For this, merely look at the ABC article of June 9th 2023, which gave us ‘Lawyers in the United States blame ChatGPT for tricking them into citing fake court cases’. Accuracy, anyone? These were not merely fake court cases; they were court cases actually invented by the artificial intelligence-powered chatbot. 

In the end I liked my version better: Eric Winter is a god. Not as accurate as reality, but more easily swallowed by all who read it, and it is the funny event that gets you through the week. 

Have a fun day.


Filed under Finance, IT, Science

Prototyping rhymes with dotty

This is the setting we face when we see ‘ChatGPT: US lawyer admits using AI for case research’ (at https://www.bbc.com/news/world-us-canada-65735769). You see, as I have stated before, AI does not yet exist. Whatever we have now is data driven, unverified-data driven no less, so even in machine learning and even deeper machine learning, data is key. So when I read “A judge said the court was faced with an “unprecedented circumstance” after a filing was found to reference example legal cases that did not exist.” I see a much larger failing. You might see it too when you read “The original case involved a man suing an airline over an alleged personal injury. His legal team submitted a brief that cited several previous court cases in an attempt to prove, using precedent, why the case should move forward. But the airline’s lawyers later wrote to the judge to say they could not find several of the cases that were referenced in the brief.” You see, a case reference looks like ‘12-10576 – Worlds, Inc. v. Activision Blizzard, Inc. et al’. This is not new; that case has been around for over a decade, so when we take note of “the airline’s lawyers later wrote to the judge to say they could not find several of the cases” we can tell that the legal team of the man is screwed. You see, they were unprepared and as such the airline wins. A simple setting, not an unprecedented circumstance. The legal team did not do its job and the man could sue his own legal team now. Then there is “Mr Schwartz added that he “greatly regrets” relying on the chatbot, which he said he had never used for legal research before and was “unaware that its content could be false”.” The joke is close to complete. You see, a law student learns in his (or her) first semester which sources to use. I learned that AustLII and Jade were the good sources, as well as a few others. The US probably has other sources to check. As such, relying on ChatGPT is massively stupid. It does not have any record of courts, or better stated, ChatGPT would need to have the data on EVERY court case in the US, and the people who do have that data are not handing it out. It is their IP, their value. And until ChatGPT gets all that data it cannot function. The fact that it relied on non-existing court cases implies that the data is flawed, unverified and not fit for anything, like any software solution 2-5 years before it hits Alpha status. And that legal team is not done with the BS paragraph. We see that with “He has vowed to never use AI to “supplement” his legal research in future “without absolute verification of its authenticity”.” Why is it BS? He used ‘supplement’ in the first part, which implies he had more sources, and the second part is clear: AI does not (yet) exist. It is sales hype for lazy salespeople who cannot sell Machine Learning and Deeper Machine Learning. 

And the screw ups kept on coming. With “Screenshots attached to the filing appear to show a conversation between Mr Schwarz and ChatGPT. “Is varghese a real case,” reads one message, referencing Varghese v. China Southern Airlines Co Ltd, one of the cases that no other lawyer could find. ChatGPT responds that yes, it is – prompting “S” to ask: “What is your source”.

After “double checking”, ChatGPT responds again that the case is real and can be found on legal reference databases such as LexisNexis and Westlaw.” The natural question is the verification part: check Westlaw and LexisNexis, which are real and good sources. Either would spew out the links for searches like ‘Varghese’ or ‘Varghese v. China Southern Airlines Co Ltd’, with saved links and printed results. Any first-year law student could get you that. It seems that this was not done. This is not on ChatGPT; this is on lazy researchers not doing their job, and that is clearly in the limelight here. 
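The verification that was skipped is not hard either. Here is a hypothetical sketch of it; the ‘verified’ set stands in for an export from a real database like Westlaw or LexisNexis, and both lists below are illustrative, not real records:

```python
# Hypothetical pre-filing check: every citation in the brief must appear in
# records pulled from a real legal database before the brief goes anywhere.
verified_cases = {
    "12-10576 - Worlds, Inc. v. Activision Blizzard, Inc. et al",
}

cited_in_brief = [
    "12-10576 - Worlds, Inc. v. Activision Blizzard, Inc. et al",
    "Varghese v. China Southern Airlines Co Ltd",   # invented by the chatbot
]

for citation in cited_in_brief:
    if citation in verified_cases:
        print(f"Verified: {citation}")
    else:
        print(f"NOT FOUND, do not file: {citation}")
```

Any first-year law student does this with a search box instead of a script, but the principle is the same.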

So when we get to “Both lawyers, who work for the firm Levidow, Levidow & Oberman, have been ordered to explain why they should not be disciplined at an 8 June hearing.” I merely wonder whether they still have a job after that, and I reckon it is plainly clear that no one will ever hire them again. 

So how does prototyping rhyme with dotty? It does not, but if you rely on ChatGPT you should have seen that coming a mile away. 

Enjoy your first working day after the weekend.


Filed under IT, Law, Media, Science

Looky looky

It is always nice to go to bed, listen to music and dream away. That is, until this flipping brain of mine gets a new idea. In this case it is not new IP, but a new setting for a group of people. You see, during lockdown I got hooked on walk videos. It was a way to see places I had never visited before, it is one way to get around and, weirdly enough, these walk videos are cool. You see more than you usually do (especially in London). Most of them are actually quite good, a few need tinkering (like music that is not so loud), but for the most part they are a decent experience.

Then I thought: what if GoPro makes a change, offering a new stage? That got me going. You see, most walks are filmed on a stick, decent but intense for the filming party. So we can set the camera on a shoulder mount, a chest mount, or a helmet mount. Yet what is being filmed? So what happens if we have something like Google Glasses and the left (or right) eye shows what the camera sees? We get all kinds of degrees of filming, and if we want to ignore it, we merely close that eye for a moment. I am surprised that GoPro has not considered it, or perhaps they did. Consider that the filmer now has BOTH hands free and can hold something towards the camera; the filming agent can do more and move more freely. Consider that it works with a holder, but there is a need (in many cases) to have both hands available. And perhaps there is a need for both: the option to use one hand for precision and a gooseneck mount to keep both hands free. The interesting part is that there is no setting to get the image on something like Google Glasses and that is a shame. Was I the first to think of it? It seems weird with all the city walks out there on YouTube, but there you have it.

In that light, I was considering revisiting the IP I had for a next Watch Dogs, one with a difference (every IP creator will tell you that part), but I reckon that is a stage we will visit again soon enough; it involves Google Glasses and another setting that I will revisit. Just like the stage of combining deeper machine learning with a lens (or Google Glasses): a camera lens that offers direct translations, and the fun part is that we can select whether that is pushed through to the film or merely seen by us. Now consider filming in Japan with machine learning and deeper machine learning auto-translating ANY sign it sees (a rough sketch of that pipeline follows below). Languages that we do not know will no longer stop us, it will tell the filmmaker where they are, and consider linking that to one lens in Google Glasses that overlays the map. Is that out yet? I never saw it and there are all kinds of needs for that part. What you see is what you know, if you know the language. Just a thought at 01:17. I need a hobby, I really do!
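For those wondering what that sign-translating pipeline would look like, here is a rough sketch (it assumes OpenCV, Tesseract with the Japanese language pack, and a placeholder translate_to_english function; none of this is a GoPro or Google Glass API):

```python
import cv2                 # camera frames
import pytesseract         # OCR; needs the Tesseract binary and 'jpn' data installed

def translate_to_english(text: str) -> str:
    # Placeholder for whichever translation service gets plugged in here;
    # this function is hypothetical, not a real API.
    return f"[EN] {text}"

cap = cv2.VideoCapture(0)          # stand-in for the glasses / GoPro feed
ok, frame = cap.read()
if ok:
    # Recognise any Japanese sign text in the frame.
    sign_text = pytesseract.image_to_string(frame, lang="jpn").strip()
    if sign_text:
        # In the imagined product this line goes to one lens of the glasses
        # and, optionally, gets burned into the recorded video.
        print("Overlay:", translate_to_english(sign_text))
cap.release()
```

Nothing exotic, just machine learning and deeper machine learning pointed at the right problem.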


Filed under IT, Media, Science