Tag Archives: Borland

Speculating on language

That was the setting I found myself in. The specifics concern an actual AI language, not the ones we have, but the one we need to create. You see, we might be getting close to trinary chips. As I personally see it, there is no AI yet, as the settings aren’t ready for it (I’ve said this before), but we might be getting close. The Dutch physicist has had a decade (or close to it) to set the premise of the proven Epsilon particle in a more robust setting, and that sets the larger premise that an actual AI might become a reality (we’re still at least a decade away). In that setting we need to reconsider the programming language.

Binary    Trinary
NULL      NULL
TRUE      TRUE
FALSE     FALSE
          BOTH

We are in a binary digital world at present and it has served our purpose, but for an actual AI it does not suffice. You can believe the wannabes going on about ‘we can do this, we can do that’, and it will come up short. Wannabes who hide behind data tables within data table solutions, and as far as I saw it, only Oracle ever got that setting to work correctly; the rest merely graze on that premise. To explain this in the simplest of ways: any intelligence does not hide behind black or white. It is a malleable setting of grey, as such both colours are required, and that is where trinary systems, with both true and false activated, will create the setting an AI needs. When you realise this, you see the bungles the business world needs to hide behind. They will sell these programmers (or engineers) down the drain at a moment’s notice (they will refer to it as corporate restructuring), and that will put thousands out of a job and land the largest data providers in class action suits up the wazoo.
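To make the table and the BOTH setting a little more concrete, here is a minimal sketch in C# (the language I would extend into C#+). C# already has the three-valued bool? (true, false, null); the Trit type and the truth rules below are my own illustrative assumptions for the fourth value, not an existing language or hardware feature.

```csharp
// A minimal sketch of the four-valued logic from the table above.
// "Trit" and these truth rules are illustrative assumptions only.
public enum Trit { Null, True, False, Both }

public static class TritLogic
{
    // AND: Null poisons everything, False dominates, Both keeps the grey area open.
    public static Trit And(Trit a, Trit b)
    {
        if (a == Trit.Null || b == Trit.Null) return Trit.Null;
        if (a == Trit.False || b == Trit.False) return Trit.False;
        if (a == Trit.Both || b == Trit.Both) return Trit.Both;
        return Trit.True;
    }

    // OR: Null poisons everything, True dominates, Both keeps the grey area open.
    public static Trit Or(Trit a, Trit b)
    {
        if (a == Trit.Null || b == Trit.Null) return Trit.Null;
        if (a == Trit.True || b == Trit.True) return Trit.True;
        if (a == Trit.Both || b == Trit.Both) return Trit.Both;
        return Trit.False;
    }
}

// Example: TritLogic.And(Trit.True, Trit.Both) yields Trit.Both,
// the grey answer a binary system simply cannot give.
```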

When you see what I figured out a decade ago, the entire “AI” field is driven to nothing short of collapse. 

My mind kept working in the background on the solutions it had figured out. So as I see it, something like C#+ is required: an extended version of C# with LISP libraries (the IBM version), as the only other one I had was a Borland program and I don’t think it will make the grade. As I personally see it (with my lack of knowledge), LISP might be a better fit to connect to C#. You see, this is the next step. ‘Upgrading’ C# is one setting, but LISP has the connectors required to make it work, and why reinvent the wheel? And when the greedy salespeople figure out what they missed over the last decade (the larger part of it), they will come with statements that it was a work in progress and that they are still addressing certain items. Weird, I got there a decade ago and they didn’t think I was the right material. As such you can file their versions in a folder called ‘What makes the grass grow in Texas?’ (me having a silly grin now). I still haven’t figured it all out, but with the trinary chip we will be on the verge of getting an actual AI working. Alas, the chip comes long after we bid farewell to Alan Turing; he would have been delighted to see that moment happen. The setting of gradual verification, data getting verified on the fly, will be the next best thing, and when the processor gives us grey scales that matter, we will see contemplated ideas that will drive any actual AI system forward. It will not be pretty at the start. I reckon that IBM, Google and Amazon will drive this, and there is a chance that they will all unite with Adobe to make new strides. You think I am kidding, but I am not. You see, I refer to greyscales on purpose. The setting of true and false is only partially true. The combination of the BOTH approach will drive solutions, and the idea of both being replaced through channels of grey (both true and false) will at first be a hindrance, but when you translate this to greyscales, the Adobe approach will start making sense. Adobe excels in this field, and when we set the ‘colourful’ approach of both True and False, we get a new dimension; Adobe has worked in that setting for decades, long before the trinary idea became a reality.
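And to show what I mean with those greyscales, a speculative sketch of my own: a truth value that lives between True and False. The fuzzy-style AND and OR below are my assumed reading of the ‘channels of grey’, not an existing C#+ or Adobe feature.

```csharp
using System;

// A speculative sketch: the grey channel between True and False as a
// degree of truth in [0, 1]. This mapping is my own illustration of
// the greyscale idea, nothing official.
public readonly struct Grey
{
    public double Level { get; }   // 0.0 = False, 1.0 = True, anything between = grey

    public Grey(double level) => Level = Math.Clamp(level, 0.0, 1.0);

    public bool IsBoth => Level > 0.0 && Level < 1.0;   // both true and false at once

    // Fuzzy-style connectives: AND as the darker grey, OR as the lighter grey.
    public static Grey operator &(Grey a, Grey b) => new Grey(Math.Min(a.Level, b.Level));
    public static Grey operator |(Grey a, Grey b) => new Grey(Math.Max(a.Level, b.Level));
}

// Example: new Grey(0.4) & new Grey(0.9) gives Level 0.4, still both true and false.
```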

So is this a figment of my imagination?
It is a fair question. As I said, there is a lot of speculation throughout this and, as I see it, there is a decent reason to doubt me. I will not deny this, but those deep into DML and LLMs will see that I am speaking true, not false, and that is the start of the next cycle. A setting where LISP is adjusted for trinary chips will be the larger concern, and I got to that point at least half a decade ago. So when Google and Amazon figure out what to do, we get a new dance floor, a boxing square where the light influences the shadows, and that will lead to the next iteration of this solution. Consider one of two flawed visions. One is that a fourth dimension casts a 3D shadow; by illuminating the concept of these multiple 3D shadows, the computer can work out 4D data constraints. The image of a dot was the shadow of a line, the image of a 2D shape was the shadow of a 3D object, and so on. When the AI gets that consideration (this is a flaky example, but it is the one in my mind) and it can see the multitude of 3D images, it can figure out the truth of the 4D datasets and it can actually fill in the blanks. Not the setting that NLP gives us now, like a chess computer that has all the games of history in its mind so it can figure out with some precision what comes next; that concept can be defeated by making what some chess players call ‘a silly move’. Now we are in the setting of more, as BOTH allows for more, and the stage can be illustrated by an actual AI figuring out what is really likely to be there. Not guesswork; the different images create a setting of nonrepudiation to a larger degree, as the image could only have been produced by what must have been there in the first place. And that is a massive calculation. Do not think it can be denied; the data that N 3D images give us sets the larger solution to a given fact. It is the result of 3 seconds of calculations, a result the brain could not work out in months.
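A toy illustration of that shadow idea follows; the drop-one-axis projection and the names below are my own simplification of the analogy and nothing more.

```csharp
using System;

// Toy version of the shadow analogy: a 4D point casts 3D "shadows"
// (projections), and together the shadows pin the original point down.
class ShadowDemo
{
    // Drop the axis at index 'drop' to get a 3D shadow of a 4D point.
    static double[] Shadow(double[] p4, int drop)
    {
        var s = new double[3];
        for (int i = 0, j = 0; i < 4; i++)
            if (i != drop) s[j++] = p4[i];
        return s;
    }

    static void Main()
    {
        double[] point = { 1.5, -2.0, 3.25, 0.75 };

        // Two shadows that between them hold every coordinate.
        double[] shadowA = Shadow(point, 3);   // keeps x, y, z
        double[] shadowB = Shadow(point, 0);   // keeps y, z, w

        // Rebuild the 4D point: x from shadow A, w from shadow B,
        // y and z confirmed by both (the nonrepudiation angle).
        double[] rebuilt = { shadowA[0], shadowA[1], shadowA[2], shadowB[2] };

        Console.WriteLine(string.Join(", ", rebuilt));   // 1.5, -2, 3.25, 0.75
    }
}
```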

It is the next step. At that point the computer will not take an educated guess, it will figure out what the singular solution would be. The setting that the added BOTH allows for. 

A proud setting as I might actually still be alive to see this reality come to pass. I doubt I will be alive to see the actual emergence of an Artificial Intelligence, but the start on that track was made in my lifetime. And with the other (unmentioned) fact, I am feeling pretty proud today. And it isn’t even lunchtime yet. Go figure.

Have a great day today.


Filed under Finance, IT, Science

Delphi in a name

Yup, we are talking about Oracle, not Borland. And whenever I hear Oracle I tend to add the ‘of Delphi’ automatically. It is a Pavlovian thing. This is nothing negative about Oracle; I wanted to join their ranks in the 90’s, and beyond the millennium a few times too. My origin setting was as a database programmer (I earned my stripes with Clipper, the Nantucket version). I think it was the very first program I shelled out $650 (Dfl. 1,200) for, and I learned a lot through Clipper. I also got the Clipper notes (Norton Notes) and these two kept me in my apartment (on a desk chair) for weeks and weeks at a time. I relish those happy days. Then of course I got into technical support and customer care through a precursor of IBM and my life at that point was pretty complete. I miss those days and I still think fondly of them. Not so much the upper ranks of that company with their political games, but then I was never a political player.

So when I saw ‘Oracle commits to invest $14bn in Saudi Arabia over next 10 years’ (at https://www.datacenterdynamics.com/en/news/oracle-commits-to-invest-14bn-in-saudi-arabia-over-next-10-years/), my mind started swirling and twirling (sorry JK Rowling) and my creative logging started to set new parameters.

You see, we are given “Oracle has committed to investing $14 billion in Saudi Arabia over the next 10 years to expand its cloud and AI offerings in the region. The plans were announced by the company on May 13, and in the wake of President Donald Trump’s visit to the Kingdom”. This implies Technical Support, Customer Care and Training, things I can do (all three), and I have had well over a decade of experience in these sections. As such I keep my eyes open for positions in either Riyadh, Mississauga or Abu Dhabi. I reckon that the investments are not just for Saudi Arabia; they are all spent in Saudi Arabia, but people will also be needed in Abu Dhabi, because no one walks away from ADNOC, and with ARAMCO in Saudi Arabia a secondary call centre would be needed in Abu Dhabi. That centre too would have all three settings, and beyond that I reckon a location will be cheaper in the heart of ADNOC than in Dubai, so there.

When we see “Our expanded partnership with the Kingdom will create new opportunities for its economy, deliver better health outcomes for its people, and fortify its alliance with the United States, which will create a ripple effect of peace and prosperity across the Middle East and around the world.”, the words “a ripple effect of peace and prosperity across the Middle East” merely imply (not confirm) the setting I see. You see, it makes sense to do this, but it requires knowledge of Oracle policies (and I don’t know those).

So when we see “Oracle has two existing cloud regions in Saudi Arabia – Saudi Arabia West, located in Jeddah, and Saudi Arabia Central in Riyadh. The former was launched in 2020, the latter launched in 2024, and is hosted in a Center3 data center. The company has been planning a third in the upcoming Neom City since October 2021, which remains listed on Oracle’s website as “coming soon.””, someone would think that another cloud, a UAE cloud, should be there as well. It is merely not mentioned at this stage, but ADNOC is too big to walk away from and Microsoft has dropped the ball too many times. There is a setting that implies that IBM and/or AWS are already there, but that gives the larger setting that ADNOC becomes dependent on one supplier, and they are as smart as they come. So I am betting that Oracle has that region (as well as Dubai) in mind when we consider DAMAC (valued at US$ 595 million), with total revenue recorded by DAMAC Properties of AED 7.5 billion (2017), and they are not the only ones. There is also Emaar Properties, which is said to be the biggest of them all, and those are the kind of clients Oracle really likes to keep happy. As such I saw the stage evolve, even though they are already there, and in January 2025 we were given ‘Oracle to increase Abu Dhabi investment five-fold’; as such I think that there might be a new need to seek employment with Oracle. Now add to that the quote “Earlier this month, the Abu Dhabi government put out a call for the development of a single multi-cloud system that will serve more than 40 government entities” and you’ll see that there might be space for me too, either in Abu Dhabi or in Mississauga, and the two cover a little over 20 hours a day in a 24/7 setting. The nice part is that it takes time to get people up to speed, so I might have an advantage (merely a slight one).

So as I am about to dream the day away on this rainy Sunday, I see the cogs of industry revolve around the settings of the world and I keep having happy thoughts.

So have a great day everyone, preferably less rainy than it is here.


Filed under Finance, IT, Media, Science

Ghost in the Deus Ex Machina

James Bridle is treating the readers of the Guardian to a spotlight event. It is a fantastic article that you must read (at https://www.theguardian.com/books/2018/jun/15/rise-of-the-machines-has-technology-evolved-beyond-our-control-?). Even as it starts with “Technology is starting to behave in intelligent and unpredictable ways that even its creators don’t understand. As machines increasingly shape global events, how can we regain control?”, I am not certain that it is correct; it is merely a very valid point of view. This setting is being pushed even further by places like Microsoft Azure, Google Cloud and AWS; we are moving into new territories and the experts required have not been schooled yet. It is (as I personally see it) the consequence of next generation programming on the framework of cloud systems that have thousands of additional unused or un-monitored parameters (read: some of them mere properties), and the scope of these systems is growing. Each developer is making their own app-box and they are working together, yet in many cases hundreds of properties are ignored, giving us weird results. There is actually (from the description James Bridle gives) an early 90’s example, which is not the same, but it illustrates the event.

A program had window settings and sometimes there would be a ghost window. There was no explanation and no one could figure out why it happened, because it did not always happen, but it could be replicated. In the end, the programmer had been lazy and had created a global variable with the identical name as a visibility property, and due to a glitch that setting got copied. When the system did a reset on the window, all but very specific properties were reset. You see, those elements were not ‘true’; they should have been either ‘true’ or ‘false’, and that was not the case. Those elements had the initial value of ‘null’, yet the reset would not allow for that, so once given a reset they would not return to the ‘null’ setting but remain holding the value they last had. It was fixed at some point, but the logic remains: a value could not return to ‘null’ unless specifically programmed. Over time these systems got to be more intelligent and that issue has not returned; such is the evolution of systems. Now it becomes a larger issue, now we have systems that are better, larger and in some cases isolated. Yet, is that always the issue? What happens when an error level surpasses two systems? Is that even possible?

Now, most people will state that I do not know what I am talking about. Yet they forget that any system is merely as stupid as its maker allows it to be, so in 2010 Sha Li and Xiaoming Li from the Dept. of Electrical and Computer Engineering at the University of Delaware gave us ‘Soft error propagation in floating-point programs‘, which gives us exactly that. You see, the abstract gives us “Recent studies have tried to address soft errors with error detection and correction techniques such as error correcting codes and redundant execution. However, these techniques come at a cost of additional storage or lower performance. In this paper, we present a different approach to address soft errors. We start from building a quantitative understanding of the error propagation in software and propose a systematic evaluation of the impact of bit flip caused by soft errors on floating-point operations“. We can translate this into ‘an option to deal with shoddy programming‘, which is not entirely wrong, but the essential truth is that hardware makers, OS designers and application makers all have their own error system, each of them much larger than any of them requires, and some overlap and some do not.

The issue is, speculatively, seen in ‘these techniques come at a cost of additional storage or lower performance‘. Now consider the greed-driven makers that do not want to sacrifice storage and will not hand over performance, not one way, not the other way, but want a system that tolerates either way. Yet this still has a level one setting (Cisco joke) that hardware is ruler, so the settings will remain, and it merely takes one third-party developer to use some specific uncontrolled error hit with automated, assumption-driven slicing and dicing to avoid storage as well as performance. Once given to the hardware, it will not forget, so now we have some speculative ‘ghost in the machine’, a mere collection of error settings and properties waiting to be interacted with. Don’t think that this is not in existence; the paper sheds light on this in part with: “some soft errors can be tolerated if the error in results is smaller than the intrinsic inaccuracy of floating-point representations or within a predefined range. We focus on analysing error propagation for floating-point arithmetic operations. Our approach is motivated by interval analysis. We model the rounding effect of floating-point numbers, which enable us to simulate and predict the error propagation for single floating-point arithmetic operations for specific soft errors. In other words, we model and simulate the relation between the bit flip rate, which is determined by soft errors in hardware, and the error of floating-point arithmetic operations“.

That I can illustrate with my earliest errors in programming (decades ago). With Borland C++ I got my first taste of programming, and I was in assumption mode when I made my first calculation, which gave in the end: 8/4=2.0000000000000003. At that point (1991) I had no clue about floating point issues; I did not realise that this was merely the machine and me not giving it the right setting. So now we have all learned that part, yet we forget that all these new systems have their own quirks and hidden settings that we basically do not comprehend, as the systems are too new.

This all interacts with an article in the Verge from January (at https://www.theverge.com/2018/1/17/16901126/google-cloud-ai-services-automl); the title ‘Google’s new cloud service lets you train your own AI tools, no coding knowledge required‘ is a bit of a giveaway. Even when we see: “Currently, only a handful of businesses in the world have access to the talent and budgets needed to fully appreciate the advancements of ML and AI. There’s a very limited number of people that can create advanced machine learning models”, it is not merely that part. Behind it are the makers of the systems and the apps that allow you to interface; that is where we see the hidden parts that will not be uncovered for perhaps years or decades. That is not a flaw from Google, or an error in their thinking. The mere realisation of ‘a long road ahead if we want to bring AI to everyone‘ is that the better programmers, the clever people and the mere wildcards who turn 180 degrees in a one-way street cannot be predicted, and there will always be one that does so, because they figured out a shortcut. Consider a sidestep.
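Before that sidestep, the floating point and bit flip part above can be made a little more tangible. The sketch below is my own illustration of the paper’s subject (ordinary rounding error, plus one simulated soft error flipping a single bit in a double), not its actual method.

```csharp
using System;

// Two effects in one small demo: the machine's ordinary rounding error,
// and a simulated "soft error" where a single bit of a double is flipped.
class SoftErrorDemo
{
    // Flip one bit of a double to mimic a soft error in hardware.
    static double FlipBit(double value, int bit)
    {
        long bits = BitConverter.DoubleToInt64Bits(value);
        bits ^= 1L << bit;
        return BitConverter.Int64BitsToDouble(bits);
    }

    static void Main()
    {
        // Rounding: the machine giving an "almost" answer.
        Console.WriteLine(0.1 + 0.2);        // 0.30000000000000004 on modern .NET

        // Soft errors: how far the error propagates depends on which bit was hit.
        double x = 2.0;
        Console.WriteLine(FlipBit(x, 0));    // 2.0000000000000004 – a harmless mantissa bit
        Console.WriteLine(FlipBit(x, 55));   // 512 – one exponent bit and the value explodes
    }
}
```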

A small sidestep

When we consider risk-based thinking and development, we tend to think in opposition, because it is not the issue of risk, or the given of opportunity. We start with the flaw that we see differently on what constitutes risk. Even as the makers all think the same, the users do not always behave that way. For this I need to go back to the late 80’s, when I discovered that certain books in the Port of Rotterdam were cooked. No one had figured it out, but I recognised one part through my Merchant Naval education: the one rule no one looked at in those days, because programmers were just not given that element. In a port there is one rule that computers could not comprehend in those days: the concept of ‘Idle Time’ cannot ever be a linear one. Once I saw that, I knew where to look. So when we get back to risk management issues, we see ‘An opportunity is a possible action that can be taken, we need to decide. So this opportunity requires we decide on taking action and that risk is something that actions enable to become an actual event to occur but is ultimately outside of your direct control‘. Now consider that risk changes with the tide at a seaport, but we forget that in opposition to a king tide there is at times also a neap tide. A ‘supermoon’ is an event that makes the low tide even lower. So now we see the risk of getting beached for up to 6 hours, because that element was forgotten. The fact that it can happen once every 18 months makes the risk low and it does not impact everyone everywhere, but that setting shows that the dangers (read: risks) of events are intensified when a clever person takes a shortcut. So when NASA gives us “The farthest point in this ellipse is called the apogee. Its closest point is the perigee. During every 27-day orbit around Earth, the Moon reaches both its apogee and perigee. Full moons can occur at any point along the Moon’s elliptical path, but when a full moon occurs at or near the perigee, it looks slightly larger and brighter than a typical full moon. That’s what the term “supermoon” refers to“, we see that the programmer needed a space monkey (or tables), and when we consider the shortcut, he merely needed them once every 18 months; in the life cycle of a program that means he had a risk 2-3 times during the lifespan of the application. So tell me, how many programmers would have taken the shortcut? Now, this is the setting we see in optional Machine Learning, with that part accepted and the pragmatic ‘Let’s keep it simple for now‘, which we all could have accepted in this. But the issue comes when we combine error flags with shortcuts.
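To show how innocent that shortcut looks in code, here is a hypothetical sketch (all names and numbers invented) of a clearance check that uses the everyday low-tide figure and quietly drops the rare perigean correction.

```csharp
using System;

// A hypothetical shortcut: the everyday low-tide figure is hard-coded
// and the rare perigean ("supermoon") correction is simply left out.
class TideShortcut
{
    const double AverageLowTideDepth = 10.0;  // metres of water at the berth, invented
    const double SupermoonExtraDrop  = 0.3;   // extra drop, roughly once every 18 months

    // The shortcut: good enough on almost every day of the year.
    static bool CanBerth(double draftMetres) =>
        draftMetres < AverageLowTideDepth;

    // The version that carries the rare case the shortcut left out.
    static bool CanBerthSafely(double draftMetres, bool perigeanTide) =>
        draftMetres < AverageLowTideDepth - (perigeanTide ? SupermoonExtraDrop : 0.0);

    static void Main()
    {
        double draft = 9.8;
        Console.WriteLine(CanBerth(draft));              // True on the average day
        Console.WriteLine(CanBerthSafely(draft, true));  // False: beached for up to 6 hours
    }
}
```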

So we get to the Guardian with two parts. The first: “Something deeply weird is occurring within these massively accelerated, opaque markets. On 6 May 2010, the Dow Jones opened lower than the previous day, falling slowly over the next few hours in response to the debt crisis in Greece. But at 2.42pm, the index started to fall rapidly. In less than five minutes, more than 600 points were wiped off the market. At its lowest point, the index was nearly 1,000 points below the previous day’s average“, the second being “In the chaos of those 25 minutes, 2bn shares, worth $56bn, changed hands. Even more worryingly, many orders were executed at what the Securities Exchange Commission called “irrational prices”: as low as a penny, or as high as $100,000. The event became known as the “flash crash”, and it is still being investigated and argued over years later“. In 8 years the algorithms and the systems have advanced and the original settings no longer exist. Yet the entire setting of error flagging and the use of elements and properties is still on the board; even as they evolved and the systems became stronger, new systems interacted with much faster and stronger hardware, changing the calculating events. So when we see “While traders might have played a longer game, the machines, faced with uncertainty, got out as quickly as possible“, they were uncaught elements in a system that was truly clever (read: had more data to work with), and as we are introduced to “Among the various HFT programs, many had hard-coded sell points: prices at which they were programmed to sell their stocks immediately. As prices started to fall, groups of programs were triggered to sell at the same time. As each waypoint was passed, the subsequent price fall triggered another set of algorithms to automatically sell their stocks, producing a feedback effect“, we get the mere realisation that the machine wins every time in a man versus machine setting, but only towards the calculations. The initial part I mentioned regarding really low tides was ignored: where the person realises that at some point the tide goes back up, no matter what, the machine never learned that part, because the ‘supermoon cycle’ was avoided due to pragmatism, and we see that in the Guardian article with: ‘Flash crashes are now a recognised feature of augmented markets, but are still poorly understood‘. That reason remains speculative, but what if it is not the software? What if there is merely one set of definitions missing, because the human factor auto-corrects for that through insight and common sense? I can relate to that by setting the ‘insight’ that a supermoon happens perhaps once every 18 months and the common sense that it returns to normal within a day. Now, are we missing out on using a neap tide as an opportunity? It is merely an opportunity if another person fails to act on such a neap tide. Yet in finance it is not merely a neap tide, it is an optional artificial wave that can change the waves when one system triggers another, and in nanoseconds we have no way of predicting it, merely, over time, the option to recognise it at best (speculatively speaking).
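The feedback effect the article describes can be shown with a toy sketch of my own; the prices and hard-coded sell points below are invented, the cascade is the point.

```csharp
using System;
using System.Collections.Generic;

// A toy cascade: several programs with hard-coded sell points, where
// each forced sale pushes the price past the next trigger.
class FlashCrashDemo
{
    static void Main()
    {
        double price = 100.0;
        var sellPoints = new List<double> { 99.0, 97.5, 96.0, 94.0 };  // one per "program"
        var triggered = new HashSet<double>();

        price -= 1.5;   // one outside shock starts the slide

        bool anySale = true;
        while (anySale)
        {
            anySale = false;
            foreach (double point in sellPoints)
            {
                if (price <= point && triggered.Add(point))
                {
                    price -= 2.0;   // the forced sale pushes the price down further
                    Console.WriteLine($"Sell point {point} hit, price now {price}");
                    anySale = true;
                }
            }
        }
    }
}
```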

We see a variation of this in the Go-game part of the article. When we see “AlphaGo played a move that stunned Sedol, placing one of its stones on the far side of the board. “That’s a very strange move,” said one commentator“, you see it opened us up to something else. So when we see “AlphaGo’s engineers developed its software by feeding a neural network millions of moves by expert Go players, and then getting it to play itself millions of times more, developing strategies that outstripped those of human players. But its own representation of those strategies is illegible: we can see the moves it made, but not how it decided to make them“, that is where I personally see the flaw. You see, it did not decide, it merely played every variation possible, the ones a person will never consider, because it played millions of games, which at 2 games a day represents some 1,370 years. The computer ‘learned’ that the human never countered ‘a weird move’ before; some can be corrected for, but that one offers opportunity, whilst at the same time exposing its opponent to additional risks. Now it is merely a simple calculation and the human loses. And as every human player lacks the ability to play for a millennium, the hardware always wins after that. The computer never learned desire, or human time constraints; as long as it has energy it never stops.

The article is amazing and showed me a few things I only partially knew, and one I never knew. It is an eye opener in many ways, because we are at the dawn of what is advanced machine learning, and as soon as quantum computing is an actual reality we will get systems with the setting that we see in the Upsilon meson (Y). Leon Lederman discovered it in 1977, so now we have a particle that is not merely off or on; it can be null, off, on or both. An essential setting for something that will be close to true AI, a new way for computers to truly surpass their makers and an optional tool to unlock the universe, or perhaps merely a clever way to integrate hardware and software on the same layer?

What I got from the article is the realisation that the entire IT industry is moving faster and faster and most people have no chance to stay up to date with it. Even when we look at publications from 2 years ago, those systems have already been surpassed by players like Google, reducing storage to a mere cent per gigabyte, and that is not all; media and entertainment are offered great leaps too. When we consider the partnership between Google and Teradici, we see another path. When we see “By moving graphics workloads away from traditional workstations, many companies are beginning to realize that the cloud provides the security and flexibility that they’re looking for“, we might not see the scope of all this. So the article (at https://connect.teradici.com/blog/evolution-in-the-media-entertainment-industry-is-underway) gives us “Cloud Access Software allows Media and Entertainment companies to securely visualize and interact with media workloads from anywhere“, which might be the ‘big load’ but it actually is not. This approach gives light to something not seen before. When we consider makers of software like Q Research Software and Tableau Software (Business Intelligence and Analytics), we see an optional shift. Under these conditions there is now a setting where a clever analyst with merely a netbook and a decent connection can set up the work frame for producing dashboards and result presentations, allowing that analyst to produce the results and presentations for the bulk of all Fortune 500 companies in a mere day, making 62% of that workforce obsolete. In addition we see: “As demonstrated at the event, the benefits of moving to the cloud for Media & Entertainment companies are endless (enhanced security, superior remote user experience, etc.). And with today’s ever-changing landscape, it’s imperative to keep up. Google and Teradici are offering solutions that will not only help companies keep up with the evolution, but to excel and reap the benefits that cloud computing has to offer“. I take it one step further: as the presentation to stakeholders and shareholders is about telling ‘a story’, the ability to do so and adjust the story on the go allows for a lot more. The question is no longer the setting of such systems; it is now reduced to correctly vetting the data used, and the moment that falls away we will get a machine-driven presentation of settings the machine need no longer comprehend, and as long as the story is accepted and swallowed, we will not question the data. A merely presented grey scale with the extremes filtered out. In the end we all signed up for this, and the status quo of big business remains stable and unchanging no matter what the economy does in the short run.

Cognitive thinking from the AI through the use of data, merely because we can no longer catch up, and in that we lose the reasoning and comprehension of data at the high levels we should have.

I wonder as a technocrat how many victims we will create in this way.

 


Filed under Finance, IT, Media, Science