Tag Archives: Data

The path we make

The path we make is often set. For one, you cannot walk the path of (fake) AI without considering the side-roads called Data Verification and Data Validation; they are intertwined. And whenever I get to Data Validation, NASA tends to be on my mind. They have been on the Data Validation path since the 70’s, long before whoever runs IBM/Microsoft/Google now took the helm; they were already looking at ways to support their validation tracks. So when I see the combination of NASA and DATA I tend to look up and take notice. So when we get ‘NASA POWER’s PRUVE Tool Streamlines Data Validation’ (at https://www.earthdata.nasa.gov/news/blog/nasa-powers-pruve-tool-streamlines-data-validation) we see “NASA’s archive of Earth observation and modeling datasets has an incredibly diverse range of uses, and assessing data uncertainty is a critical step toward ensuring the data and analyses are accurate, reliable, and trustworthy. Several factors, such as instrument calibration, atmospheric corrections, and land-surface albedo, can affect the quality of satellite data. For users working with solar and meteorological datasets, quantifying uncertainty is especially critical, as these data often inform decisions and policymaking at the community level.” This introduction leads towards two quotes: “NASA’s Prediction of Worldwide Energy Resources (POWER) project, which provides datasets from NASA in support of energy, buildings, and agroclimatology decisions, developed a tool that enables users to assess data uncertainty for selected surface variables from POWER’s data catalog with corresponding surface measurements.” And “The cloud-based tool — the PaRameter Uncertainty ViEwer (PRUVE) — makes assessing data uncertainty more straightforward for users across disciplines and skill levels. PRUVE uses surface observed site meteorological data from the National Oceanic and Atmospheric Administration (NOAA) and surface radiation data from Baseline Surface Radiation Network (BSRN) to compare against POWER-provided surface meteorological and radiation data values. This user-friendly application gives users an opportunity to quickly confirm data validation through customizable queries.”
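The core of such a comparison — model values against ground-station values — can be sketched in a few lines. This is not NASA’s code, merely a minimal illustration with hypothetical temperature numbers: the simplest uncertainty metrics are the mean bias and the root-mean-square error between the two series.

```python
import math

def compare_to_observations(modelled, observed):
    """Compare modelled values (e.g. POWER) against surface
    observations (e.g. NOAA/BSRN stations) and return simple
    uncertainty metrics: sample count, mean bias and RMSE."""
    pairs = [(m, o) for m, o in zip(modelled, observed)
             if m is not None and o is not None]  # skip missing readings
    n = len(pairs)
    bias = sum(m - o for m, o in pairs) / n
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in pairs) / n)
    return {"n": n, "bias": bias, "rmse": rmse}

# Hypothetical daily temperatures (°C): model output vs ground station.
model_t = [21.0, 22.5, 19.8, None, 24.1]
obs_t = [20.5, 23.0, 19.5, 18.9, 23.6]
print(compare_to_observations(model_t, obs_t))
```

A low bias with a high RMSE would tell the student in the example below that the model is right on average but unreliable day to day, which is exactly the kind of distinction a tool like PRUVE surfaces.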

So when we see “By creating the free, easy-to-use PRUVE tool, the POWER team instills an additional layer of trust, empowering users to tackle some of the most important long-term weather challenges facing our planet.” I feel doubt, and I do know that this is in me, not because of what is promised. But consider the setting in the example we are given: “a student wanting to install a small wind turbine for a study project at their college. They are limited by size and cost, so they need to make sure the predictions and analyses are reliable. As part of the study, they can use wind and other historical data parameters available through POWER to forecast how much energy will be produced from the wind turbine system. The student wants to limit the level of uncertainty in their prediction calculations as much as possible.”

So where is the doubt? You see, for the most part there is no doubt in the powers that ‘reside’ within NASA, but when you see these facts, why is this system not ‘coexisting’ in the Google, IBM or Microsoft clouds? This system should (read: optionally could) be adaptable to these fake AI systems to smooth over validation and reduce error in whatever data there is. And I do know that it is not that simple, but consider the settings that are lacking now; the transference of these options might also fill the coffers of NASA, and there is no way they don’t need that. And my skeptical self realizes that nearly all the data systems on the planet require additional layers of trust, but that might merely be me.

So as I see it, nearly all data systems treat data validity as some side solution, whilst there is a direct need to make checking the validity of data a main priority. So what happens when this solution gets additional layers of data validation, in part through statistics, to see whether the data behaves in some normal way? But that setting is limited when an outlier is found, so how can that be validated? Then there are multiple factors where a value should behave in certain ways, but it would not be easy. I reckon that NASA could pull it off and it would be a tool that everyone needs. I merely wonder why no-one has considered it before. Now, I do understand that it is a tall order and I might be incorrect (read: full of it), but consider how meteorological numbers are achieved. Consider that there will be error, but also a setting that reduces error in validation: a system that reiterates the data given and considers whether validation passes or fails. A system like that could be made, but the issue is the outliers. What makes an outlier valid? Because if one outlier is wrongfully ‘deleted’ the data set could become invalid. So is this possible? I think that only NASA with its expertise could make such a system a reality, making data validation more readily available. Because no matter what verification process follows, and whilst we await the coming of real AI, validation will still be a requirement in whatever data system comes to the surface of true AI. And perhaps the system will become a verification setting; both are required and neither seems to be ‘correctly’ developed at present. It is a horrible conundrum, but it requires contemplating, as such a system is needed by the time real AI comes to all our doorsteps.
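To make the ‘passes or fails’ idea concrete, here is a minimal sketch (my own, not NASA’s) of a validation pass using the 3-sigma rule. Crucially it flags suspicious values for review rather than deleting them, so the data set stays auditable; in the hypothetical series below, only the reading of 42.0 gets flagged.

```python
def validate_series(values, k=3.0):
    """Flag values lying more than k standard deviations from the
    mean (the '3-sigma rule').  Outliers are flagged for review,
    never deleted, so nothing is wrongfully removed."""
    n = len(values)
    mean = sum(values) / n
    sigma = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    flagged = [v for v in values if abs(v - mean) > k * sigma]
    return {"passed": not flagged,
            "flagged": flagged,
            "bounds": (mean - k * sigma, mean + k * sigma)}

# Twenty well-behaved readings and one suspicious one.
readings = [10.1, 9.9, 10.2, 10.0, 9.8] * 4 + [42.0]
print(validate_series(readings))
```

Note the catch the text describes: with only a handful of points, a single extreme value inflates sigma enough to mask itself, which is why deciding whether a flagged value is a valid outlier still needs context beyond the statistics.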

The additional issue I see is that in this case the PRUVE tool has all these connecting data segments, but what happens when it is a little more complex? We have all our minds set on ‘connected’ data, but it isn’t that simple at times. Consider the ludicrous pairing of height and shoe size. Now, we can understand the setting of a 4’8” person with 17” shoes (he wishes), but is it out of the realm of possibilities? There is a girl named Shae, who claims she knows one person with that description (Game of Thrones joke). So how would you be able to validate this? Perhaps other data is required to make the distinction valid, and how could such a system make validation reliable? As I see it, the biggest problem in validating data is being able to recognise the outliers. I see the deletion of outliers as a problem: the data loses reliability and verification becomes next to impossible. It’s like watching a dataset stripped of everything outside the Interquartile Range (or 3-Sigma Rule), and as I see it, whatever data you remain with makes actions like fraud detection close to impossible (unless the transgressor is extraordinarily stupid). You see, there is the ‘old’ premise that “Outliers can bias statistical estimates, causing inaccurate results in predictive models or misrepresentations in descriptive statistics.” I am not saying it is incorrect, but the absence of outliers could make the validity of that data a lot more dubious, and finding this is a real challenge. So as far as I see it, that is a job for NASA (the keyword Superman was already taken by DC Comics).
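The height-versus-shoe-size case can be made concrete: instead of judging each variable alone, validate the pair against the relation between them. A toy sketch with hypothetical numbers (heights in inches, US shoe sizes), flagging — not deleting — pairs whose residual from a fitted line is extreme:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def flag_pairs(heights, sizes, k=2.0):
    """Flag (height, shoe size) pairs whose residual from the fitted
    line exceeds k standard deviations.  Flagged, not deleted: the
    pair may still be real, it merely needs a second look."""
    a, b = fit_line(heights, sizes)
    residuals = [y - (a + b * x) for x, y in zip(heights, sizes)]
    sigma = (sum(r * r for r in residuals) / len(residuals)) ** 0.5
    return [(x, y) for x, y, r in zip(heights, sizes, residuals)
            if abs(r) > k * sigma]

# Hypothetical panel; the last pair breaks the usual relation.
heights = [60, 62, 64, 66, 68, 70, 72, 74]
sizes = [6, 6.5, 7, 8, 8.5, 9.5, 10, 17]
print(flag_pairs(heights, sizes))
```

The flagged pair is exactly the Shae case: the system cannot decide on its own whether it is an error or a real person, which is why flagging beats deleting.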

So see this as a little trip on the brainstorming front. I definitely need a hobby and I am all out of licorice.

Leave a comment

Filed under IT, Science

Prolonging the idea

Two days ago I had an idea that could set a new technology marker for Market Research. The idea is to use agentic sets and seeded data for use in MR, but it was one that had a few chinks in its armor. There would be a tremendous amount of catering to ethical borders, and as I know the people of the world, they do not tend to align themselves with ethicality (not when there are dollars involved). So my mind worked in the background on that problem, and whilst I was traversing the Iceland Ring Road (aka Route 1) around the 490 mile marker, my mind figured something out. You see, why set this to ‘everyone’ whilst Amazon, with their AWS and a population of 300-310 million active users, could be the foundation of a research pool of panelists? In short, they could ‘entice’ people to become part of an online panel. For every questionnaire they complete, they get a token (aka Amazon dime); ten dimes make for an Amazon dollar (aka a 10% discount voucher) and so on. It all depends on what the person wants to spend it on; the vouchers have a 6 month validity and the dimes have a year validity. So 10 questionnaires in a year and you have an optional setting with over 300 million active users.
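The dime-and-voucher bookkeeping described above is simple enough to sketch. Everything here is hypothetical — the class and method names are mine, and the validity windows are approximated in days:

```python
from datetime import date, timedelta

DIME_VALIDITY = timedelta(days=365)     # dimes last a year
VOUCHER_VALIDITY = timedelta(days=182)  # vouchers last roughly 6 months
DIMES_PER_VOUCHER = 10                  # ten dimes -> one 10% voucher

class PanelAccount:
    """Tracks a panelist's earned dimes and redeemed vouchers."""

    def __init__(self):
        self.dimes = []     # expiry date of each earned dime
        self.vouchers = []  # expiry date of each redeemed voucher

    def earn_dime(self, today):
        """One completed questionnaire earns one dime."""
        self.dimes.append(today + DIME_VALIDITY)

    def redeem(self, today):
        """Convert ten unexpired dimes into one discount voucher."""
        valid = sorted(d for d in self.dimes if d >= today)
        if len(valid) < DIMES_PER_VOUCHER:
            return False
        for d in valid[:DIMES_PER_VOUCHER]:  # spend oldest dimes first
            self.dimes.remove(d)
        self.vouchers.append(today + VOUCHER_VALIDITY)
        return True

acct = PanelAccount()
today = date(2025, 6, 1)
for _ in range(10):
    acct.earn_dime(today)
print(acct.redeem(today))  # ten valid dimes, so redemption succeeds
```

Spending the oldest dimes first keeps as much of the balance alive as possible, which is the behaviour a panelist would expect.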

So, when an active user becomes a participant, a unique number is created in the Amazon system and attached to that person. It is hidden from all but the Amazon ‘insiders’; not even the client sees this number. So when a list of participants is created, it all stays inside the Amazon system. So (as my humor goes) a search for American anti-alcoholics who are not pregnant and have their own liquor license reveals the panelists available. They get the OK signal and it is attached to their panel account. The researcher submits the questionnaire to the Amazon system (which is hosting options like SurveyMonkey and other solutions) and that questionnaire is set online. The researcher gets all the data with only the created Participant ID, and that is the short of it.
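That hidden participant number could, for instance, be a keyed hash of the account ID: stable for panel purposes, but opaque to the researcher and client. A sketch under that assumption — the secret, the function name and the truncation length are mine, not Amazon’s:

```python
import hashlib
import hmac

# Server-side secret known only to the platform 'insiders' (hypothetical).
PANEL_SECRET = b"rotate-me-and-never-ship-me-to-the-client"

def participant_id(account_id: str) -> str:
    """Derive a stable, opaque panel ID from an account ID.
    The researcher only ever sees this value; without the secret
    it cannot be mapped back to the underlying account."""
    digest = hmac.new(PANEL_SECRET, account_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

# Same account always yields the same panel ID...
assert participant_id("user-123") == participant_id("user-123")
# ...and different accounts yield different IDs.
assert participant_id("user-123") != participant_id("user-456")
```

Using an HMAC rather than a plain hash matters: without the key, a client cannot brute-force account IDs to re-identify panelists.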

So, the completion of the questionnaire is the participant’s signal, which gets that person the token. The data moment gets the researcher all the data, and the completion of that project moves the questionnaire into a bulk storage setting. The data delivery date is also maintained, and that sets the entire process into a complete stage. I am in favor of keeping all this elsewhere (in Amazon) for historic purposes, and that hands Amazon the keys to market research and government research, which should hand Amazon a nice additional revenue that was never on its books (as far as I know). So in a day and age where people are searching for some AI setting, I merely saw a tool to be created and handed to legacy data.

I reckon that this will give Amazon a few billions, and with over 300 million people, many of whom will jump at the chance of sacrificing mere minutes to complete questionnaires for Amazon tokens, the options are nearly limitless, or so thinks me. And this is, as I see it, a global solution, all set on archived data and the option to clean their data in the process.

Another hour, another dollar I say. But let’s face it, it is Sunday, so it is this or contemplating the sins I have been involved in, and I do not have that kind of time available, so designing new data solutions it is. Have a somewhat nice day today.

1 Comment

Filed under Finance, IT, Media, Science

The tradeoff

That is at times the question, and the BBC is introducing us to a hell of a tradeoff. The story (at https://www.bbc.com/news/articles/c0kglle0p3vo) gives us ‘Meta considers charging for ad-free Facebook and Instagram in the UK’; the setting is not really a surprise. On April 10th 2018 we were clearly given “Senator, we run ads” and we all laughed. Congress was trying to be smart over and over again and Mark Zuckerberg was showing them the ropes. Every single time. There was little or no question on how they were making money. Yet now the game changes. You see, in the past Facebook (say META) was the captain of their data vessel. A system where they had the power and the collective security of our data in hand. There was no question on any setting, and even I was under the assumption that they had firm hands on a data repository a lot larger than the vault of the Bank of England. That was until Cambridge Analytica: in March 2018 their business practices were dragged into the limelight, which also meant that Facebook no longer had control of their ship of data, which meant that their ‘treasure’ was fading.

So now we get “Facebook and Instagram owner Meta is considering a paid subscription in the UK which would remove adverts from its platforms. Under the plans, people using the social media sites could be asked to pay for an ad-free experience if they do not want their data to be tracked.” Under the guise of no advertising, the mention of paid services makes perfect sense. This is given to us via the setting of “It comes as the company agreed to stop targeting ads at a British woman last week following a protracted legal battle.” I don’t get it; the protracted legal battle seems odd, as this was the tradeoff for a free service. Is this a woke thing? You get a free service and the advertising is the price for this. As such I do not get the issue of “Guidance issued by the regulator in January states that users must be presented with a genuine free choice.” This makes some kind of sense: it is either pay for the service or suffer the consequences of advertising. And let’s be clear, the value of META relies on targeted advertising. What is the use of targeting everyone for a car ad when it includes the 26% of people who do not have a driver’s license? Add to that that these people need an income of over $45,000 to afford the $90,350 2025 Lexus RX, which is about 30%. We can (presumptively) assume that this gets us a population of about 20%-25%, so does it make any sense for Lexus to address the 100% whilst only one in four or one in five is optionally in the market? Makes no sense, does it? As such META needs to rely on as much targeted advertising as it can. And as you can see, the advertising model, known as “consent or pay”, has become increasingly popular.
And at some point they were giving people “But it reduced its prices and said it would provide a way for users not willing to pay to opt to see adverts which are “less personalised”, in response to regulatory concerns.” That is partially acceptable, but I have a different issue. You see, I foresee issues with “less personalised”. Apart from gambling sites, there is a larger concern: even as Facebook (or META) isn’t capturing some data, there is the larger fear that some will offer some services and care mainly about capturing collected data. For example, sites outside the EU (or UK). Sites in China and Russia, like their social sites, that collect this data and optionally sell it to META. You see, there is, as I currently see it, no defense against this. Like in the 90’s, when American providers made some agreement, but some of them did not qualify what happened to the data backups; those were not considered, and when they were addressed it was years later and the data had left the barn (almost everywhere).

There is a fear (a personal fear) that the so-called captains of industry have not considered (I reckon intentionally) the need to replace and protect aggregated data and aggregated results, which allows for a whole battery of additional statistics. Another personal fear is the approach to data and what they laughingly call AI. It is hard to set a stage, but I will try.

To get this I will refer to a program called SPSS (now IBM Statistics), with the so-called definition {In SPSS, cluster analysis groups similar data points into clusters, while discriminant analysis classifies data points into pre-defined groups based on predictor variables.}

So to get data points into a grouping, like income to household types, this is a cluster analysis.

And to get household types onto data points, like income to household types, is called a discriminant analysis. Now, as I personally see it (I am definitely not a statistician), if one direction is determined, the other one should always fail. It is a one-direction solution. So if a cluster analysis is proven, a discriminant analysis to income will always fail, and vice versa. Now with NIP (Near Intelligent Parsing, which is what these AI firms do) they will try to set a stage to make this work. And that is how the wheels come off the wagon and we get a whole range of weird results. But now, as people set the stage for contributing to third-party parsing and resource aggregation, I feel that a dangerous setting could evolve and there is no defense against that. As I see it, the ‘data boys’ need to isolate the chance of us being aggregated through third parties, and as I see it META needs to be isolated from that level of data ‘intrusion’. A dangerous level of data to say the least.
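For readers who want the two directions side by side, here is a toy version of each — plain Python, not SPSS: an unsupervised grouping of incomes (the cluster direction) and a nearest-centroid classifier standing in for discriminant analysis (the classification direction). All numbers are hypothetical; whether one direction ‘fails’ when the other holds is exactly what a toy like this lets you probe.

```python
def cluster_1d(values, iters=20):
    """Minimal one-dimensional k-means with two clusters: group
    incomes without any predefined labels (the 'cluster' direction)."""
    cents = [min(values), max(values)]  # crude initial centroids
    groups = [[], []]
    for _ in range(iters):
        groups = [[], []]
        for v in values:
            i = 0 if abs(v - cents[0]) <= abs(v - cents[1]) else 1
            groups[i].append(v)
        # Move each centroid to the mean of its group.
        cents = [sum(g) / len(g) if g else c
                 for g, c in zip(groups, cents)]
    return cents, groups

def classify(value, cents):
    """Nearest-centroid classification: assign a new income to a
    pre-defined group (a toy stand-in for the 'discriminant' direction)."""
    return 0 if abs(value - cents[0]) <= abs(value - cents[1]) else 1

incomes = [28, 31, 30, 95, 102, 99]  # hypothetical incomes, in thousands
cents, groups = cluster_1d(incomes)
print(cents, groups)
print(classify(40, cents), classify(90, cents))
```

Note that the classifier only works because the clustering produced separable groups first; feed it overlapping groups and its assignments become arbitrary, which is the kind of ‘weird result’ the paragraph above warns about.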

There is always a downside to a tradeoff and too many aren’t aware of the downside of that tradeoff. So have a great day and try to have a half cup of good coffee (data boys get that old premise).

Leave a comment

Filed under Finance, IT, Media, Science

The revolving question

That is at times, in almost everything, the setting. We might all go nuts about ‘mismanaging’ settings and I am to a certain degree not impervious to that. But after writing ‘The losing bet’ (at https://lawlordtobe.com/2024/12/08/the-losing-bet/) I started to mull things over. You see, people like Sheikh Tahnoon bin Zayed Al Nahyan are not stupid. But there is a dangerous calm as people are given the questions and are given ‘a kind of answer’, and Microsoft is massively adept at setting the stage to THEIR advantage, and I suddenly realised a simpler setting. When was the question asked of Microsoft ‘What is AI?’ And ‘What is the premise of what you call AI?’ With ‘What is the data setting of AI?’ In this I reckon that some eyes will open. We see all settings of AI mentioned, but not the clear definition, nor a comparison to the setting that Alan Turing gave us in 1950 with the Turing test, alongside the term ‘artificial intelligence’ that John McCarthy later coined. So how far did people dig into this part of the equation? You might disagree with me on my stance on AI and that is okay. We do not all see eye to eye on a whole range of matters. But in this, in a Texas Hold’em style of business poker, it becomes increasingly important to set the stage of definitions and hold them up to the light. In that game Microsoft doesn’t get to spin out of the stage and blame it all on miscommunication. In that stage Microsoft has to hide in the margins or come out into the light. The second stage is likely, and very pleasing to my ego.

You see, when people are part of a $1.5 billion investment there are people who are not pleased with that fact and they will nitpick any document handed to them. One of the oldest settings, ‘What are the definitions?’, was in older days the way to see what players were up to, and that stage got a little lost in populism and ‘fast’ presentations appealing to the spending player. You might think that it is Microsoft paying, but you would be wrong. The UAE and G42 are investing time and resources to make it all work, and I foresee that players like Microsoft (not just them) are trying to play fast and loose with definitions so that they can bank the first agreements and then turn back and hide behind ‘miscommunications’ after the fact. Which is why we have the clear setting of definitions. As such, making all players answer that question gives a first setting. You see, there is no AI at present and that comes out at the very start. And no matter how clever LLM’s and Deeper Machine Learning are, the setting becomes data and who is responsible for that data. Now we get different players out and in the full-grown light. People like Sheikh Tahnoon bin Zayed Al Nahyan will then immediately see who is endangering the security of the UAE, and they have no sense of humour at that point. No matter how some see the ‘opportunity’ of a lifetime, the moment national pride comes into view of danger, the UAE will demand clarity on matters, and I reckon some will ‘trivialise’ matters, and when you ‘invest’ $1.5 billion there is an issue with trivialisation (which is why I referred to a Texas Hold’em style). Now some will say that I am bluffing and I want to be ‘inserted’ as a possible player. You would be wrong. I do not want to be linked to a player like Microsoft in any way. Google, Amazon, Adobe, IBM and Oracle definitely; Microsoft not at all. As such I am not anti-American (a claim that was thrown at me several times in the past).
I am anti-stupid (mostly) and when you start trivialising $1.5 billion I see you as stupid, and no matter what I think of Microsoft, they are not overly stupid. In some things yes, in other things (like playing black letter law stages) not that much. 

But all that becomes moot when some players release the definition lists to all; we will see how silly my thoughts are, because these definitions go through the entire project and there is no way they get changed unless all parties openly agree. Oh, and before you think that this is a ploy, you might be right. You see, I do not know where China is at present and I would love to find out. So what is better than Microsoft setting the entire definition list to paper and releasing it all? I reckon we will see a Chinese response less than 48 hours later.

The revolving question is an almost needed stage, because definitions on paper are what matters; if it isn’t written down it doesn’t exist. That has been a matter long before The Prince by Niccolò Machiavelli. I reckon it goes back to the days of Gaius Julius Caesar Augustus (63BC-14AD). So this setting has been known for 2000 years, and with all the turbo presentations and innuendo I get the feeling it got lost in the woodwork of it all. As such I thought it was a great idea to remind people of that.

Silly me, have a great day.

Leave a comment

Filed under Finance, IT, Law, Politics, Science

Is that so?

I was taken aback a little when I read the Khaleej Times yesterday. The article (at https://www.khaleejtimes.com/uae/old-smartphones-lying-in-cupboards-why-uae-residents-fear-recycling-their-devices) gave me pause to consider this. You see, when we see ‘Old smartphones lying in cupboards? Why UAE residents fear recycling their devices’ we can make all kinds of assumptions, but the message should be clear: there are a whole range of people who do not like their data up for grabs. The funny part is that Norton solved the issue over 40 years ago. Now we get a whole range of other options. But the simple sentiment is clear, and this is on Google and Apple to follow suit.

I reckon that the solution will be pretty much the same for both systems. The idea is that once you have transferred your mobile and data to the new phone, the old phone is pretty much redundant. So here comes Google/Apple, and with their cable (in the case of Google a USB-C) we can go to town; well, basically, the new phone can.

So as I see it, the steps are as follows:

  1. Recharge old phone completely.
  2. Connect the new phone to the recharged old phone.
  3. Instruct the new phone to wipe the old phone.
  4. Old phone gets wiped.

As the new phone gets the instruction to wipe the old phone, it will wipe, not merely delete, the old phone.

This means that the new phone knows what the old phone is and will overwrite it with the value ‘EA’ (that was the old value). As such, every bit of the old phone is overwritten with the value ‘EA’. It can be nearly any value, but this was the old setting I had in the 80’s. Because it is overwritten, there is nothing to undelete (read: restore). All data is wiped and no longer retrievable. In my case it was done 5 times (in case something was missed). As such, the reference that the Khaleej Times gives us with “According to industry experts, fear of inappropriate use of data is one of the biggest deterrents to recycling devices among UAE residents” is no longer in effect. That being said, these ‘industry experts’ should know about this solution. And it is time for Google and Apple to make clear to their customers that their data is safe in this way. There are still a few other risks that people take, as they will readily put their data on social media, but their phones will be ‘saved’.
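The overwrite idea translates directly to code. A sketch (mine, not Google’s or Apple’s) that overwrites a file with the byte 0xEA over several passes before removing it — with the caveat that on modern flash storage with wear-levelling, a file-level overwrite is not a hard guarantee, which is why phones tend to rely on discarding the encryption key instead:

```python
import os

def wipe_file(path, passes=5, pattern=b"\xEA"):
    """Overwrite a file's contents in place with a fixed byte value
    (0xEA, a nod to the old Norton-era habit), several passes over,
    then remove it.  Overwritten data cannot be 'undeleted'."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(pattern * size)
            f.flush()
            os.fsync(f.fileno())  # push this pass to the device
    os.remove(path)
```

The `fsync` between passes matters: without it, the operating system may coalesce the five passes into one cached write, defeating the point of repeating them.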

What I don’t get is that both Google and Apple never touched on this subject before (as far as I know), because iPads and other tablets face similar issues. I basically did this in my own way, as I have done in more recent fields, but Google and Apple should have had these solutions in play already, so why was this skipped?

I cannot tell, but this article made me wonder why it was not taken care of. You see, Peter Norton Computing has been around for 40 years; in 1990 it was taken over by Symantec and they had the goods, so why didn’t Apple and Google wake up to this setting? I never saw it (as far as I can remember) and it is not a weird setting. Consider all these corporate mobiles. At some point their IT departments will take the safe road by wiping those mobiles. So, why was this seemingly not done? I use the word ‘seemingly’ because it seems weird that it is only me who gets the idea. You see, doing a factory reset (as stated) gives us: “Doing a factory reset will delete nearly everything on the device”; it is the use of the word ‘nearly’ that I have an issue with. Nearly isn’t everything, so what is not wiped? I reckon only the layer 1 people at Apple/Google can clearly identify that. There is still a setting that could be set in motion: you could do a ‘layered’ wiping of all memory through the new phone, optionally moving data from the old phone to the new phone first (which Google/Android has). And doing it from phone to phone could optionally move ‘forgotten’ stuff to the new phone as well.

Oh, and that was the second part: the Khaleej Times never even mentions the factory reset part, and the added GenAI settings that we see more and more now make the wiping of old devices a lot more essential. My story of August 11th 2024, ‘Setting of the day’ (at https://lawlordtobe.com/2024/08/11/setting-of-the-day/), gave us via Wired “Microsoft’s AI Can Be Turned Into an Automated Phishing Machine”; there we see the additional need for a complete wiping of all data. And as far as I can tell, there is no guarantee that some eager beaver will leave ‘discarded’ data alone. As such I feel that Apple and Google need to strap on their goods and get cracking, to remove the chance of certain solutions getting a handle on your data.

I might not need it (I have other systems running) but the bulk of the users could use that little more protection. #Justsaying.

So let this be an idea that these two players get to seemingly rectify in the very near future. Darn, my Saturday starts in 92.4 minutes.

Leave a comment

Filed under IT, Science

The tables are starting to turn

This is a setting I always saw coming. It wasn’t magic or predestination, it was simple presumption. Presumption is speculation based on evidence, on facts. The BBC puts out a near perfect article (at https://www.bbc.co.uk/news/technology-67986611) where we see ‘What happens when you think AI is lying about you?’ There are several brilliant sides to it, so it is best to read it for yourself. But I will use a few parts of it because there is a larger playing field in consideration. The first thing to realise is that AI does not exist, not yet.

As such when we see ““Illegal content… means that the content must amount to a criminal offence, so it doesn’t cover civil wrongs like defamation. A person would have to follow civil procedures to take action,” it said. Essentially, I would need a lawyer. There are a handful of ongoing legal cases round the world, but no precedent as yet.

This is actually a much larger setting than people realise. You see, “AI algorithms are only as objective as the data they are trained on, and if that data is biased or incomplete, the algorithm will reflect those biases”. Yet the larger truth is that AI does not exist; it is Machine Learning or better. As such it took a programmer, and a programmer implies corporate liability. That is what corporations fear; that is why everything is as muddled as possible. I reckon that Google, Microsoft and all others making AI claims are fearing this. You see, when you consider “The second told me I was in “unchartered territory” in England and Wales. She confirmed that what had happened to me could be considered defamation, because I was identifiable and the list had been published. But she also said the onus would be on me to prove the content was harmful. I’d have to demonstrate that being a journalist accused of spreading misinformation was bad news for me.” I believe it is a little less simple than that. You see, algorithm implies programming; as such the victim has a right to demand the algorithm be put out in court for scrutiny. The lines that resulted in defamation should be open to scrutiny, and that is what big-tech fears at present, because AI does not exist. It is all based on collected data, and that data should be verified by the legal team of the victim, and that stops everything for the revenue hungry corporations.

In addition I would like to add an article, also by the BBC (at https://www.bbc.co.uk/news/technology-68025677), called ‘DPD error caused chatbot to swear at customer’. It clearly implies that a programmer was involved. If language skills involve swearing, who put the swear words there? When did your youngest one start to swear? They all do at some point. So what triggered this? Now consider that machine learning requires data, so where is that swear data coming from? Who included or instituted that to be used? So when you see ““An error occurred after a system update yesterday. The AI element was immediately disabled and is currently being updated.” Before the change could be made, however, word of the mix-up spread across social media after being spotted by a customer. One particular post was viewed 800,000 times in 24 hours, as people gleefully shared the latest botched attempt by a company to incorporate AI into its business.” Consider that AI does not exist, consider that swear words are somehow part of that library, then consider that a programmer made a booboo (this is always allowed to happen) and they are ‘updating’ this. A system is being updated to use a word library. Now consider the two separate events as one and see how much danger the revenue hungry corporations have placed themselves in. When you go by ‘Trust but verify’ we can make all kinds of assumptions, but data is the centre of that core, with two circles forming a Venn diagram. One circle is data, the other is programming. Now watch how big-tech is worried, because when this goes wrong, it goes wrong in a big way and they would be accountable for billions in payouts. It will not be a small amount and it will be almost everywhere. The one case of a defamed journalist is one, and in this day and age not the smallest setting. The second is that these systems will address customers. Some will take offence and some will take these companies to court.
So how much in funds did they think that they could save with these systems? All to save on a dozen employees? A setting that will decide the fate of a lot of companies, and that is what some fear. Until the media and several other dodos start realising that AI doesn’t yet exist. At that point the court cases will explode. It will be about a firm, their programmer and the wrong implementation of data. I reckon that within 2-3 years there will be an explosion of defamation cases all over the world. The places relying on Common Law will probably be getting more, and sooner, than Civil Law nations, but they will both face a harsh reality. It is all gravy whilst the revenue hungry sales people are involved. When the court cases come shining through, those firms will have to face harsh internal actions. That is speculation on my side, but based on the data I see at present it seems like a clear case of precise presumption, which is what the BBC in part is showing us, no matter how unready the courts are. In torts there are cases, and this is a setting staged on programmers and data; no mystery there, and that is the cost those hiding behind AI are facing. It is merely my point of view, but I feel that I am closer to the truth than many others evangelising whatever they call AI.

Enjoy the weekend.

Leave a comment

Filed under Finance, IT, Law, Science

One bowl of speculation please

Yup, we all do it, we all like to taste from the bowl of speculation. I am no different; in my case that bowl can be as yummy as a leek and potato soup, on other days it is like a thick soup of peas and potato with beef sausages. It tends to depend on the side of the speculation (science, engineering or Business Intelligence). Today it is Business Intelligence, which tends to be a deep tomato soup with croutons, almost like a thick minestra di pomodoro. I saw two articles today. The first one (at https://www.bbc.co.uk/news/technology-64917397) comes from the BBC, giving us ‘Meta exploring plans for Twitter rival’, no matter that we are given “It could rival both Twitter and its decentralised competitor, Mastodon. A spokesperson told the BBC: “We’re exploring a standalone decentralised social network for sharing text updates. “We believe there’s an opportunity for a separate space where creators and public figures can share timely updates about their interests.”” Whatever they are spinning here, make no mistake. This is about DATA, this is about AGGREGATION and about linking people, links that Twitter too often has and LinkedIn and Facebook do not. A stage where people need clustering to see how two profiles can be linked with minimum connectivity. It is what SPSS used to call PLANCARDS (the conjoint module). By keeping the links as simple as possible, their deeper machine learning will learn a new stage of connectivity. That is my speculated view. You see, this is the age where those without exceptional deeper machine learning need new models to catch up with players like Google and Amazon, so the larger speculation is that Microsoft is somehow involved, but I tell you now that this speculation is based on very thin and very slippery ice; it merely makes sense that these two will find some kind of partnership. The speculation is not based on pure logic; if it were, Microsoft would not be a factor at all.
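To make the linking idea concrete (this is purely my own sketch, not what Meta or SPSS actually runs): profiles that share even a single attribute, a phone number or a handle, can be clustered with minimal connectivity using nothing more exotic than union-find. All names and data below are hypothetical.

```python
from collections import defaultdict

def cluster_profiles(profiles):
    """Group profiles that share any identifying attribute -
    a toy illustration of linking people with minimal connectivity."""
    parent = list(range(len(profiles)))  # union-find over profile indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Any two profiles sharing an attribute value get linked
    seen = {}  # attribute value -> first profile index that carried it
    for i, attrs in enumerate(profiles):
        for value in attrs:
            if value in seen:
                union(i, seen[value])
            else:
                seen[value] = i

    clusters = defaultdict(list)
    for i in range(len(profiles)):
        clusters[find(i)].append(i)
    return list(clusters.values())

# Hypothetical profile fragments
profiles = [
    {"alice@example.com", "+61400000001"},  # profile 0
    {"+61400000001", "@alice_handle"},      # profile 1, links to 0 via phone
    {"bob@example.com"},                    # profile 2, unrelated
]
print(cluster_profiles(profiles))  # → [[0, 1], [2]]
```

One shared attribute is enough to merge two profiles into the same cluster, which is exactly why “minimum connectivity” is so valuable to an aggregator.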

But the second article, from a less reliable source, gives us (at https://newsroomodisha.com/meta-to-begin-laying-off-another-11k-employees-in-multiple-waves-next-week/) that they are investigating a new technology all whilst shedding 11% of their workforce. A workforce that is already strained, to say the least, and this new project will not rely on a dozen people; that project will involve a lot more, especially if my PLANCARDS speculation is correct. That being said, if Microsoft is indeed a factor, the double stump might make more sense, hence the larger speculative side. Even as the second source gives us ““We’re continuing to look across the company, across both Family of Apps and Reality Labs, and really evaluate whether we are deploying our resources toward the highest leverage opportunities,” Meta Chief Financial Officer Susan Li said at a Morgan Stanley conference on Thursday. “This is going to result in us making some tough decisions to wind down projects in some places, to shift resources away from some teams,” Li added.” Now, when we consider the words of Susan Li, the combination does not make too much sense. The chance of shedding the wrong people would give the game away. Yes, Twitter is in a bind, but it will go full steam in this case and find its own solutions (not sure where it will look); a stage is coming where the two messages make very little sense together. Another side might be that Meta is shedding jobs to desperately reduce cost, which is possible. I cannot tell at present; their CFO is not handing me their books for some weird reason.

Still, the speculation is real as the setting seems unnatural, but in IT that is nothing new; we have seen enough examples of that. So, enjoy your Saturday and feel free to speculate yourself, we all need that at times to TLC our own egos.

1 Comment

Filed under Finance, IT, Science

They just won’t learn

That happens, people incapable of learning. IT people listening to salespeople because these salespeople know what buttons to push. Board members pushing for changes so that their peers will see that they are up to speed on the inter-nest of things (no typo), and there are all other kinds of variations; pretty much every company has them. Even as Australia is still reeling from the Optus debacle, Telstra joins the stupid range (at https://www.abc.net.au/news/2022-10-04/telstra-staff-have-details-hacked/101499920). So explain to me why an HR system needs to be online? OK, you will get away with that, and there is a need for some to access it, but in what universe does this need to be so open that EVERYONE can get to it? That is the question we see raised with ‘Telstra data breach sees names and email addresses of staff uploaded online’, a blunder of unimaginable proportions. On the other hand, Telstra will be bleeding staff members left, right and forward pretty soon. You see, this list is well desired by over a dozen telecoms in Europe, North America, the Middle East and Asia. They all need staff all over the place and now their headhunters know EXACTLY where to dig. The article gives us two parts. The first is “a third party which was offering a rewards program for staff had the data breach in 2017” as well as “Telstra has not used the rewards program since 2017, the spokesperson said”. In all this the questions that matter are not asked. We get Bill Shorten trying to change the conversation back to Optus with: “get the information so I can stop hackers from hacking into government data and further compromising people’s privacy”. The massive part is: why was a reward program not used for 5 years still linked to HR data? It seems that the ABC does not ask this and the others do not either.
So even if we get “Attorney-General Mark Dreyfus has said he will review Australia’s privacy laws and tighter protections could be brought in by the end of the year”, the larger question remains unanswered: how to protect these systems from STUPID people? A reward system that has a direct link to HR data and was not used for 5 years is stupid, plain and simple, and as such this affects both their IT and their HR department. Yet the people (politicians and media) are not asking these questions, are they? They let Labor loser Shorten change the conversation. Oh, do not worry, we are not even close to done with Optus, but the setting that the conversation is pushed away from Telstra allegedly implies that Telstra has too large a hold on media and politicians. So whilst the media allowed Telstra to hide behind “while the data is of minimal risk to former employees”, they fail to see the larger picture. In an age of brain drains these people are worth their weight in lithium (more valuable than gold) and it seems to me that an employment database of 30,000 telecom people will be eagerly mined in the three earlier mentioned regions. These hackers were smart; they can get a million easily (over 10-15 customers) and those customers will not care where that data comes from, they need personnel and they need them now. So it seems that certain people just will not learn, and there is no hiding behind “in an attempt to profit from the Optus breach”. Telstra claims to be so superior; if that is so, either the hack would not have affected them, or these systems are in a worse shape than ever before, and that too is missing from the article. Two competitors successfully hit by the same flaw? It seems that too many people are asleep at the wheel. And no one is asking the right questions, not even the media. Why is that?

Leave a comment

Filed under IT, Media, Politics

As banks cut corners

There was news on ABC News; it was not really news, this was a stage that I saw coming a mile away, and that was 5 years ago, yet the speed at which this is proliferating is cause for concern. The article ‘Protecting yourself from phone porting and SIM card scams’ (at https://www.abc.net.au/everyday/protecting-yourself-from-phone-porting-and-sim-card-scams/100421586) is not just this; the entire COVID registration issues are making things worse. When we take notice of ““At 5:55pm, I got a text message from my telco. It said, ‘Hi, received your port out request for this service,’” he says. “By the time I tried to call them, my phone already went to SOS only. Before I could even react, my number was gone.””, you might think that this is an isolated case, but it is not. When we add ““They had my customer ID [for online banking], and you can do a password reset if you have the customer ID and mobile number,” he explains. “It was really professional. I had a daily limit of $10,000, so they sent $10,000. They bypassed that limit by opening another account inside my account, which you can do online, and then they transferred another $10,000.”” there is a massive flaw. The banks refer to this as being customer friendly; I personally see it as criminal friendly. All kinds of levels of checks and balances are left out of the equation, and for now we see banking party lines that these matters are rare, that the people are protected and that it can be reversed. Yet with 5G, within the next 2-3 years the costs will go beyond what the banks find reasonable and we are left with the costs, we are left with the impact and we are left outside in the cold. That is an almost given and matters are merely getting worse.

The banks (to cut corners) are setting up more and more to be done online, all whilst proper security is lagging, and there is a whole range of actions that will not and should not be allowed. I had to check and make sure that online banking was DISABLED; it makes a few issues a bit more of a hassle, but compared to the damage I could face 2-5 times a year it is a no-brainer. This is a mere beginning when we consider “If I want to change providers, before the [new] standard was put in place, I just had to give my name, my date of birth and my address”, all whilst, even under the increased standard, “scammers ask a victim’s existing telco to switch the number to a new SIM”. The effect is the same, and because some players are cutting corners the consumer is left with the hardship. There is no easy way here and I get that, yet there is a larger stage of checks and balances missing, all whilst cost-cutting parties push ‘customer friendly’ conveniences when proper verification needs to be at the centre of this all, and it is getting worse.

Why is it getting worse?
Well, there were 5 attempts to scam me in the last 8 weeks, 2 of them so good that I could not find anything wrong with the information and sources given. More importantly, in one case I had to make a separate call to PayPal to check and make sure; they had become that good and I know what to look for. Yet I have an ace up my sleeve (which I will not reveal here) and it stopped numerous scams from being completed.

The first rule is that YOU NEVER EVER USE A LINK GIVEN! You find the number, the generic number of, for example, PayPal, and you reference the numbers that you wrote down; they were ready to tell me that no such activity exists. If you click on any link you are causing damage to yourself. But the two (including the PayPal one) were so well done that finding the differences was close to impossible, and I know what to look for. A consumer will have little to no chance at all.

And matters are getting worse, because 5G will enable the scammers to approach well over 500% more people in the same time; their revenue goes up and at some point it will cost us. Insurers will soon stop paying out and then it will become a much larger problem. You either pay an annual fee, or lose your money. I feel that this is where it is going.

So whilst we see “to enable to SIM port or swap, scammers will need personal information, like your name, address, and date of birth”, COVID registrations give them the name and phone number, the phone number can in some cases link to an address, and then only the date of birth is missing. With all these breached databases, now consider all these places that got hacked: which have a birthdate? Which have a phone number? And the image below completes the picture.

We see three sources required to get all the data they need, and they keep on adding data: data you freely give away in apps, data they captured, data from hacks on the dark web, and it is BIG BUSINESS. In the example it is one person with the $10,000 target; now consider 750,000 in the UK alone, 500,000 in Australia, 35,000,000 in the US, and consider that $10,000 was a small jab. Even smaller would work for them, like a mere $500; with these numbers these criminals become billionaires within a month, and these actions need to be done fast. They have per nation 3-4 days at the most, so within 2 weeks they are looking at millions, and with 5G they can get more and they can get there faster. Do you still think I am kidding? Take a good look at what data you entered in ANY app or any website, now consider that these people are doing nothing more than adding data as much as they can; at some point (within a dozen sources) they have enough data to port you, to capture your bank accounts and to make changes to your life. They merely needed some time, a $2,500 computer and a decent internet connection. The payoff would be a 7-figure number, and given the speed at which they are tracked they would be living large in another country with nothing attached to them. That is the current reality, and the level of checks and balances that is missing is just too unbelievable for words.
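To show just how little machinery this takes (a minimal sketch of my own; every source name, phone number and field below is hypothetical): partial records from separate leaks, joined on a single shared key like a phone number, fold into one near-complete identity.

```python
def merge_breach_records(sources, key="phone"):
    """Combine partial records from several breached datasets into
    fuller identity profiles, joined on a shared key (toy illustration)."""
    profiles = {}
    for source in sources:
        for record in source:
            k = record.get(key)
            if k is None:
                continue  # no join key, cannot link this fragment
            profiles.setdefault(k, {}).update(record)
    return profiles

# Hypothetical fragments from three separate leaks
covid_checkin = [{"phone": "0400 000 001", "name": "J. Smith"}]
retail_hack   = [{"phone": "0400 000 001", "address": "1 Example St"}]
loyalty_leak  = [{"phone": "0400 000 001", "dob": "1980-01-01"}]

merged = merge_breach_records([covid_checkin, retail_hack, loyalty_leak])
print(merged["0400 000 001"])
# three fragments now form one near-complete identity: name, address, dob, phone
```

A dozen lines and three partial leaks are enough; the point is that no single breach needs to contain everything, the join does the work.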

Enjoy your bank account (for as long as you still have it)

2 Comments

Filed under Finance, IT

In retrospect

I (for the most part) react to facts, as I do now, but the results are not anticipated new facts; what comes next is pure speculation, and no matter how correct I think I am, it is speculation and that needs to be said up front. Even as I start now, my mind is racing through speculative ideas and options in other realms (science realms no less), but I digress. The thoughts started with a Reuters article called ‘Analysis: Biden’s COVID-19 strategy thwarted by anti-vaxxers, Delta variant’. The article (at https://www.reuters.com/world/us/bidens-covid-19-strategy-thwarted-by-anti-vaxxers-delta-variant-2021-07-29/) gives us “Dr. Peter Hotez, a vaccinologist and dean of the National School of Tropical Medicine at Baylor College of Medicine, said the Biden administration’s acknowledgement of the “terrible impact” of the anti-vaccine movement was important, but he said the government could do more. “Anti-science is arguably one of the leading killers of the American people, and yet we don’t … treat it as such. We don’t give it the same stature as global terrorism and nuclear proliferation and cyber attacks,” he said”. It might be a mere quote, it might be paraphrasing by the article’s writer, which is not a negative view, but it got me thinking. When we see the anti-vaxxer movements in the US and EU, they are uncannily effective, almost too effective. For the most part, and proven since the ’90s, the anti-vaxxers are either religiously inclined, like the Dutch people in Giethorn (their ‘sort of’ version of the Amish), or loons (often people who are one shade away from being absolutely bug-nuts). The first group are driven and they are also self-isolationists; it is merely about them and their community, which makes them a danger to themselves, not to others. The second group is a danger to all, but often so stupid they merely hit other stupid people.
These anti-vaxxers are driven, not merely by intelligent people; no, they are driven like terrorist tools, like biological DoS agents, and they are growing. These people are not accepting any scientific evidence, they forward non-scientific papers as ‘their’ evidence, and they are not merely more effective, they are almost centrally driven by a similar source.

In the UK the Guardian is giving visibility to Kate Shemirani, in the USA we see Alabama’s Curt Carpenter, and the list grows. Someone is somehow fuelling this. Yes, this is speculative, and this is not merely the power of social media; someone had months to prepare the weaker minded and steer them in a direction, limelight-seeking nobodies all wanting their limelight with as large an audience as possible. The evidence is not clear and as such this is speculation, yet consider the timelines of each of these anti-vaxxers, what their audience was a year ago and each month after that. This goes beyond buying likes on places like Facebook. Some people are fuelling these ‘bright’ illumination spots and they are not done; even as they are retracting their ‘assistance’ there is still a digital footprint and it is now diminishing. Yes, I admit upfront that my view is speculative, but my speculation fits the profile: are the US and the EU under attack from bio-terrorists? You might think that they are not the same, but there you would be wrong. In this I grasp back to a writing from 2012 called ‘A Proposed Universal Medical and Public Health Definition of Terrorism’. Here we see “We propose the following universal medical and public definition of terrorism: The intentional use of violence — real or threatened — against one or more non-combatants and/or those services essential for or protective of their health, resulting in adverse health effects in those immediately affected and their community, ranging from a loss of well-being or security to injury, illness, or death”. In this, if even one of my speculations is proven, these anti-vaxxers become complicit in acts of terrorism. Did you even consider that? Now, there is a dangerous fence. I am not debating THEIR right to remain unvaccinated. If they die, they only have themselves to thank, just like Curt Carpenter.
Yet by attacking science with non-science and debunked non-facts, the setting changes, and that is where we are now. What should have been a straight path to recovery is now a much larger issue. The delay is not on President Biden, and now that we can optionally see that the US is yet again under terrorist attack, his priorities need to change. Attacking big tech is futile and counterproductive; the law needs adjusting, and free speech needs to be validated by accountability.

And for the love of god, can some well-trained data analyst please take a look at the timeline of these anti-vaxxers? I think it is time to look at timelines here, and that is when my brain went into some sort of overdrive. It goes back to when I designed an intrusion system that stayed one hop away from a router table between two points and infected one of the routers to duplicate packets from that router on that path; one infection tended not to be enough, 2-3 infections needed to be made so that the traffic on that route between two points could be intercepted. I called it the Hop+1 solution; I came up with it whilst considering the non-Korean Sony hack. That thought drove me to think of an approach to find the links. First we most likely need to find where and when they accessed the dark web; then we see another part, because if we can find their access, we can optionally see others too, and when we have that list and can correlate it to other anti-vaxxers we have an optional pattern for action. No matter how this is seen it will be staged towards my speculation, something that needs proof; proof is required to give validity to actions that follow. I believe that I am correct, but I admit that it is a speculative push on a path towards thinking something is what I personally think it is, not a path towards evidence. Evidence needs to be found, and evidence that is made to fit the solution is no evidence; it is like stating that there is a linear relationship when you only have two plot points. A pattern of evidence is required; it is always about the patterns.
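The timeline analysis I am asking for could start very simply (a sketch under my own assumptions; the account names and weekly counts below are invented): bucket each account’s activity per week and flag pairs whose counts move in lockstep, since organically grown audiences rarely track each other that closely.

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def flag_lockstep(timelines, threshold=0.9):
    """Return account pairs whose weekly activity counts move in lockstep."""
    names = list(timelines)
    flagged = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            r = pearson(timelines[names[i]], timelines[names[j]])
            if r >= threshold:
                flagged.append((names[i], names[j], round(r, 2)))
    return flagged

# Hypothetical weekly post counts for three accounts
timelines = {
    "account_a": [1, 2, 8, 20, 45, 80],
    "account_b": [0, 3, 9, 22, 43, 78],  # tracks account_a suspiciously closely
    "account_c": [5, 4, 6, 5, 7, 5],     # flat, organic-looking activity
}
print(flag_lockstep(timelines))
```

A flagged pair proves nothing on its own, which is the point I keep making: it is a pattern that tells you where to go look for the actual evidence.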

So when I look at the ‘in retrospect’ part, I am wondering when the connections were there in the early stages, and I also wonder why the others are not on that path yet (or seemingly yet). The media is only partly to blame; yes, they give limelight, but that was their job from the early days. Like the people exploiting Google cookies, the media can be exploited too. Seeking the limelight is not a crime, but in conjunction with a terrorist agenda we are on new, shaky ground, and that is the problem: any law created in over-quick eagerness is pointless whilst inaction is useless, caught between two rocks whilst the floor is not lava but the ever-exploiting media, exploiting for clicks, for visibility and circulation, whilst calling it ‘the people have a right to know’. This has the option of heading in a really bad direction soon enough. Will it? I have absolutely no idea.

Leave a comment

Filed under IT, Military, Politics, Science