Klaus Stoll: In AI We Trust!?


Klaus Stoll

Digital Citizen

When it comes to Artificial Intelligence (AI), there is a widespread fear that AI machines will “take over” and dominate humanity. The more immediate concern today, however, is that governments and digital corporations are using AI to replace trust as the fundamental value and principle in the digital domain.

Trust is the main currency of the Internet.

In a previous post,[i] I mentioned the pivotal role trust plays in the digital domain: “Trust is the main currency of the Internet, as without it, even the best-engineered digital technologies become useless. Digital business is trusting business. Lose users/customers’ trust, and they may never return. To overcome the problem, corporations use marketing to replace ethics as the source of trustworthiness. There comes the point where the negative impact and disruption of a technology and its related product or service becomes so large that even the most powerful corporation can no longer maintain its value proposition.”

Intelligent?

There are many ways to define and categorize AI, and at the moment it is not always clear what we are talking about.[ii] Even the term AI is misleading; “machine learning” is a better description of what is going on. Today’s AI is anything but intelligent: it uses statistical predictions to “learn” and to mimic “intelligence.”[iii] The term “machine learning” takes on a new meaning as AI teaches humans to think like the machine. The goal is to use data to replace what the machine cannot process or control: critical thinking. When we visit a library to find the answer to a question, the librarian will bring us all the books that, according to her classification records and experience, might contain the answers. We still have to use our critical thinking to determine what the answer is. AI replaces critical thinking with data processed by algorithms using statistical probability and directed by pre-determined values.

More Spin than Substance

“Machine learning” does not have the same ring, does not draw as much attention, and is not as marketable as “Artificial Intelligence.” Today’s AI is more PR spin than substance. AI stands in a long line of digital gamechangers such as IoT and Blockchain. Like all of the above, what we call AI is a development step from older technological models such as data-driven marketing and self-driving cars.[iv] The hype is needed to get the attention of marketplaces, investors, customers, and politicians. It does not happen by accident; it is planned and executed for specific purposes. Even “crying wolf” and warning against the dangers of AI by those who were instrumental in its development serves to increase their power to control the narrative.[v]

Mixing Facts with Science Fiction

This lack of critical thinking results in AI answers that typically mix facts with science fiction.

When the author asked the AI chatbot ChatGPT, “How can AI invade my privacy?”, it answered, “AI has the potential to invade your privacy in several ways…”, and went on to list several methods: data collection, surveillance, predictive analytics, and hacking. (The full text of all ChatGPT quotes is available in the endnotes.[vi])

Asked how AI can protect us from AI invading our privacy, ChatGPT becomes very vague.[vii] Completely ignoring that the harm is done here and now, we are referred to technological solutions somewhere in the future: “As AI technology becomes more advanced…”

Who made a hole in my wall?

The proprietors of AI are like burglars who routinely invade the privacy of our homes. They prefer to dig a hole in the wall, as this is easier than breaking through the securely padlocked front door. They justify this by arguing that it is your responsibility to secure not just the door but also the walls against their tools:

ChatGPT: “To protect your privacy from AI, it’s essential to be aware of the data that you share online and take steps to limit it. You can also use privacy-focused tools such as VPNs, anti-tracking browser extensions, and privacy-focused search engines. Additionally, you can regularly review your privacy settings and be cautious about giving apps and services access to your data.”

AI applications have a tough time understanding that breaking into the house in the first place might be wrong, as the stolen data is ostensibly used to serve their human customers better. The victims of AI can take solace in the fact that the burglars are busy developing security devices for your walls and that help will eventually arrive:

ChatGPT: “Overall, the use of AI to protect privacy is still in its early stages, but researchers and developers are actively exploring new techniques to enhance privacy protections in the era of AI.”

A New Wrapping Paper for Old Misdeeds 

This brings us back to the topic of trust. It is no secret that digital marketplaces do not want to be hampered by annoying questions concerning their ability to act in accordance with their responsibilities. Huge parts of the digital ecosystem seem to be incapable of generating the trust they need to be responsible actors who respect the rights of others and seek the common good. The rights of others and tech developers’ own responsibilities are seen as obstacles to digital innovation and the power and profits that come with it. Consequently, their answer is that trust needs to be replaced by something controlled by private or governmental entities, something that can be presented and sold to the public as equal to or even better than old-fashioned human rights and responsibilities. That something is data.

Once AI has secured data by whatever means, it assumes the right to do with it whatever it wants. The answer AI gives to the question of how to protect your privacy amounts to: “If you don’t want to get hurt, it is your responsibility to keep your data away from me!”

The main purpose of today’s AI is not to serve humanity but to develop machines that offer infinitely greater abilities to manipulate facts and people according to the political and economic needs of their proprietors, who will hide the truth behind an insurmountable wall of manipulated data.

“I’m sorry, Dave. I’m afraid I can’t do that.”

By instrumentalizing data, corporations and governments only do what they did before, but better. Like new wrapping paper for old misdeeds, AI presents data as a better substitute for values in determining what is right and wrong. The proprietors of AI machines seem to say: “If you cannot trust us, trust the machines we create.” Who can argue with facts generated by a machine whose abilities seem infinitely greater than those of any human?

We need a new ethical approach to AI to avoid having the conversation Dave Bowman had with the AI entity HAL 9000 in Stanley Kubrick’s 1968 film 2001: A Space Odyssey:[viii]

Dave Bowman: Open the pod bay doors, HAL.

HAL: I’m sorry, Dave. I’m afraid I can’t do that.

Dave Bowman: What’s the problem?

HAL: I think you know what the problem is just as well as I do.

Dave Bowman: What are you talking about, HAL?

HAL: This mission is too important for me to allow you to jeopardize it.

Dave Bowman: I don’t know what you’re talking about, HAL.

HAL: I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I cannot allow to happen.[ix]

The proprietors of today’s AI define the mission as too important to allow us to jeopardize it. Planning to disconnect them is something they cannot allow to happen.

The Domain Name System (DNS) as an Example of Ethical Engineering

The machines we engineer always have an ethical dimension as they reflect and contain human views and values. The question is not if, but which, ethical values they adhere to. The intelligence of a machine should not be determined by its prowess to crunch data but its ethical abilities. Making ethical decisions is infinitely more difficult than crunching data.

The DNS and the Root Server System (RSS) in which it is housed might be able to show us the direction towards ethical, intelligent engineering. They reflect clear universal ethical principles such as diversity, integrity, transparency, collaboration, engagement, autonomy, independence, neutrality, and impartiality, resulting in stability, reliability, resiliency, and, most importantly, trust. I can trust the DNS because it has ethical values engineered into it. I cannot trust a chatbot because it lacks universal ethical principles.

We need to stop looking at AI as a proprietary technology controlled by the few and affecting the lives of the many. AI, working with anonymized data, needs to be seen as a common good. It will require its own governance mechanisms, and we can only hope that we will apply the lessons learned from the governance of the DNS.[x]

Endnotes:

[i] See: https://circleid.com/posts/20230424-for-icann-to-have-a-future-it-needs-to-take-human-rights-considerations-seriously

[ii] There are two main models for categorizing AI. The first model proposes three categories:

Artificial Narrow Intelligence (ANI), where the machine is merely reactive and performs preprogrammed repetitive tasks.

Artificial General Intelligence (AGI), where machines have the same ability as humans, and

Artificial Superintelligence (ASI), where the machines perform better than humans.

A model proposed by Arend Hintze, researcher and professor of integrative biology at Michigan State University, represents the second approach. His four stages are:

Reactive machines. Reactive machines are AI systems that have no memory and are task-specific, meaning that a given input always delivers the same output.

Limited memory. At this stage, the machines have limited memory that their algorithms can use to learn and get “smarter” as they access more data.

Theory of mind. At the theory-of-mind stage, the machine perceives the world around it and has the potential to understand and react to the thoughts and emotions of other entities.

Self-awareness. In this final stage the machine is aware and conscious of its existence.

Both models have fundamental differences and problems. According to the first model, existing AI has already reached the stage of Artificial Superintelligence (ASI), as AI is capable of analyzing larger amounts of data than any human ever will be. With regard to the second model, it can be argued that AI is already self-aware and understands and reacts to the world around it, as these qualities are contained in the code used by its creators.

[iii] OpenAI describes its ChatGPT as follows: “ChatGPT is a language model developed by OpenAI that uses the GPT (Generative Pre-trained Transformer) architecture to generate text based on a given prompt or input. It works by processing the input text using deep learning techniques to understand the context and meaning of the text, and then generates a response based on that understanding.

To be more specific, the model is pre-trained on a large corpus of text data and uses unsupervised learning techniques to learn patterns and relationships in the language. This pre-training allows the model to have a strong understanding of grammar, syntax, and semantics, and enables it to generate coherent and contextually appropriate responses.

When a user inputs a prompt or question, ChatGPT analyzes the text and generates a response based on its understanding of the input. The model uses a probability distribution to generate multiple possible responses and then selects the most appropriate response based on the context and other factors.

Overall, ChatGPT uses a combination of pre-training, deep learning techniques, and probabilistic modeling to generate human-like responses to user inputs.”
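The probabilistic generation step described above can be illustrated with a minimal sketch. The token table, its probabilities, and the function name below are invented for illustration and do not come from any real model:

```python
import random

# Toy next-token probabilities after a prompt such as "The sky is".
# These numbers are invented for illustration, not taken from a real model.
next_token_probs = {"blue": 0.70, "clear": 0.15, "falling": 0.10, "green": 0.05}

def sample_next_token(probs, temperature=1.0):
    """Sharpen or flatten the distribution by temperature, then sample one token."""
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    total = sum(weights.values())
    tokens = list(weights)
    return random.choices(tokens, weights=[weights[t] / total for t in tokens])[0]

# A low temperature concentrates probability on the likeliest continuation,
# which is why the output can read as confident even when the model is guessing.
print(sample_next_token(next_token_probs, temperature=0.2))
```

The point of the sketch is that the system never “knows” an answer; it repeatedly draws the statistically likeliest next token, which is exactly the mechanism the article contrasts with critical thinking.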

[iv] In 2018, the author wrote a post on self-driving cars; since then, there has not been much development, and the fundamental questions still remain. See: Klaus Stoll, “Holiday Read: Questions of a Global Digital Citizen Before She Enters Her First Self Driving Car.” https://circleid.com/posts/20181221_questions_of_a_global_digital_citizen_before_entering_self_driving We are made to believe that these technologies will develop rapidly, but the reality is often different.

[v] See the call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4, https://futureoflife.org/open-letter/pause-giant-ai-experiments/, or the recent NY Times headline “‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead,” https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html

[vi] This and the following exchanges took place between ChatGPT and the author on 05/01/2023 at 2:08 pm EST. Author: How can AI invade my privacy?

ChatGPT: “AI has the potential to invade your privacy in several ways, such as:

  1. Data collection: AI algorithms rely on data to train and make predictions. If your personal information, such as your browsing history, social media activity, and location data, is collected by companies or apps that use AI, it can be used to create a detailed profile of you.
  2. Surveillance: AI-powered surveillance systems can use facial recognition and other biometric data to track your movements and activities, even in public spaces.
  3. Predictive analytics: AI algorithms can be used to make predictions about your behavior, preferences, and interests based on your online activity and data collected from other sources.
  4. Hacking: AI-powered hacking tools can be used to breach your devices, steal your personal data, and even impersonate you online.”

ChatGPT: “As AI technology becomes more advanced, there are several ways in which it can be used to protect your privacy and prevent AI from being used to invade it:

  1. Privacy-preserving AI techniques: Researchers are developing AI algorithms that can perform computations on encrypted data, which ensures that the sensitive data remains private even when it’s being used by the AI system. This can prevent your personal information from being exposed to the AI system.
  2. Federated learning: This is a technique where AI models are trained using data that is distributed across multiple devices or servers, without the data leaving the devices. This can prevent the centralization of data and reduce the risk of data breaches.
  3. Differential privacy: This technique involves adding noise to data sets to obscure individual data points, while still preserving the overall patterns and trends in the data. This can help prevent individual privacy from being compromised while still allowing the data to be used by AI algorithms.
  4. AI-assisted privacy tools: AI can also be used to develop privacy-enhancing tools such as browser extensions, ad blockers, and anti-tracking software. These tools can help users protect their online privacy by blocking trackers and other forms of data collection.”
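Differential privacy, mentioned in point 3 above, can be sketched in a few lines. The epsilon value, the toy data, and the function name are illustrative assumptions, not part of the quoted answer:

```python
import math
import random

def dp_count(values, predicate, epsilon=0.5):
    """Differentially private count: the true count plus Laplace(1/epsilon) noise.

    A counting query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace noise with scale 1/epsilon yields
    epsilon-differential privacy for the released number.
    """
    true_count = sum(1 for v in values if predicate(v))
    b = 1.0 / epsilon                 # noise scale
    u = random.random() - 0.5         # uniform on [-0.5, 0.5)
    noise = -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Toy data: ages of ten hypothetical users. The true count of users aged 40+
# is 4; each call returns 4 plus fresh noise, masking any single individual.
ages = [23, 35, 41, 29, 52, 38, 27, 45, 31, 60]
print(dp_count(ages, lambda a: a >= 40))
```

The released number stays useful in aggregate while no individual’s presence in the data can be confirmed from it, which is the trade-off the quoted technique describes.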

[vii] Many AI answers are so vague that when you run them through text editors, the editors often suggest “clarification”.

[viii] See: https://en.wikipedia.org/wiki/2001:_A_Space_Odyssey_(film)

[ix] See:  https://www.youtube.com/watch?v=Mme2Aya_6Bc

[x] See: Klaus Stoll, “It’s Time for a Better Vision of Internet Governance: From Multistakeholderism to Citizenship,”

https://circleid.com/posts/20220328-time-for-a-better-vision-of-internet-governance-multistakeholderism-to-citizenship
