Freedom endAIngered? – Artificial Intelligence, its flaws, our fallibility

What happens when we no longer understand the thought processes of an intelligence? And how much responsibility for our human society could, or should, be entrusted to such an intelligence?

The Golem, Talos, Frankenstein's Monster, Skynet, HAL 9000, GLaDOS. Just a few references from various epochs of human history that show how artificial intelligence has enticed our fantasy throughout the ages. The question of what makes humans and human intelligence unique, and the fascination with creating something that comes close to this intelligence or even exceeds it, has been a constant companion for thousands of years. It is obvious why we ask this question: as long as we rest assured that our intelligence is unique, we can tell ourselves that the claim to human hegemony on this world, and in the universe we have explored so far, has validity. We may be close to 8 billion people with different personalities, opinions, and convictions. Nevertheless, we share human thought processes. These processes can result in war or merely in people shaking their heads in disbelief, but at their core they are human and – given openness and sincerity on the part of the thinking individual – potentially comprehensible. What happens when we no longer understand the thought processes of an intelligence? And how much responsibility for our human society could, or should, be entrusted to such an intelligence?

Before we look deeper into these questions, a brief definition of the term "Artificial Intelligence" is necessary. Unfortunately, this has become increasingly difficult, as the term has found its way into ubiquitous marketing vocabulary: nowadays it is used for anything from the most advanced applications of neural networks to basic automation processes. Fundamentally, an AI is nothing other than the attempt to simulate (aspects of) human intelligence through intelligence displayed by machines. Human intelligence is based on our sensory experiences, learning processes, and adaptation to our environment; among its hallmarks are logical reasoning and decision-making.

Today, we find numerous areas in which artificial intelligences are applied to simplify and accelerate processes or make them more efficient than humans ever could. Sometimes they are simply very practical: thanks to voice assistants we can enjoy a tailor-made Spotify playlist in our smart home after our Tesla got us home safely with a considerable amount of autonomy. We book our holidays assisted by a chatbot, and, to get somewhat close to our beach-body goals, our fitness app suggests workouts and a meal plan optimised to our needs based on the data from our smartwatch. The selfies we take at our dream destination are beautified, so we need not worry if we did not follow our training schedule as strictly as we should have. Life 4.0 is convenient! And those are just some end-user-focused examples. AIs are capable of regulating traffic, predicting maintenance needs in industry, and minimising the use of pesticides and fertilisers in agriculture, and they are already in use for the early detection of diseases and the monitoring of their spread.

As diverse as the areas of application are, all AIs of today have one thing in common: they are so-called "weak" AIs, i.e. they are trained for only one particular area of application. This training is enabled by a huge amount of data. The better the data, the better the AI. Here we can recognise a first issue when dealing with AIs: it is very tempting to collect as much data as possible. The best AIs grow from the best databases – a potential risk for the privacy of users. A perpetual competition among AIs for the best data seems unavoidable, and the situation looks like a petri dish for potential data monopolies. Interestingly, it is a different AI process – the creation of synthetic datasets to train AIs – that could avert this development in the long run. Another problem when dealing with AIs and the necessary data processing is the well-known phenomenon called "garbage in, garbage out": if the data used to train an AI are of poor quality, the AI will not be able to deliver optimal results. Quality in this context means – among other things – freedom from human bias. One example of an AI malfunctioning due to a lack of safeguards against bias is Amazon's recruitment AI. As the tech industry is still male-dominated and Amazon relied on data from past hiring decisions to teach its AI, the AI learned to put women at a systematic disadvantage. Amazon scrapped the project. It serves as a warning, however: to develop a "just" AI, it is necessary to take far more factors into account than we realise, even for simple applications. AIs recognise patterns we barely see (e.g. differences in language use between men and women) and build decision processes upon these patterns – that is their strength and their weakness, as the sketch below illustrates. Dealing with AIs therefore demands a high level of self-reflection and makes the willingness to engage in a continuous learning process a prerequisite.
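To make the "garbage in, garbage out" point concrete, here is a minimal, purely hypothetical sketch in Python – synthetic data and an ordinary off-the-shelf classifier, not Amazon's actual system. It shows how a model trained on historically biased hiring decisions learns to penalise a proxy feature, exactly the kind of pattern an AI picks up without anyone intending it:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Hypothetical training data: a genuine qualification score and a proxy
    # feature correlated with gender (e.g. certain wording patterns in a CV).
    qualification = rng.normal(0.0, 1.0, n)
    proxy = rng.integers(0, 2, n).astype(float)  # 1 = candidate shows the proxy trait

    # The historical hiring decisions themselves were biased against the proxy group.
    hired = qualification - 0.8 * proxy + rng.normal(0.0, 0.5, n) > 0.0

    # The model faithfully learns the bias hidden in the labels.
    model = LogisticRegression().fit(np.column_stack([qualification, proxy]), hired)

    # Two equally qualified candidates, differing only in the proxy trait,
    # now receive systematically different hiring probabilities.
    candidates = np.array([[1.0, 0.0], [1.0, 1.0]])
    print(model.predict_proba(candidates)[:, 1])  # second probability is markedly lower

Note that nothing in this sketch mentions gender at all: the discrimination enters purely through the correlation between the proxy feature and the biased historical labels, which is why it is so hard to spot and to remove.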

AIs know no morals. They do not question decisions. They know no guilt. This puts the developers and users of AIs in the position of having to deal with the foundations on which an AI acts and to figure out who takes how much responsibility for which aspect of its actions. Of course Amazon had to shut down the aforementioned recruiting program, as it was discriminatory. But what about the real moral dilemmas AIs will be facing sooner or later? Even if they help avoid countless accidents, self-driving cars will encounter the "Trolley Problem": if harm to a person is unavoidable, the system must decide whose protection takes priority. There needs to be a ranking covering the protection of the car's own passengers, of groups of people, of adults versus children, and of passengers in other vehicles. A regular driver is not burdened with this dilemma: the human inability to make a deliberate decision in such a split second protects him or her. With regard to the autonomous decisions of AIs, we will see interesting developments in liability law, as these questions of responsibility need to be settled.
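To make tangible what such a ranking would mean in practice: it ends up as a few explicit lines of code that a developer, and ultimately a regulator, has to sign off on. A purely hypothetical Python sketch – the ordering and the decision rule below are illustrative assumptions, not a proposal:

    from enum import IntEnum

    class Protect(IntEnum):
        # Assumed ordering: lower value = higher protection priority.
        CHILDREN = 0
        GROUPS_OF_PEOPLE = 1
        OWN_PASSENGERS = 2
        PASSENGERS_IN_OTHER_VEHICLES = 3

    def least_bad_maneuver(options):
        # options maps each maneuver to the parties it would endanger.
        # Maximin rule: prefer the maneuver whose most strongly protected
        # endangered party is as far down the priority list as possible.
        return max(options, key=lambda name: min(options[name]))

    # Example: swerving endangers the car's own passengers, braking a child.
    print(least_bad_maneuver({
        "swerve": [Protect.OWN_PASSENGERS],
        "brake": [Protect.CHILDREN],
    }))  # -> "swerve" under the assumed ordering

The point is not these few lines themselves but that writing them forces someone to commit, in advance and in the abstract, to a moral ordering that a human driver never has to articulate.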

It goes even further: the judicial system and law enforcement are showing increased interest in using the capabilities of AIs. Racial profiling is prohibited in many countries. However, we can already suspect today that predictive-policing algorithms and systems like the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) – a system rating the recidivism risk of offenders, which judges in the US use to adjust the severity of sentences – are inherently racist due to their data basis. A societal and political debate about whether we want to use such methods is overdue. Proponents see great potential in protecting potential (or even probable) crime victims and in showing extraordinary clemency to those with a good prognosis. On the other side, we must recognise the threat of a de-individualisation of jurisdiction and the crumbling of one of the pillars of our judicial systems: in dubio pro reo.

A last topic we need to touch upon with regard to AI is the future of work that we will share with artificial intelligences. Due to the rapid development in this area, it is difficult to make reliable predictions as to where we are headed. Hence, it makes sense to consider Mark Knickrehm's article in the Harvard Business Review and get acquainted with the five schools of thought into which he sorts our current tech prophets:

  • Dystopians – they see a social-Darwinist fight for our very existence on the horizon. The increasing automation of ever more complex processes through AIs leads to mass unemployment and deepens the societal rift between rich and poor.
  • Utopians – they hope for a brighter future in which strong AIs (those indistinguishable from humans in their abilities) make human labour almost superfluous. Humans will be able to pursue the activities and projects they are passionate about.
  • Technology Optimists – they are sure that there is still a lot of untapped potential in this technology and that humankind will take a leap to a new level of living standards and productivity. To avoid social hardship as a consequence of this leap, with people being left behind, they consider investment in lifelong learning opportunities and training to be necessary.
  • Productivity Sceptics – they doubt the economic potential and predict that ageing populations, climate change, and great income disparities will slow down well-developed economies to such a degree that even AIs will not be able to aid their growth substantially.
  • Optimistic Realists – they trust in the collaboration between human and artificial intelligence to achieve considerable productivity growth in several sectors, through which new jobs will be created. However, they demand an intensive, multilateral examination of the topic, involving society and politicians, to prevent the working middle class in particular from being harmed in this process.

No matter which school of thought seems most reasonable to you, it is smart to watch this topic closely today – more closely, especially, than governments and politicians are doing at the moment. AIs will do many fantastic things for us or enable us to do them ourselves. From the early detection of diseases, the prevention of suicides, and the accurate prediction of natural disasters (combined with better organisation of disaster relief) to space exploration – the areas of application and the technological potential seem limitless.

However, although I consider myself the personification of technology enthusiasm, I want to end on a plea. "Errare humanum est, sed in errore perseverare diabolicum", as Seneca once wrote: to err is human, but to persist in one's error is devilish. The human condition of being wrong, our fallibility, has value. It is the foundation of two virtues that artificial intelligences lack. Firstly, it provides us with clemency towards others: we know of their fallibility, and under certain circumstances we are able to turn a blind eye to their mistakes. Secondly, the knowledge of our own fallibility makes us question our decisions constantly, leading to new learning processes and insights. We are not obliged to follow an algorithmic absolutism. Only because of that are we – in contrast to our artificial intelligences – not doomed to persist in our errors. Doubt makes us unique, and it is the foundation of our freedom. Ubi dubium, ibi libertas.
