This book discusses major issues of the current AI debate from the perspectives of philosophy, theology, and the social sciences: Can AI have consciousness? Is superintelligence possible and probable? How does AI change individual and social life? Can there be artificial persons? What influence does AI have on religious worldviews? In Western societies, we are surrounded by artificially intelligent systems. Most of these systems are embedded in online platforms. But embodiments of AI, whether through voice or actual physical embodiment, give artificially intelligent systems another dimension in terms of their impact: how we perceive these systems, how they shape our communication with them and with fellow humans, and how we live and work together. AI in any form gives a new twist to the big questions that humanity has concerned itself with for centuries: What is consciousness? How should we treat each other: what is right and what is wrong? How do our creations change the world we live in? Which challenges will we have to face in the future?
This paper discusses social and ethical challenges that arise through the application of artificial intelligence to the genetic analysis of humans. AI can be used to calculate individuals’ polygenic scores, which statistically predict character traits and behavioral dispositions. This technology might foster the selection of unborn children according to their likelihood of engaging in socially compliant behavior or of developing a certain level of cognitive ability. By exercising moral and cognitive enhancement on individuals and groups, AI-enhanced personal eugenics risks undermining or destroying essential characteristics of humanity, such as moral responsibility and autonomy. It is shown that an argument for rejecting this technology requires certain philosophical or theological assumptions concerning the essence or teleology of human nature.
There are still many anthropomorphizing misunderstandings about what it means for a machine to be intelligent. In searching for common ground between human and artificial thinking processes, I suggest reconsidering how human intelligence can be conceived in mechanistic terms. To do this, I take a closer look at the notion of the spiritual automaton in the work of the 17th-century philosopher Baruch de Spinoza. His writings contain a mechanical theory of the mind that predates the explicit comparison between minds and machines. In a second step, I sketch some ways in which current methods in AI studies could be seen to reflect aspects of the Spinozistic model of the mind.
Rapid developments in the field of Artificial Intelligence prompt many questions and dilemmas; amongst others, they might force us to formulate definitive answers to controversial ethical questions. Future AGI systems will need to possess a certain degree of autonomy to yield the results we aim for, yet their decisions should preferably not be made arbitrarily, especially when it comes to moral dilemma scenarios. Instead, their actions should be guided by moral maxims we consider valid for epistemically justified reasons. The responsibility involved in equipping AGI systems with an appropriate normative framework, and with premises that prevent them from becoming a threat to humankind, is immense. At the same time, our epistemic access to normative questions remains extremely limited. Since the very early days of philosophy, human beings have tried to figure out what they ought to do, and hence what the morally good that we should strive for might be. Although philosophers have kept coming up with new moral theories, claiming to provide an answer or at least a generalizable approach, a broader consensus on the matter that is not highly culture-bound has never been reached. In this chapter, the compartmentalization of different branches of AI applications is examined as a possible approach to the aforementioned epistemic and normative issues. The core idea is to tackle epistemic and ethical questions before developing technologically powerful tools. By making use of the allegedly higher cognitive capabilities of AI systems to expand our epistemic access to the normative, we could eventually figure out how to guide future AGI systems safely, based on a normative position we have carefully worked out.
The progress towards a society in which robots are our daily attendants seems inevitable. Sharing our workplaces, our homes, and public squares with robots calls for an exploration of how we want and need to organize our cohabitation with these increasingly autonomous machines. Not only the question of how robots should treat humans or the surrounding world, but also the questions of how humans should treat robots, and how robots should treat each other, may and should be asked. Considering the Kantian idea that possessing dignity is grounded in autonomy, and the fact that robots are becoming increasingly autonomous and rational, one of these questions might be whether robots can have dignity. Two issues must therefore be addressed before answering this question: 1. What are robots, and why should we think about “robot dignity” at all? and 2. What is dignity? The answer to the first question is necessary to understand the object of investigation and will be considered briefly. The second, more complex question requires a short glimpse at existing theories and the history of the term before a proposal is made on how to understand dignity. Finally, it will be explained why robots cannot rightly be seen as possessors of dignity.
The self-driving car trolley problem has received undue focus in the automation ethics literature. It is unrealistic in two important ways: first, it fails to respect well-established truths of vehicle dynamics relating to tire friction, and second, it misrepresents the information environment that self-driving cars will respond to. Further, the problem induces readers to treat the car as an agent, thereby shielding the agency of car designers and operators from scrutiny. This is illustrated through an analysis of the reporting of the first pedestrian fatality caused by a self-driving car, in Tempe, Arizona, on March 18, 2018.
In recent years, modern technology has transformed almost all areas of our lives, some of them fundamentally, through automation. A new dimension of this technical progress can now be seen in “artificial intelligence”, that is, in computers whose algorithmic processing is known as “machine learning”: the “artificial” generation of knowledge from experience. Progress in AI research has led the relevant literature to already ascribe to many AI systems the ability to think, learn, predict, analyze, decide, know, and plan. Even autonomy and self-awareness are already attributed to some systems. In order to answer the question of whether such attributions are justified, there is a considerable need for conceptual clarification on this point. In my paper, I present some considerations from the philosophy of mind and the philosophy of technology that could help clarify certain terms. For this purpose, various working definitions of “consciousness”, “thinking”, and “intelligence” will be presented, and specificities of human consciousness and thinking will be highlighted. Subsequently, it will be discussed whether these specific qualities can already be found in today’s AI systems and whether there are fundamental limits of AI systems.
Discussion of artificial intelligence (AI) is pervasive in contemporary academia. However, from a philosophical point of view, one of the most interesting questions is whether artificial intelligence could entail the existence of a conscious entity. In this paper, the first part provides a definition of artificial intelligence and distinguishes between instrumental artificial intelligence and artificial general intelligence (AGI). The second part analyses our motives for constructing AGI in addition to instrumental AI. The third part deals with arguments for and against the possibility of constructing AGI. The final part provides an argument for the conclusion that a being with AGI could indeed be conscious, and therefore should be considered ethically as an end-in-itself.
The focus of this article is a question that has been neglected in debates about digitalization: Could machines replace human scientists? To provide an intelligible answer to it, we need to answer a further question: What is it that makes (or constitutes) a scientist? I offer an answer to this question by proposing a new demarcation criterion for science which I call “the discoverability criterion”. I proceed as follows: (1) I explain why the target question of this article is important, and (2) show that it leads to a variant of the demarcation problem of science. (3) By arguing that it is probably an essential feature of science that we can make scientific discoveries, I suggest a novel way of dealing with this problem by proposing a new demarcation criterion. Before introducing it, (4) I analyze an exemplary case of a real scientific discovery, and (5) argue that scientific discovery processes have a general underlying structure. (6) I introduce my discoverability criterion for science and present my master argument that helps us understand which criteria have to be fulfilled in order to decide whether machines can replace human scientists or not. (7) I conclude by answering the article’s target question and bringing forward a take-home message.