In recent years, modern technology has transformed almost all areas of our lives, some of them fundamentally. A new dimension of this technical progress can now be seen in “artificial intelligence”, that is, in computers whose algorithmic processing, known as “machine learning”, amounts to the “artificial” generation of knowledge from experience. Progress in AI research has led the relevant literature to ascribe to many AI systems the ability to think, learn, predict, analyze, decide, know, and plan; even autonomy and self-awareness are already attributed to some systems. In order to answer the question of whether such attributions are justified, considerable conceptual clarification is needed. In my paper I would like to present some considerations from the philosophy of mind and the philosophy of technology that could help to achieve a clarification of certain terms. For this purpose, various working definitions of “consciousness”, “thinking” and “intelligence” will be presented, highlighting specific features of human consciousness and thinking. Subsequently, it will be discussed whether these specific qualities can already be found in today’s AI systems and whether AI systems face fundamental limits.
Discussion of artificial intelligence (AI) is pervasive in current academia. However, it seems that from a philosophical point of view one of the most interesting questions is whether artificial intelligence could entail the existence of a conscious entity. In this paper, the first part provides a definition of artificial intelligence and distinguishes between instrumental artificial intelligence and artificial general intelligence (AGI). The second part analyses our motives for constructing AGI in addition to instrumental AI. The third part deals with arguments for and against the possibility of constructing AGI. The final part provides an argument for the conclusion that a being with AGI could indeed be conscious, and therefore should be considered ethically as an end-in-itself.
The focus of this article is a question that has been neglected in debates about digitalization: Could machines replace human scientists? To provide an intelligible answer to it, we need to answer a further question: What is it that makes (or constitutes) a scientist? I offer an answer to this question by proposing a new demarcation criterion for science which I call “the discoverability criterion”. I proceed as follows: (1) I explain why the target question of this article is important, and (2) show that it leads to a variant of the demarcation problem of science. (3) By arguing that it is probably an essential feature of science that we can make scientific discoveries, I suggest a novel way of dealing with this problem by proposing a new demarcation criterion. Before introducing it, (4) I analyze an exemplary case of a real scientific discovery, and (5) argue that scientific discovery processes have a general underlying structure. (6) I introduce my discoverability criterion for science and present my master argument that helps us understand which criteria have to be fulfilled in order to decide whether machines can replace human scientists or not. (7) I conclude by answering the article’s target question and bringing forward a take-home message.
The attempt to find consistent moral values for certain societies has been part of descriptive as well as normative ethical considerations at least since Socrates’ philosophical investigations. In this sense I call the content of some current discussions on the possibility of moral decision-making ability in intelligent machines, robots or computer programs (in accordance with corresponding ethical criteria) old wine in new bottles, since such attempts deal with questions that were already raised in early philosophical writings. The second main claim of this article is that most approaches to the ethics of Artificial Intelligence (AI) currently tend to talk about AI in general, which, as will be shown, is likely to lead to misunderstandings or irrelevant statements that are useful neither for ethical debates nor for technical realizations.
One of the main questions implied in what we today call “digitalization” is not what happens when computers (or, in our case, robots) think, but rather whether it makes sense to talk of computers, robots, or any kind of machine as if they were capable of thinking. Or, formulated differently: Does it make sense to call machines “intelligent”? It goes without saying that the locus classicus of this question is Alan Turing’s pathbreaking article “Computing Machinery and Intelligence”, published in 1950 in Mind. What I will be dealing with in what follows will therefore merely have the status of some philosophically oriented, modest footnotes to Turing’s main idea. These footnotes will be developed in five steps: Before even entering the arena of robots and artificial intelligence, I will try to open up a space for thorough reflection that enables us to discuss these issues without following the beaten track. To that end, we will first ask whether “bullshit makes sense” by critically examining “the Digital”, a notion taken for granted by almost everybody (1), and will then ask the seemingly very pedestrian question of how intelligent the CIA is (2). This will set the scene for recalling a very influential argument in early modern sceptical rationalist philosophy: Descartes’ “Deus” or “Genius Malignus” argument (3). From there it will be easy to proceed to an important but often neglected implication of the “Turing test” (4), culminating in a revision and rehabilitation of one of the most abhorred concepts of modern science and philosophy: deception (5), thus launching what I would like to call an “anti-Cartesian experiment”.
Robots are entering all sorts of social spaces, including areas such as religion and spirituality. Social robots in particular not only function as a new medium for communication but are also perceived as autonomous social counterparts. Using social robots in religious settings raises questions that cross disciplinary boundaries. Questions from human-robot interaction (HRI) ask how to design the robot user experience. Theological questions touch existential aspects and ask whether robots can obtain spiritual competence and how they might fundamentally change previously unautomated religious practice. So far, work done by HRI researchers and theologians has remained largely siloed within the two communities. In this work, we attempt to interweave disciplinary views on social robotics in religious communication using the example of the blessing robot BlessU-2. We discuss a discursive design study in which the robot expressed Protestant blessings to thousands of visitors to a public exhibition. More than 2,000 people wrote comments on their encounters with the robot. From these comments we analyzed experiential aspects such as the acceptability and design features of this social robot, as well as how the participants reflected on important existential questions such as the origin of a blessing. We conclude that a probing approach and the subsequent collaborative analysis of empirical data were fruitful for both Protestant Theology and HRI. Central challenges of social robots could thus be addressed more holistically, ranging from individual user experience to the theological and ethical reflection on individual and social existential needs.
Theological research has sought to conceptualise AI as an image of humankind in accordance with the biblical idea that the latter was created by God in God’s image. This article argues, however, that AI is more adequately understood as a dangerous servant in the process of dividing human society into those who benefit from it and those who suffer the consequences. But we can envision a possible alternative in the spirit of the biblical creation narrative: AI may fulfil technology by assuming a human face and becoming humankind’s double in an epistemic partnership.