The focus of this article is a question that has been neglected in debates about digitalization: Could machines replace human scientists? To provide an intelligible answer, we need to answer a further question: What is it that makes (or constitutes) a scientist? I offer an answer by proposing a new demarcation criterion for science, which I call “the discoverability criterion”. I proceed as follows: (1) I explain why the target question of this article is important, and (2) show that it leads to a variant of the demarcation problem of science. (3) By arguing that it is probably an essential feature of science that we can make scientific discoveries, I suggest a novel way of dealing with this problem by proposing a new demarcation criterion. Before introducing it, (4) I analyze an exemplary case of a real scientific discovery, and (5) argue that scientific discovery processes have a general underlying structure. (6) I introduce my discoverability criterion for science and present my master argument, which helps us understand which criteria have to be fulfilled in order to decide whether machines can replace human scientists. (7) I conclude by answering the article’s target question and offering a take-home message.
The attempt to find consistent moral values for particular societies has been part of descriptive as well as normative ethical considerations at least since Socrates’ philosophical investigations. In this sense, I call the content of some current discussions on the possibility of moral decision-making ability in intelligent machines, robots, or computer programs (in accordance with corresponding ethical criteria) old wine in new bottles, since such attempts deal with questions that were already raised in early philosophical writings. The second main claim of this article is that most approaches in the ethics of Artificial Intelligence (AI) currently tend to talk about AI in general, which, as will be shown, is likely to lead to either misunderstandings or irrelevant statements that are useful neither for ethical debates nor for technical realizations.
One of the main questions implied in what we today call “digitalization” is not what happens when computers (or, in our case, robots) think, but rather whether it makes sense to talk of computers, robots, or any kind of machine as if they were capable of thinking. Or, formulated differently: Does it make sense to call machines “intelligent”? It goes without saying that the locus classicus of this question is Alan Turing’s pathbreaking article “Computing Machinery and Intelligence”, published in 1950 in Mind. What I will be dealing with in what follows will therefore have the status of some modest, philosophically oriented footnotes to Alan Turing’s main idea. These footnotes will be developed in five steps: Before even entering the arena of robots and Artificial Intelligence, I will try to open up a space for thorough reflection that will enable us to discuss these issues without following the beaten track. To that end, we will first ask whether “bullshit makes sense” by critically examining “the Digital”, a notion taken for granted by almost everybody (1), and then ask the seemingly pedestrian question of how intelligent the CIA is (2). This sets the scene for recalling a very influential argument in early modern sceptical rationalist philosophy: Descartes’ “Deus” or “Genius Malignus” argument (3). From there it will be easy to proceed to an important but often neglected implication of the “Turing test” (4), culminating in a revision and rehabilitation of one of the most abhorred concepts of modern science and philosophy: deception (5), thus launching what I would like to call an “anti-Cartesian experiment”.
Robots are entering all sorts of social spaces, including areas such as religion and spirituality. Social robots in particular not only function as a new medium for communication but are also perceived as autonomous social counterparts. Using social robots in religious settings raises questions that cross disciplinary boundaries. Questions from human-robot interaction (HRI) ask how to design the robot user experience. Theological questions touch on existential aspects and ask whether robots can obtain spiritual competence and how they might fundamentally change previously unautomated religious practice. So far, work done by HRI researchers and theologians has remained largely siloed within the two communities. In this work, we attempt to interweave disciplinary views on social robotics in religious communication, using the example of the blessing robot BlessU-2. We discuss a discursive design study in which the robot expressed Protestant blessings to thousands of visitors to a public exhibition. More than 2,000 people wrote comments on their encounters with the robot. From these comments, we analyzed both experiential aspects, such as the acceptability and design features of this social robot, and how the participants reflected on important existential questions, such as the origin of a blessing. We conclude that a probing approach and the subsequent collaborative analysis of empirical data were fruitful for both Protestant Theology and HRI. Central challenges of social robots could thus be addressed more holistically, from the perspective of individual user experience through to theological and ethical reflection on individual and social existential needs.
Theological research has sought to conceptualise AI as an image of humankind in accordance with the biblical idea that the latter was created by God in God’s image. This article argues, however, that AI is more adequately understood as a dangerous servant in the process of dividing human society into those who benefit from it and those who suffer the consequences. But we can envision a possible alternative in the spirit of the biblical creation narrative: AI may fulfil technology by assuming a human face and becoming humankind’s double in an epistemic partnership.
The impending introduction of self-driving cars poses a new stage of complexity, not only in its technical requirements but in the ethical challenges it evokes. The question of which ethical principles to use for the programming of crash algorithms, especially in response to so-called dilemma situations, is one of the most controversial moral issues under discussion. This paper critically investigates the rationale behind rule utilitarianism, asking whether and how it might be adequate to guide the ethical behaviour of autonomous cars in driving dilemmas. Three core aspects of the rule utilitarian concept are discussed with regard to their relevance for the given context: the universalization principle, the ambivalence of compliance issues, and the demandingness objection. It is concluded that a rule utilitarian approach might be useful for solving driverless car dilemmas only to a limited extent. In particular, it cannot provide the exclusive ethical criterion when evaluated from a practical point of view. However, it might still be of conceptual value in the context of a pluralist solution.
In recent years, creative systems capable of producing new and innovative solutions in fields such as quantum physics, fine arts, robotics, or defense and security have emerged. These systems, called machine invention systems, have the potential to revolutionize the current standard invention process. Because of this potential, there are widespread implications to consider, on societal and organizational levels alike: changes in workforce structure, intellectual property rights, and ethical concerns regarding such autonomous systems. On the organizational side, integrating such machine systems into the largely human-driven innovation process requires careful consideration. Delegation of decisions to human agents, technology assessments, and concepts like the complementarity approach, which is used to strengthen the interplay between human and machine in innovation processes, seem to be a few of the solutions needed to address the changes brought about by machine invention systems. Future work in this field will also address the integration of machine-based invention from a different perspective, focusing on the characteristics such systems need in order to be more easily accepted by human users and customers.
According to computationalist theories of the mind, consciousness does not depend on any specific physical substrate, such as carbon-based biological material, but automatically arises out of the right kind of computational structure. Even though this thesis has become an unquestioned assumption in most of the current AI literature, there exist only a few direct arguments in favor of it. One argument for computationalism from the philosophy of mind, and probably the most prominent, is David Chalmers’ dancing-qualia argument. The aim of this paper is to challenge this argument from a hitherto neglected angle, by arguing that it is undermined by experimental results in neurobiology regarding the workings of phenomenal memory. However, I will argue that Chalmers’ overall case for the possibility of conscious AI can still be vindicated.