According to computationalist theories of the mind, consciousness does not depend on any specific physical substrate, such as carbon-based biological material, but automatically arises out of the right kind of computational structure. Even though this thesis has become an unquestioned assumption in much of the current AI literature, only a few direct arguments in its favor exist. One argument for computationalism from the philosophy of mind, and probably the most prominent, is David Chalmers’ dancing-qualia argument. The aim of this paper is to challenge this argument from a hitherto neglected angle, by arguing that it is undermined by experimental results in neurobiology regarding the workings of phenomenal memory. However, I will argue that Chalmers’ overall case for the possibility of conscious AI can still be vindicated.
In a recent proposal issued by the European Parliament it was suggested that robots and artificial intelligence might need to be considered “electronic persons” for the purposes of social and legal integration. The very idea sparked controversy, and it has been met with considerable resistance. Underlying the controversy, however, is an important philosophical question: Under what conditions would it be necessary for robots, AI, or other socially interactive, autonomous systems to have some claim to moral and legal standing? Under what conditions would a technological artifact need to be considered more than a mere instrument of human action and have some legitimate claim to independent social status? Or to put it more directly: Can or should robots ever have anything like rights? This essay takes up and investigates these questions. It reviews and critiques current thinking (or lack of thinking) about this subject matter, maps the terrain of the set of available answers that have been provided in the existing literature, and develops an alternative way of responding to and taking responsibility for the opportunities and challenges that we now confront in the face or the faceplate of increasingly social and interactive robots.
Now that robots are leaving their cages on factory shop floors and in laboratories, they are confronted with human everyday worlds. With this transfer from being exclusively concerned with technical systems to building socio-technical systems, everyday worlds become a wicked problem of robotics: they are interpretative, highly context-dependent, and products of constant interactive negotiation. The key to understanding and modelling this wicked problem is acknowledging the complexity of human interaction. We stress three basic factors that are constitutive of this complexity: indexicality, reciprocity of expectations, and double contingency. Although it is in the nature of these factors that they cannot be formalized, roboticists are forced to translate them into complicated, rather than complex, formalizations.
This contribution aims at critically discussing different social roles of robots that interact with children for educational purposes. For this, the roles of a teacher, a peer, and a novice will be analyzed on the basis of a literature review describing the functions and characteristics of these roles. Didactic concepts explaining how learners acquire knowledge and skills under different instruction styles and concepts of developmental psychology explaining how children perceive social robots will serve as criteria for the assessment of the adequacy of these different roles.
Mental activities are a fascinating mystery that humans have tried to unveil since the very beginning of philosophy. We all try to understand how other people “tick” and formulate hypotheses, predictions, expectations and, more broadly, representations of others’ goals, desires, and intentions, and of the behaviors following from those. We “think” spontaneously about others’ and our own mental states. The advent of new technologies – seemingly smart artificial agents – is giving researchers new environments in which to test mindreading models, pushing the cognitive flexibility of the human social brain from the natural domain toward the artificial.
Unlike other human-made objects, the ability of intelligent systems to exhibit agency, and even appear anthropomorphic, leads to moral confusion about their status in society. As Himma states, “If something walks, talks, and behaves enough like me, I might not be justified in thinking that it has a mind, but I surely have an obligation, if our ordinary reactions regarding other people are correct, to treat them as if they are moral agents.” Here, I present an evaluation of the requirements for moral agency and moral patiency. I examine human morality through a presentation of a high-level ontology of the human action-selection system. Then, drawing parallels between natural and artificial intelligence, I discuss the limitations and bottlenecks of intelligence, demonstrating how an ‘all-powerful’ Artificial General Intelligence would not only entail omniscience but also be impossible. I demonstrate throughout this Chapter how culture determines the moral status of all entities, as morality and law are human-made ‘fictions’ that help us guide our actions. This means that our moral spectrum can be altered to include machines. However, there are both descriptive and normative arguments for why such a move is not only avoidable but should also be avoided.
The question of whether automata can be intelligent or even have a mind has been an ongoing controversy for centuries. Even though we intuitively agree with the positions of Descartes and Searle that automata neither have a mind nor understand what they are doing the way we do, we nevertheless observe the increasing occurrence of automata performing tasks that are considered to require mental abilities (e.g. AlphaZero, Google Duplex, and Project Debater). The article outlines the different positions in the controversy and argues for the thesis that, based on recent advances in artificial intelligence, automata could in principle perform other complex tasks as well, and in particular moral decision-making.