In recent years, creative systems capable of producing new and innovative solutions in fields such as quantum physics, fine arts, robotics, or defense and security have emerged. These systems, called machine invention systems, have the potential to revolutionize the current standard invention process. Because of this potential, there are widespread implications to consider at both the societal and organizational levels: changes in workforce structure, intellectual property rights, and ethical concerns regarding such autonomous systems. On the organizational side, the integration of such machine systems into the largely human-driven innovation process requires careful consideration. Delegation of decisions to human agents, technology assessments, and concepts like the complementarity approach, which strengthens the interplay between human and machine in innovation processes, appear to be among the solutions needed to address the changes brought about by machine invention systems. Future work in this field will also address the integration of machine-based invention from a different perspective, focusing on the characteristics such systems need in order to be more easily accepted by human users and customers.
According to computationalist theories of the mind, consciousness does not depend on any specific physical substrate, such as carbon-based biological material, but automatically arises out of the right kind of computational structure. Even though this thesis has become an unquestioned assumption in much of the current AI literature, there exist only a few direct arguments in favor of it. One such argument, and probably the most prominent one from the philosophy of mind, is David Chalmers’ dancing-qualia argument. The aim of this paper is to challenge this argument from a hitherto neglected angle, by arguing that it is undermined by experimental results in neurobiology regarding the workings of phenomenal memory. However, I will argue that Chalmers’ overall case for the possibility of conscious AI can still be vindicated.
In a recent proposal issued by the European Parliament it was suggested that robots and artificial intelligence might need to be considered “electronic persons” for the purposes of social and legal integration. The very idea sparked controversy, and it has been met with considerable resistance. Underlying the controversy, however, is an important philosophical question: Under what conditions would it be necessary for robots, AI, or other socially interactive, autonomous systems to have some claim to moral and legal standing? Under what conditions would a technological artifact need to be considered more than a mere instrument of human action and have some legitimate claim to independent social status? Or to put it more directly: Can or should robots ever have anything like rights? This essay takes up and investigates these questions. It reviews and critiques current thinking (or lack of thinking) about this subject matter, maps the terrain of the set of available answers that have been provided in the existing literature, and develops an alternative way of responding to and taking responsibility for the opportunities and challenges that we now confront in the face or the faceplate of increasingly social and interactive robots.
Now that robots are leaving their cages on factory shop floors and in laboratories, they are confronted with human everyday worlds. With this transfer from being exclusively concerned with technical systems to building socio-technical systems, everyday worlds become a wicked problem of robotics: they are interpretative, highly context-dependent, and products of constant interactive negotiation. The key to understanding and modelling this wicked problem is acknowledging the complexity of human interaction. We stress three basic factors constitutive of this complexity: indexicality, reciprocity of expectations, and double contingency. Although it is in the nature of these factors that they cannot be formalized, roboticists are forced to translate them into complicated, rather than complex, formalizations.
This contribution critically discusses different social roles of robots that interact with children for educational purposes. To this end, the roles of teacher, peer, and novice will be analyzed on the basis of a literature review describing the functions and characteristics of these roles. Didactic concepts explaining how learners acquire knowledge and skills under different instruction styles, together with concepts from developmental psychology explaining how children perceive social robots, will serve as criteria for assessing the adequacy of these different roles.
Mental activities are a fascinating mystery that humans have tried to unveil since the very beginning of philosophy. We all try to understand how other people “tick” and formulate hypotheses, predictions, expectations and, more broadly, representations of others’ goals, desires, and intentions, and of the behaviors following from them. We “think” spontaneously about others’ and our own mental states. The advent of new technologies – seemingly smart artificial agents – is giving researchers new environments in which to test mindreading models, pushing the cognitive flexibility of the human social brain from the natural domain towards the artificial.
Unlike other human-made objects, the ability of intelligent systems to exhibit agency, and even appear anthropomorphic, leads to moral confusion about their status in society. As Himma states, “If something walks, talks, and behaves enough like me, I might not be justified in thinking that it has a mind, but I surely have an obligation, if our ordinary reactions regarding other people are correct, to treat them as if they are moral agents.” Here, I present an evaluation of the requirements for moral agency and moral patiency. I examine human morality through a high-level ontology of the human action-selection system. Then, drawing parallels between natural and artificial intelligence, I discuss the limitations and bottlenecks of intelligence, demonstrating how an ‘all-powerful’ Artificial General Intelligence would not only entail omniscience but would also be impossible. I demonstrate throughout this chapter how culture determines the moral status of all entities, as morality and law are human-made ‘fictions’ that help us guide our actions. This means that our moral spectrum can be altered to include machines. However, there are both descriptive and normative arguments for why such a move is not only avoidable but also one that should be avoided.