Abstract

The impending introduction of self-driving cars poses a new stage of complexity not only in technical requirements but in the ethical challenges it evokes. The question of which ethical principles to use for the programming of crash algorithms, especially in response to so-called dilemma situations, is one of the most controversial moral issues discussed. This paper critically investigates the rationale behind rule utilitarianism as to whether and how it might be adequate to guide the ethical behaviour of autonomous cars in driving dilemmas. Three core aspects of the rule utilitarian concept are discussed with regard to their relevance to the given context: the universalization principle, the ambivalence of compliance issues, and the demandingness objection. It is concluded that a rule utilitarian approach might be useful for solving driverless car dilemmas only to a limited extent. In particular, it cannot provide the exclusive ethical criterion when evaluated from a practical point of view. However, it might still be of conceptual value in the context of a pluralist solution.

In: Artificial Intelligence

Abstract

In recent years, creative systems capable of producing new and innovative solutions in fields such as quantum physics, fine arts, robotics, or defense and security have emerged. These systems, called machine invention systems, have the potential to revolutionize the current standard invention process. Because of this potential, there are widespread implications to consider, on a societal and organizational level alike: changes in the workforce structure, intellectual property rights, and ethical concerns regarding such autonomous systems. On the organizational side, the integration of such machine systems into the largely human-driven innovation process requires careful consideration. Delegation of decisions to human agents, technology assessments, and concepts like the complementarity approach, which is used to strengthen the interplay between human and machine in innovation processes, seem to be a few of the solutions needed to address the changes brought about by such machine invention systems. Future work in this field will also address the integration of machine-based invention from a different perspective, focusing on the characteristics such systems need in order to be more easily accepted by human users and customers.

In: Artificial Intelligence
Author: Stefan Reining

Abstract

According to computationalist theories of the mind, consciousness does not depend on any specific physical substrate, such as carbon-based biological material, but automatically arises out of the right kind of computational structure. Even though this thesis has become an unquestioned assumption in most of the current AI literature, there exist only a few direct arguments in favor of it. One, and probably the most prominent, argument for computationalism from the philosophy of mind is David Chalmers’ dancing-qualia argument. The aim of this paper is to challenge this argument from a hitherto neglected angle, by arguing that it is undermined by certain experimental results in neurobiology regarding the workings of phenomenal memory. However, I will argue that Chalmers’ overall case for the possibility of conscious AI can still be vindicated.

In: Artificial Intelligence
Author: David J. Gunkel

Abstract

In a recent proposal issued by the European Parliament it was suggested that robots and artificial intelligence might need to be considered “electronic persons” for the purposes of social and legal integration. The very idea sparked controversy, and it has been met with considerable resistance. Underlying the controversy, however, is an important philosophical question: Under what conditions would it be necessary for robots, AI, or other socially interactive, autonomous systems to have some claim to moral and legal standing? Under what conditions would a technological artifact need to be considered more than a mere instrument of human action and have some legitimate claim to independent social status? Or to put it more directly: Can or should robots ever have anything like rights? This essay takes up and investigates these questions. It reviews and critiques current thinking (or lack of thinking) about this subject matter, maps the terrain of the set of available answers that have been provided in the existing literature, and develops an alternative way of responding to and taking responsibility for the opportunities and challenges that we now confront in the face or the faceplate of increasingly social and interactive robots.

In: Artificial Intelligence

Abstract

Now that robots are leaving their cages on factory shop floors and in laboratories, they are confronted with human everyday worlds. With this transfer from being exclusively concerned with technical systems to building socio-technical systems, everyday worlds become a wicked problem of robotics: they are interpretative, highly context dependent, and products of constant interactive negotiation. The key to understanding and modelling this wicked problem is acknowledging the complexity of human interaction. We stress three basic factors that are constitutive of this complexity: indexicality, reciprocity of expectations, and double contingency. Although it is in the nature of these factors that they cannot be formalized, roboticists are forced to translate them into complicated, rather than complex, formalizations.

In: Artificial Intelligence
Author: Scarlet Siebert

Abstract

This contribution aims at critically discussing the different social roles of robots that interact with children for educational purposes. To this end, the roles of a teacher, a peer, and a novice will be analyzed on the basis of a literature review describing the functions and characteristics of these roles. Didactic concepts explaining how learners acquire knowledge and skills under different instruction styles, and concepts of developmental psychology explaining how children perceive social robots, will serve as criteria for assessing the adequacy of these different roles.

In: Artificial Intelligence

Abstract

Mental activities are a fascinating mystery that humans have tried to unveil since the very beginning of philosophy. We all try to understand how other people “tick” and formulate hypotheses, predictions, expectations and, more broadly, representations of others’ goals, desires, and intentions, and of the behaviors following from those. We “think” spontaneously about others’ and our own mental states. The advent of new technologies – seemingly smart artificial agents – is giving researchers new environments in which to test mindreading models, pushing the cognitive flexibility of the human social brain from the natural domain toward the artificial.

In: Artificial Intelligence