Abstract

Rapid developments in the field of Artificial Intelligence prompt many questions and dilemmas; among other things, they might force us to formulate definitive answers to controversial ethical questions. Future AGI systems will need to possess a certain degree of autonomy to yield the results we aim for, yet their decisions should preferably not be made arbitrarily, especially in moral dilemma scenarios. Instead, their actions should be guided by moral maxims we consider valid for epistemically justified reasons. The responsibility involved in equipping AGI systems with an appropriate normative framework, and with premises that prevent them from becoming a threat to humankind, is immense. At the same time, our epistemic access to normative questions remains extremely limited. Since the earliest days of philosophy, human beings have tried to figure out what they ought to do and, hence, what the moral good we should strive for might be. Although philosophers have kept proposing new moral theories that claim to provide an answer, or at least a generalizable approach, a broader consensus that is not highly culture-bound has never been reached. In this chapter, the compartmentalization of different branches of AI applications is examined as a possible approach to these epistemic and normative issues. The core idea is to tackle epistemic and ethical questions before developing technologically powerful tools. By making use of the allegedly higher cognitive capabilities of AI systems to expand our epistemic access to the normative, we could eventually figure out how to guide future AGI systems safely, on the basis of a carefully worked-out normative position.

In: Artificial Intelligence
Author: Carmen Krämer

Abstract

Progress towards a society in which robots are our daily attendants seems inevitable. Sharing our workplaces, our homes, and public squares with robots calls for an exploration of how we want and need to organize our cohabitation with these increasingly autonomous machines. We may, and should, ask not only how robots should treat humans and the surrounding world, but also how humans should treat robots and how robots should treat each other. Considering the Kantian idea that possessing dignity is grounded in autonomy, and the fact that robots are becoming increasingly autonomous and rational, one of these questions might be whether robots can have dignity. Two issues must therefore be addressed before answering this question: 1. What are robots, and why should we think about “robot dignity” at all? and 2. What is dignity? The answer to the first question is necessary to understand the object of investigation and will be considered briefly. The second, more complex question requires a short glimpse at existing theories and the history of the term before a proposal is made on how to understand dignity. Finally, it will be explained why robots cannot rightly be seen as possessors of dignity.

In: Artificial Intelligence
Author: Rebecca Davnall

Abstract

The self-driving car trolley problem has received undue focus in the automation ethics literature. It is unrealistic in two important ways: first, it fails to respect well-established truths of vehicle dynamics relating to tire friction, and second, it misrepresents the information environment that self-driving cars will respond to. Further, the problem induces readers to treat the car as an agent, thereby shielding the agency of car designers and operators from scrutiny. This problem is illustrated through an analysis of the reporting of the first pedestrian fatality caused by a self-driving car, in Tempe, Arizona, on March 18, 2018.

In: Artificial Intelligence
Author: Tobias Müller

Abstract

In recent years, automation and modern technology have transformed almost all areas of our lives, some of them fundamentally. A new dimension of this technical progress can now be seen in “artificial intelligence”, that is, in computers whose algorithmic processing is known as “machine learning”: the “artificial” generation of knowledge from experience. Progress in AI research has led the relevant literature to already assign many AI systems the ability to think, learn, predict, analyze, decide, know, and plan. Even autonomy and self-awareness are already attributed to some systems. In order to answer the question of whether such attributions are justified, considerable conceptual clarification is needed. In this paper I present some considerations from the philosophy of mind and the philosophy of technology that could help clarify certain terms. For this purpose, various working definitions of “consciousness”, “thinking”, and “intelligence” will be presented, highlighting the specific features of human consciousness and thinking. Subsequently, it will be discussed whether these specific qualities can already be found in today’s AI systems and whether there are fundamental limits to AI systems.

In: Artificial Intelligence

Abstract

Discussion of artificial intelligence (AI) is pervasive in current academia. However, from a philosophical point of view, one of the most interesting questions seems to be whether artificial intelligence could entail the existence of a conscious entity. The first part of this paper provides a definition of artificial intelligence and distinguishes between instrumental artificial intelligence and artificial general intelligence (AGI). The second part analyses our motives for constructing AGI in addition to instrumental AI. The third part deals with arguments for and against the possibility of constructing AGI. The final part provides an argument for the conclusion that a being with AGI could indeed be conscious, and therefore should be considered ethically as an end-in-itself.

In: Artificial Intelligence
Author: Jan G. Michel

Abstract

The focus of this article is a question that has been neglected in debates about digitalization: Could machines replace human scientists? To provide an intelligible answer to it, we need to answer a further question: What is it that makes (or constitutes) a scientist? I offer an answer to this question by proposing a new demarcation criterion for science which I call “the discoverability criterion”. I proceed as follows: (1) I explain why the target question of this article is important, and (2) show that it leads to a variant of the demarcation problem of science. (3) By arguing that it is probably an essential feature of science that we can make scientific discoveries, I suggest a novel way of dealing with this problem by proposing a new demarcation criterion. Before introducing it, (4) I analyze an exemplary case of a real scientific discovery, and (5) argue that scientific discovery processes have a general underlying structure. (6) I introduce my discoverability criterion for science and present my master argument that helps us understand which criteria have to be fulfilled in order to decide whether machines can replace human scientists or not. (7) I conclude by answering the article’s target question and bringing forward a take-home message.

In: Artificial Intelligence
Author: Leonie Seng

Abstract

The attempt to find consistent moral values for certain societies has been part of descriptive as well as normative ethical considerations at least since Socrates’ philosophical investigations. In this sense, I call the content of some current discussions on the possibility of moral decision-making ability in intelligent machines, robots, or computer programs (in accordance with corresponding ethical criteria) old wine in new bottles, since such attempts deal with questions that were already raised in early philosophical writings. The second main claim of this article is that most current approaches to the ethics of Artificial Intelligence (AI) tend to talk about AI in general, which, as will be shown, is likely to lead to misunderstandings or irrelevant statements that are useful neither for ethical debates nor for technical realization.

In: Artificial Intelligence

Abstract

One of the main questions implied in what we today call “digitalization” is not what happens when computers (or, in our case, robots) think, but rather whether it makes sense to talk of computers, robots, or any kind of machines as if they were capable of thinking. Or, formulated in a still different way: does it make sense to call machines “intelligent”? It goes without saying that the locus classicus of this question is Alan Turing’s pathbreaking article “Computing Machinery and Intelligence”, published in 1950 in Mind. What I deal with in what follows therefore has the status of some modest, philosophically orientated footnotes to Alan Turing’s main idea. These footnotes will be developed in five steps. Before even entering the arena of robots and Artificial Intelligence, I will try to open up a space for thorough reflection that will enable us to discuss these issues without following the beaten track. To do that, we will first ask whether “bullshit makes sense” by critically dealing with “the Digital”, a notion taken for granted by almost everybody (1), and by then asking the seemingly very pedestrian question of how intelligent the CIA is (2). This will set the scene for recalling a very influential argument in early modern sceptical rationalist philosophy: Descartes’ “Deus” or “Genius Malignus” argument (3). From there it will be easy to proceed to an important but often neglected implication of the “Turing test” (4), culminating in a revision and rehabilitation of one of the most abhorred concepts of modern science and philosophy: deception (5), thus launching what I would like to call an “anti-Cartesian experiment”.

In: Artificial Intelligence