Abstract

This paper discusses social and ethical challenges that arise through the application of artificial intelligence to the genetic analysis of humans. AI can be used to calculate individuals’ polygenic scores, which statistically predict character traits and behavioral dispositions. This technology might foster the selection of unborn children according to the likelihood of their engaging in socially compliant behavior or developing a certain level of cognitive ability. By exercising moral and cognitive enhancement on individuals and groups, AI-enhanced personal eugenics runs the risk of undermining or destroying essential characteristics of humanity, such as moral responsibility and autonomy. It is shown that an argument for the rejection of this technology requires certain philosophical or theological assumptions concerning the essence or teleology of human nature.
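
Since the abstract's argument turns on polygenic scores, a brief illustration may help: at its core, a polygenic score is a weighted sum of an individual's effect-allele counts across genetic variants. The sketch below uses hypothetical variant identifiers, effect weights, and genotypes purely for illustration; real scores aggregate thousands to millions of variants whose weights are estimated from genome-wide association studies, which is where statistical and machine-learning methods typically enter.

    # Minimal sketch of a polygenic score as a weighted sum of allele counts.
    # All variant IDs, weights, and genotypes here are hypothetical.
    def polygenic_score(genotype: dict[str, int], weights: dict[str, float]) -> float:
        """Sum effect-allele counts (0, 1, or 2 per variant) times their effect weights.

        Variants absent from the genotype contribute zero.
        """
        return sum(w * genotype.get(snp, 0) for snp, w in weights.items())

    weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.08}  # hypothetical effect weights
    genotype = {"rs0001": 2, "rs0002": 1, "rs0003": 0}           # one individual's allele counts
    print(round(polygenic_score(genotype, weights), 2))  # 0.12*2 - 0.05*1 + 0.08*0 = 0.19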

In: Artificial Intelligence
Author: Daniel Neumann

Abstract

There are still many anthropomorphizing misunderstandings about what it means for a machine to be intelligent. In searching for common ground between human and artificial thinking processes, I suggest reconsidering how human intelligence can be conceived in mechanistic terms. To do this, I take a closer look at the notion of the spiritual automaton in the work of the 17th-century philosopher Baruch de Spinoza. In his writings, a mechanical theory of the mind can be found before the comparison between minds and machines even arises. In a second step, I sketch some ways in which current methods in AI studies could be seen to reflect aspects of the Spinozistic model of the mind.

In: Artificial Intelligence

Abstract

Rapid developments in the field of Artificial Intelligence prompt many questions and dilemmas; among other things, they might force us to formulate definitive answers to controversial ethical questions. Future AGI systems will need to possess a certain degree of autonomy to yield the results we aim for, yet their decisions should preferably not be made arbitrarily, especially when it comes to moral dilemma scenarios. Instead, their actions should be guided by moral maxims we consider valid for epistemically justified reasons. The responsibility to equip AGI systems with an appropriate normative framework and with premises that prevent them from becoming a threat to humankind is immense. At the same time, our epistemic access to normative questions remains extremely limited. Since the very early days of philosophy, human beings have tried to figure out what they ought to do, and hence what the morally good that we should strive for might be. Although philosophers have kept coming up with new moral theories, each claiming to provide an answer or at least a generalizable approach, a broader consensus on the matter that is not highly culture-bound has never been reached. In this chapter, the compartmentalization of different branches of AI applications is examined as a possible approach to the aforementioned epistemic and normative issues. The core idea is to tackle epistemic and ethical questions before developing technologically powerful tools. By making use of the allegedly higher cognitive capabilities of AI systems to expand our epistemic access to the normative, we could eventually figure out how to guide future AGI systems safely, based on a normative position we have carefully worked out.

In: Artificial Intelligence