Aristotle tells us that the Nicomachean Ethics is an “inquiry” and an “investigation” (methodos and zētēsis). This paper focuses on an under-appreciated way in which the work is investigative: its employment of an exploratory investigative strategy—that is, its frequent positing, and later revision or even rejection, of merely preliminary positions. Though this may seem like a small point, this aspect of the work’s methodology has important consequences for how we should read it—specifically, we should be open to the possibility that some contradictions in the text result from Aristotle’s employment of this investigative strategy. In the paper, I describe this investigative strategy, discuss what motivates Aristotle to employ it in the work, and go through three contradictions that are plausibly identified as examples of its use—specifically, his claims that courageous people do and do not fear death, that friendship is and is not mutually recognized goodwill, and that virtuous people do and do not choose noble actions for their own sake.
What is the relation, in Plato, between the account of knowledge and the account of inquiry? Is the account of knowledge independent of the account of inquiry? These strike me as important, even pressing, questions. While a great deal of work has been done on Plato’s account of knowledge, and quite a lot is being done on his account of inquiry, I know of only the odd critic who has considered the two together. Critics have generally treated of the two topics—Plato’s account of knowledge and his account of inquiry—as if they were separate, which suggests they have been tacitly supposing that, for Plato, the account of knowledge is independent of the account of inquiry. In this paper, I pose these questions and take them up for investigation. I argue that Plato’s account of knowledge is not independent of his account of inquiry; on the contrary, Plato’s account of knowledge cannot be understood apart from his account of inquiry. I do so with exclusive reference to the Phaedo.
This article treats of whether scepticism, in particular Pyrrhonian scepticism, can be said to deploy a method of any kind. I begin by distinguishing various different notions of method and their relations to the concept of expertise (section 1). I then (section 2) consider Sextus’s account, in the prologue to Outlines of Pyrrhonism, of the Pyrrhonist approach, and how it supposedly differs from those of other groups, sceptical and otherwise. In particular, I consider the central claim that the Pyrrhonist is a continuing investigator (section 3) who, despite refusing to rest satisfied with any answer (or with none), none the less achieves tranquillity; I then ask whether this can avoid being presented as a method for achieving tranquillity, and hence as compromising the purity of sceptical suspension of commitment (section 4). In doing so, I relate—and contrast—the Pyrrhonists’ account of their practice to the ‘Socratic Method’ (section 5), as well as to the argumentative practice of various Academics (section 6), and assess their claim, in so doing, to be offering a way of instruction (section 7). I conclude (section 8) that there is a consistent and interesting sense in which Pyrrhonian scepticism can be absolved of the charge that it incoherently, and crypto-dogmatically, presents itself as offering a method for achieving an intrinsically desirable goal.
I begin this paper with a puzzle: why is Plato’s Parmenides replete with references to Gorgias? While the Eleatic heritage and themes in the dialogue are clear, it is less clear what the point would be of alluding to a well-known sophist. I suggest that the answer has to do with the similarities in the underlying methods employed by both Plato and Gorgias. These similarities, as well as Plato’s recognition of them, suggest that he owes a more significant philosophical and methodological debt to sophists like Gorgias than is often assumed. Further evidence from Plato and Xenophon suggests that Socrates used this very same method, which I call ‘exploring both sides’. I distinguish this Socratic method and its sophistic counterpart in terms of structure, internal aim, and external aim. Doing so allows for a more nuanced understanding of their similarities and differences. It also challenges the outsized influence that popular caricatures of philosophical and sophistic method have had on our understanding of their relationship.
This book discusses major issues of the current AI debate from the perspectives of philosophy, theology, and the social sciences: Can AI have a consciousness? Is superintelligence possible and probable? How does AI change individual and social life? Can there be artificial persons? What influence does AI have on religious worldviews? In Western societies, we are surrounded by artificially intelligent systems. Most of these systems are embedded in online platforms. But embodiments of AI, be it by voice or by actual physical embodiment, give artificially intelligent systems another dimension in terms of their impact on how we perceive these systems, how they shape our communication with them and with fellow humans, and how we live and work together. AI in any form gives a new twist to the big questions that humanity has concerned itself with for centuries: What is consciousness? How should we treat each other: what is right and what is wrong? How do our creations change the world we are living in? What challenges will we have to face in the future?
This paper discusses social and ethical challenges that arise through the application of artificial intelligence to the genetic analysis of humans. AI can be used to calculate individuals’ polygenic scores, which statistically predict character traits and behavioral dispositions. This technology might foster the selection of unborn children according to the likelihood of engaging in socially compliant behavior or developing a certain level of cognitive abilities. By applying moral and cognitive enhancement to individuals and groups, AI-enhanced personal eugenics runs the risk of undermining or destroying essential characteristics of humanity such as moral responsibility and autonomy. It is shown that an argument for the rejection of this technology requires certain philosophical or theological assumptions concerning the essence or teleology of human nature.
There are still many anthropomorphizing misunderstandings about what it means for a machine to be intelligent. In searching for common ground between human and artificial thinking processes, I suggest reconsidering how human intelligence can be conceived in mechanistic terms. To do this, I take a closer look at the notion of the spiritual automaton in the work of the 17th-century philosopher Baruch de Spinoza. In his writings, a mechanical theory of the mind can be found before the comparison between minds and machines had even arisen. In a second step, I sketch some ways in which current methods in AI studies could be seen to reflect aspects of the Spinozistic model of the mind.
Rapid developments in the field of Artificial Intelligence prompt many questions and dilemmas; amongst others, they might force us to formulate definitive answers to controversial ethical questions. Future AGI systems will need to possess a certain degree of autonomy to yield the results we aim for, yet their decisions should preferably not be made arbitrarily—especially when it comes to moral dilemma scenarios. Instead, their actions should be guided by moral maxims we consider valid for epistemically justified reasons. The responsibility to equip AGI systems with an appropriate normative framework and certain premises that prevent them from being a threat to humankind is incredibly great. At the same time, our epistemic access to normative questions remains extremely limited. Since the very early days of philosophy, human beings have tried to figure out what they ought to do, and hence what the morally good that we should strive for might be. Although philosophers have kept coming up with new moral theories, claiming to provide an answer or at least a generalizable approach, it appears that a broader consensus on the matter, one that is not highly culture-bound, has never been reached. In this chapter, the compartmentalization of different branches of AI applications is examined as a possible approach to the aforementioned epistemic and normative issues. The core idea is to tackle epistemic and ethical questions before developing technologically powerful tools. By making use of the allegedly higher cognitive capabilities of AI systems to expand our epistemic access to the normative, we could eventually figure out how to guide future AGI systems safely, based on a normative position we have carefully worked out.
The progress towards a society in which robots are our daily attendants seems to be inevitable. Sharing our workplaces, our homes, and public squares with robots calls for an exploration of how we want and need to organize our cohabitation with these increasingly autonomous machines. Not only the question of how robots should treat humans or the surrounding world, but also the questions of how humans should treat robots, and how robots should treat each other, may and should be asked. Considering the Kantian idea that possessing dignity is grounded in autonomy, and the fact that robots are becoming increasingly autonomous and rational, one of these questions might be whether robots can have dignity. Two issues must therefore be addressed before answering the question: 1. What are robots, and why should we think about “robot dignity” at all? and 2. What is dignity? The answer to the first question is necessary to understand the object of investigation and will be considered briefly. The second, more complex question requires a brief look at existing theories and the history of the term before a proposal is offered for how to understand dignity. Finally, it is explained why robots cannot rightly be seen as possessors of dignity.
The self-driving car trolley problem has received undue focus in the automation ethics literature. It is unrealistic in two important ways: first, it fails to respect well-established truths of vehicle dynamics relating to tire friction, and second, it misrepresents the information environment that self-driving cars will respond to. Further, the problem induces readers to treat the car as an agent, thereby shielding the agency of car designers and operators from scrutiny. This is illustrated through an analysis of the reporting of the first pedestrian fatality caused by a self-driving car, in Tempe, Arizona, on March 18, 2018.