Unlike other human-made objects, intelligent systems can exhibit agency, and even appear anthropomorphic, which leads to moral confusion about their status in society. As Himma states, “If something walks, talks, and behaves enough like me, I might not be justified in thinking that it has a mind, but I surely have an obligation, if our ordinary reactions regarding other people are correct, to treat them as if they are moral agents.” Here, I present an evaluation of the requirements for moral agency and moral patiency. I examine human morality through a high-level ontology of the human action-selection system. Then, drawing parallels between natural and artificial intelligence, I discuss the limitations and bottlenecks of intelligence, demonstrating how an ‘all-powerful’ Artificial General Intelligence would not only entail omniscience but would also be impossible. Throughout this Chapter, I demonstrate how culture determines the moral status of all entities, since morality and law are human-made ‘fictions’ that help us guide our actions. This means that our moral spectrum can be altered to include machines. However, there are both descriptive and normative arguments for why such a move is not only avoidable but should also be avoided.