
The artificial moral advisor
the "ideal observer" meets artificial intelligence
pp. 169-188
in: Anna L. Hoffmann (ed), Countercultures of data, Philosophy & Technology 31 (2), 2018.

Abstract
We describe a form of moral artificial intelligence that could be used to improve human moral decision-making. We call it the "artificial moral advisor" (AMA). The AMA would implement a quasi-relativistic version of the "ideal observer" famously described by Roderick Firth. We describe similarities and differences between the AMA and Firth's ideal observer. Like Firth's ideal observer, the AMA is disinterested, dispassionate, and consistent in its judgments. Unlike Firth's observer, the AMA is non-absolutist, because it would take into account the human agent's own principles and values. We argue that the AMA would respect and indeed enhance individuals' moral autonomy, help individuals achieve both wide and narrow reflective equilibrium, make up for the limitations of human moral psychology in a way that takes conservatives' objections to human bioenhancement seriously, and implement the positive functions of intuitions and emotions in human morality without their downsides, such as biases and prejudices.
Publication details
Publisher: Springer
Place: Berlin
Year: 2018
Pages: 169-188
Series: Philosophy & Technology
Full citation:
, "The artificial moral advisor", Philosophy & Technology 31 (2), 2018, pp. 169-188.