Awad, Edmond; Levine, Sydney; Anderson, Michael; Anderson, Susan Leigh; Conitzer, Vincent; Crockett, M.J.; Everett, Jim A.C.; Evgeniou, Theodoros; Gopnik, Alison; Jamison, Julian C.; Kim, Tae Wan; Liao, S. Matthew; Meyer, Michelle N.; Mikhail, John; Opoku-Agyemang, Kweku; Schaich Borg, Jana; Schroeder, Juliana; Sinnott-Armstrong, Walter; Slavkovik, Marija; Tenenbaum, Josh B.
Journal article, Peer reviewed
Original version: Trends in Cognitive Sciences. 2022, 26 (5), 388-405. https://doi.org/10.1016/j.tics.2022.02.009
Technological advances are enabling roles for machines that present novel ethical challenges. The study of 'AI ethics' has emerged to confront these challenges and connects perspectives from philosophy, computer science, law, and economics. Less represented in these interdisciplinary efforts is the perspective of cognitive science. We propose a framework – computational ethics – that specifies how the ethical challenges of AI can be partially addressed by incorporating the study of human moral decision-making. The driver of this framework is a computational version of reflective equilibrium (RE), an approach that seeks coherence between considered judgments and governing principles. The framework has two goals: (i) to inform the engineering of ethical AI systems, and (ii) to characterize human moral judgment and decision-making in computational terms. Working jointly towards these two goals will create the opportunity to integrate diverse research questions, bring together multiple academic communities, uncover new interdisciplinary research topics, and shed light on centuries-old philosophical questions.