Superintelligence as Moral Philosopher
Non-biological superintelligent artificial minds are scary things. Some theorists believe that if they came to exist, they might easily destroy human civilization, even if destroying human civilization were not a high priority for them. Consequently, philosophers are increasingly worried about the future of human beings, and of much of the rest of the biological world, in the face of the potential development of superintelligent AI. This paper explores whether the increased attention philosophers have paid to the dangers of superintelligent AI is justified. I argue that, even if such an AI is developed and even if it is able to gain enormous knowledge, there are several reasons to believe that its motivation will be more complicated than most theorists have supposed thus far. In particular, I explore the relationship between a superintelligent AI's intelligence and its moral reasoning, in an effort to show that there is a realistic possibility that the AI will be unable to act, owing to conflicts between the various goals it might adopt. Although no firm conclusions can be drawn at present, I seek to show that further work is needed and to provide a framework for future discussion.
Document Type: Research Article
Affiliations: Department of Philosophy, Saint Joseph's University, 5600 City Ave., Philadelphia, PA 19131 USA., Email: [email protected]
Publication date: January 1, 2017