Bounding the impact of AGI
Humans already have a certain level of autonomy, defined here as the capability for voluntary, purposive action, and a certain level of rationality, i.e. the capability of reasoning about the consequences of their own actions and those of others. Under the prevailing concept of artificial general intelligences (AGIs), we envision artificial agents that have at least this high, and possibly considerably higher, levels of autonomy and rationality. We use the method of bounds to argue that AGIs meeting these criteria are subject to Gewirth's dialectical argument to the necessity of morality, compelling them to behave in a moral fashion, provided Gewirth's argument can be formally shown to be conclusive. The main practical obstacles to bounding AGIs by means of ethical rationalism are also discussed.
Document Type: Research Article
Affiliations: Computer and Automation Research Institute, Hungarian Academy of Sciences, Budapest, Hungary
Publication date: July 3, 2014