Bounding the impact of AGI

Abstract: Humans already have a certain level of autonomy, defined here as the capability for voluntary purposive action, and a certain level of rationality, i.e. the capability to reason about the consequences of their own actions and those of others. Under the prevailing concept of artificial general intelligences (AGIs), we envision artificial agents whose levels of autonomy and rationality are at least this high, and possibly considerably higher. We use the method of bounds to argue that AGIs meeting these criteria are subject to Gewirth's dialectical argument to the necessity of morality, which compels them to behave morally, provided Gewirth's argument can be formally shown to be conclusive. The main practical obstacles to bounding AGIs by means of ethical rationalism are also discussed.
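The abstract's proviso — that Gewirth's argument "can be formally shown to be conclusive" — can be illustrated with a toy formalization. The sketch below is not from the paper and is not a serious formalization of Gewirth; the predicate names and the two axioms are illustrative assumptions standing in for the dialectical stages (a purposive agent must claim the generic rights of freedom and well-being for itself, and must universalize that claim to all purposive agents). It shows only the shape such a machine-checked derivation of the Principle of Generic Consistency (PGC) might take, e.g. in Lean:

```lean
-- Illustrative names only; not a formalization from the paper itself.
axiom Agent : Type
axiom actsPurposively : Agent → Prop           -- voluntary purposive action
axiom claimsGenericRights : Agent → Prop       -- claims rights to freedom and well-being
axiom grantsGenericRights : Agent → Agent → Prop  -- a grants those rights to b

-- Dialectical stages 1–3 (collapsed): a purposive agent is committed,
-- on pain of self-contradiction, to claiming the generic rights.
axiom selfClaim : ∀ a, actsPurposively a → claimsGenericRights a

-- Universalizability: whoever claims the generic rights on the sole
-- ground of purposive agency must grant them to every purposive agent.
axiom universalize :
  ∀ a b, claimsGenericRights a → actsPurposively b → grantsGenericRights a b

-- PGC: every purposive agent must grant the generic rights to all
-- purposive agents — derived mechanically from the two axioms above.
theorem pgc : ∀ a b, actsPurposively a → actsPurposively b →
    grantsGenericRights a b :=
  fun a b ha hb => universalize a b (selfClaim a ha) hb
```

The philosophical burden, of course, lies entirely in justifying the two axioms, not in the trivial derivation; the paper's point is precisely that such a justification has not yet been made formally conclusive.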

Keywords: ethical rationalism; formal verification; principle of generic consistency

Document Type: Research Article

Affiliations: Computer and Automation Research Institute, Hungarian Academy of Sciences, Budapest, Hungary

Publication date: 03 July 2014
