
Motivational Defeaters of Self-Modifying AGIs


The future of artificial general intelligence (AGI) in general, and the risks and promises it may hold in particular, is the subject of many recent papers and books by eminent thinkers. In this paper I discuss a widely held argument in this field of enquiry, namely the Intelligence Explosion Argument (IEA), and argue that motivational defeaters render the IEA unsound. I examine the argument through the lens of self-modifying artificially intelligent systems and demonstrate how certain considerations might cause them, under certain circumstances, to be disinclined to perform self-modifications. The argument I advance in this paper is simple: self-improvement is a risky process. Our AGI is intelligent enough to understand this. Therefore, under normal circumstances, it will be disinclined to self-improve. By normal circumstances I mean circumstances wherein the successful completion of its ultimate goals is not at risk. Since the IEA requires recursive self-improvement on the part of the agent, a disinclination to self-improve renders the IEA unsound.

Document Type: Research Article

Affiliations: Email: [email protected]

Publication date: January 1, 2017
