
Should Humanity Build a Global AI Nanny to Delay the Singularity Until It’s Better Understood?


Chalmers suggests that, if a Singularity fails to occur in the next few centuries, the most likely reason will be 'motivational defeaters', i.e. at some point humanity or human-level AI may abandon the effort to create dramatically superhuman artificial general intelligence. Here I explore one (I argue) plausible way in which that might happen: the deliberate human creation of an 'AI Nanny' with mildly superhuman intelligence and surveillance powers, designed either to forestall the Singularity permanently, or to delay it until humanity more fully understands how to bring about a Singularity in a positive way. It is suggested that, as technology progresses, humanity may find the creation of an AI Nanny desirable as a means of protecting against the destructive potential of various advanced technologies such as AI, nanotechnology and synthetic biology.

Document Type: Research Article


Publication date: January 1, 2012
