Should Humanity Build a Global AI Nanny to Delay the Singularity Until It’s Better Understood?

Abstract:

Chalmers suggests that, if a Singularity fails to occur in the next few centuries, the most likely reason will be 'motivational defeaters', i.e. at some point humanity or human-level AI may abandon the effort to create dramatically superhuman artificial general intelligence. Here I explore one (I argue) plausible way in which that might happen: the deliberate human creation of an 'AI Nanny' with mildly superhuman intelligence and surveillance powers, designed either to forestall the Singularity permanently, or to delay it until humanity more fully understands how to execute a Singularity in a positive way. It is suggested that, as technology progresses, humanity may find the creation of an AI Nanny desirable as a means of protecting against the destructive potential of various advanced technologies such as AI, nanotechnology, and synthetic biology.

Document Type: Research Article

Affiliations: Email: ben@goertzel.org

Publication date: January 1, 2012
