
Robustness to Fundamental Uncertainty in AGI Alignment
The AGI alignment problem has a bimodal distribution of outcomes: most outcomes cluster around the poles of total success and existential, catastrophic failure. Consequently, attempts to solve AGI alignment should, all else being equal, prefer false negatives (ignoring research programmes that would have succeeded) to false positives (pursuing research programmes that unexpectedly fail). I therefore propose a policy of responding to points of philosophical and practical uncertainty in the alignment problem by limiting and choosing necessary assumptions so as to reduce the risk of false positives. Herein I explore in detail two relevant points of uncertainty on which AGI alignment research hinges -- metaethical uncertainty and uncertainty about mental phenomena -- and show how to reduce false positives in response to each.
Document Type: Research Article
Affiliations: Email: [email protected]
Publication date: January 1, 2020