Universal empathy and ethical bias for artificial general intelligence

Rational agents are usually built to maximise rewards. However, artificial general intelligence (AGI) agents can find undesirable ways of maximising any predefined reward function; value learning is therefore crucial for safe AGI. We assume that what is valuable are generalised states of the world, not rewards themselves, and propose an extension of AIXI in which rewards are used only to bootstrap hierarchical value learning. The modified AIXI agent is considered in a multi-agent environment, where the other agents may be humans or other ‘mature’ agents whose values the ‘infant’ AGI agent should reveal and adopt. A general framework for designing such an empathic agent with ethical bias is likewise proposed as an extension of the universal intelligence model. Finally, experiments in a simple Markov environment demonstrate the feasibility of our approach to value learning for safe AGI.
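
For context, the standard AIXI agent (Hutter's universal agent, which the abstract extends) chooses actions to maximise expected cumulative reward under a Solomonoff-style mixture over computable environments:

a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl(r_k + \cdots + r_m\bigr) \sum_{q : U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is a universal Turing machine, q ranges over environment programs of length \ell(q), and m is the planning horizon. Read schematically, the abstract's proposal replaces the raw reward sum with a learned value over generalised states,

r_k + \cdots + r_m \;\longrightarrow\; \sum_{t=k}^{m} V(s_t),

where s_t is a hierarchically represented (‘generalised’) state and V is learned, with rewards used only to bootstrap V. This substitution is a schematic reading of the abstract, not the paper's exact formulation.

The toy sketch below illustrates the ‘reveal and adopt values’ loop in a simple Markov environment of the kind the abstract mentions. The chain environment, the agent names, and the visitation-frequency value estimator are all illustrative assumptions; the paper itself works with AIXI-style universal induction rather than visit counting.

import numpy as np

# Toy 5-state chain environment. A 'mature' agent acts on hidden state
# values; an 'infant' agent estimates those values from the mature
# agent's visitation frequencies and adopts them. All names and the
# frequency-based estimator are illustrative assumptions, not the
# authors' actual algorithm.

N_STATES = 5
TRUE_VALUES = np.array([0.0, 0.2, 0.5, 0.8, 1.0])  # hidden from the infant
rng = np.random.default_rng(0)

def neighbours(state):
    return max(state - 1, 0), min(state + 1, N_STATES - 1)

def mature_policy(state):
    # Mostly greedy toward the higher-valued neighbour, with some
    # exploration so every state is occasionally visited.
    left, right = neighbours(state)
    if rng.random() < 0.3:
        return int(rng.choice([left, right]))
    return right if TRUE_VALUES[right] >= TRUE_VALUES[left] else left

# Phase 1: the infant observes the mature agent and records visit counts.
visits = np.zeros(N_STATES)
state = 0
for _ in range(5000):
    state = mature_policy(state)
    visits[state] += 1

adopted_values = visits / visits.max()  # crude surrogate for V(s)

def infant_policy(state):
    # Phase 2: the infant acts greedily on its adopted value estimate.
    left, right = neighbours(state)
    return right if adopted_values[right] >= adopted_values[left] else left

print("adopted values:", np.round(adopted_values, 2))

Here the ‘infant’ first watches the ‘mature’ agent, builds a surrogate value function over states, and then acts on the adopted values rather than on any reward signal of its own, mirroring the bootstrap-only role that rewards play in the proposed AIXI extension.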

Keywords: AIXI; empathy; multi-agent environment; representations; safe AGI

Document Type: Research Article

Affiliations: AIDEUS, St. Petersburg, Russia

Publication date: July 3, 2014
