
Artificial Selves


Under what circumstances might AI systems have moral standing: when might they have rights or other morally relevant attributes that will constrain how we should treat them? Current approaches to this question assume either that AIs will have a special (dilute) form of moral standing that does not resemble human rights, or that they will acquire moral rights resembling those of human beings only after they pass an ill-defined and technically difficult watershed, such as the acquisition of phenomenal consciousness. This paper argues that there is another, more tractable, standard, according to which AI systems will arrive at moral standing, unambiguously and quite soon: this will happen when they satisfy the criteria for selfhood, as these criteria are applied to human beings. I consider the four main theories of personal identity (psychological continuity, bodily theories, narrative/hermeneutical theories, and non-identity theories) and show that any one of these plausibly will apply to near-future AIs. I further argue that constituting a self ipso facto bestows some form of moral standing, and propose a research program for understanding the consequences for how we morally should treat near-future AI systems.

Document Type: Research Article

Affiliations: Department of Philosophy, University of Guelph, Canada. Email: [email protected]

Publication date: January 1, 2020
