Under what circumstances might AI systems have moral standing: when might they have rights or other morally relevant attributes that will constrain how we should treat them? Current approaches to this question assume either that AIs will have a special (dilute) form of moral standing that does not resemble human rights, or that they will acquire moral rights resembling those of human beings only after they pass an ill-defined and technically difficult watershed, such as the acquisition of phenomenal consciousness. This paper argues that there is another, more tractable, standard, according to which AI systems will arrive at moral standing unambiguously and quite soon: this will happen when they satisfy the criteria for selfhood, as these criteria are applied to human beings. I consider the four main theories of personal identity -- psychological continuity theories, bodily theories, narrative/hermeneutical theories, and non-identity theories -- and show that any one of these plausibly will apply to near-future AIs. I further argue that constituting a self ipso facto bestows some form of moral standing, and propose a research program for understanding the consequences for how we morally should treat near-future AI systems.
Document Type: Research Article
Affiliations: Department of Philosophy, University of Guelph, Canada. Email: [email protected]
Publication date: January 1, 2020