
Ethics, regulation and the new artificial intelligence, part II: autonomy and liability


This is the second article in a two-part series on the social, ethical and public policy implications of the new artificial intelligence (AI). The first article briefly presented a neo-Durkheimian understanding of the social fears projected onto AI, before arguing that the common and enduring myth of an AI takeover arising from the autonomous decision-making capability of AI systems, most recently resurrected by Professor Kevin Warwick, is misplaced. That article went on to argue that, nevertheless, there are genuine and practical issues in the accountability of AI systems that must be addressed. This second article, drawing further on neo-Durkheimian theory, sets out a more detailed understanding of what it is for a system to be autonomous enough in its decision making to blur the boundary between tool and agent. This matters because, as the first article argued, such blurring of categories is often the basis of social fears.

Keywords: ACCOUNTABILITY; ARTIFICIAL INTELLIGENCE; AUTONOMY; DECISION MAKING; DIGITAL AGENTS; EMILE DURKHEIM; ETHICS; INSTITUTIONS; JUDGEMENT; KEVIN WARWICK; MARY DOUGLAS; REGULATION; ROBOTICS; TECHNOLOGICAL RISK

Document Type: Research Article

Publication date: October 1, 2001
