
When communicative AIs are cooperative actors: a prisoner’s dilemma experiment on human–communicative artificial intelligence cooperation
This study examined the possibility of cooperation between humans and communicative artificial intelligence (AI) through a prisoner’s dilemma experiment. A 2 (AI vs. human partner) × 2 (cooperative vs. non-cooperative partner) between-subjects design was employed across six trials: each participant played the strategy game with either a cooperative AI, a non-cooperative AI, a cooperative human, or a non-cooperative human partner. Results showed that when partners (both communicative AI and human) proposed cooperation on the first trial, 80% to 90% of participants also cooperated, and more than 75% kept the promise and decided to cooperate. About 60% to 80% of participants proposed, committed to, and decided on cooperation when their partner proposed and honored the commitment to cooperate across trials, regardless of whether that partner was a cooperative human or a communicative AI. Overall, participants were more likely to commit and cooperate with cooperative AI partners than with non-cooperative AI and human partners.
Keywords: Artificial intelligence; computers are social actors; cooperation; human–AI interaction; human–machine communication; social dilemmas
Document Type: Research Article
Affiliations: Department of Interactive Media, School of Communication, Hong Kong Baptist University, Hong Kong, Hong Kong
Publication date: October 3, 2023