Playing with the “Judge, Jury and Executioner”: Examining the Ethics of AI Moderation in Multiplayer Games

This project received funding through CAIDE's 2023 seed funding round, 'Automated Expertise.'


Overview

Large gaming companies are increasingly turning to AI moderation tools such as ToxMod (Connell, 2023) and Bodyguard.ai (Bradley, 2023) to identify and respond to instances of player toxicity in multiplayer games (Riot Games, 2022). From an industry standpoint, “proactive” tools like these help identify and mitigate online harms (Lewington, 2021) and ease the burden of manual human moderation, which can be costly, time-consuming, insufficient and emotionally taxing (Kerr & Kelleher, 2015).

However, these tools can record in-game text and voice data, raising ethical concerns about the accuracy, reach and appropriateness of AI moderation in games alongside or instead of human moderator expertise (Reid et al., 2022). Furthermore, how AI moderation tools work in games remains opaque, prompting many players to express frustration, confusion and concern at automated punishments in response to their behaviours (Kou & Gui, 2017, 2020).

These pressing ethical concerns remain largely unexamined in academic research, leaving our understanding of these emerging issues underdeveloped. This timely project aims to address that gap by answering the following research questions:

  1. How do players, human moderators, and AI moderation companies understand the role of AI in moderating behaviours in multiplayer games?
  2. What are the key ethical issues facing the use of AI moderation in multiplayer games?

To answer these questions, we will adopt a qualitative, participatory approach. In phase one, we will conduct 20 individual interviews with players, human moderators, and AI moderation developers to examine participants’ understandings of AI moderation tools. These interviews will be coded in NVivo using reflexive thematic analysis (Braun & Clarke, 2021) and supplemented with content analysis of public-facing documents from AI moderation companies (see Grace et al., 2022).

In phase two, we will build on the findings from these interviews and related research on community-focused AI and moderation (Kerr & Kelleher, 2020; Reid et al., 2022; Seering et al., 2022; Xiao et al., 2022) to conduct 3-4 participatory co-design workshops with 15-20 interviewees, industry experts and researchers. These workshops will explore alternative uses of AI in online game moderation and will inform a framework of ethical considerations for future use of these tools in community management.

This project is novel in its focus on ethical AI moderation in games from the perspective of actual users and professionals. By bringing together various relevant stakeholders, its findings will have an impact on both AI research and industry practice, opening avenues for further work on ethical AI moderation and expertise in the digital age.

Research Team

  • Dr Lucy Sparrow

    Associate Lecturer

    School of Computing and Information Systems

    University of Melbourne

  • Dr Mahli-Ann Butt

    School of Culture and Communication

    University of Melbourne

  • Caitlin Galwey

    Juris Doctor Student

    Melbourne Law School

    University of Melbourne