Using conversational agents and natural language processing to train and evaluate psychologists

This project was a successful recipient of CAIDE's 2023 'Automated Expertise' seed funding round.


Overview

Psychotherapy students and online chat counselling trainees need numerous practice sessions to develop their skills. Traditionally, these sessions require a colleague, a qualified psychotherapist, or even a trained actor to play the role of the therapy client. The sessions must then be evaluated by a practitioner to judge the student's competence and their fidelity to the relevant techniques/approaches.

Advances in natural language processing (NLP) and the recent prominence of chatbots and large language models raise the possibility of automating both of these functions: client simulation and trainee session evaluation. Before working with real humans, psychotherapy students could first practise and refine their techniques with chatbots that simulate mental health clients, and NLP could assist in evaluating the transcripts of such exchanges for fidelity and general quality.
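By way of illustration only (a hypothetical sketch, not Client101 or this project's evaluation pipeline), simple NLP heuristics could surface counselling micro-skills such as open questions and reflective statements in a session transcript:

    import re
    from collections import Counter

    # Hypothetical surface cues for two counselling micro-skills. A real
    # fidelity measure would rest on validated coding schemes, not regexes.
    OPEN_QUESTION = re.compile(r"\b(how|what|tell me|describe)\b.*\?", re.IGNORECASE)
    REFLECTION = re.compile(r"\b(it sounds like|you feel|you're feeling)\b", re.IGNORECASE)

    def score_transcript(turns):
        """Count crude micro-skill markers in the clinician's turns.

        `turns` is a list of (speaker, utterance) pairs; only turns whose
        speaker is 'clinician' are scored.
        """
        counts = Counter()
        for speaker, utterance in turns:
            if speaker != "clinician":
                continue
            counts["clinician_turns"] += 1
            if OPEN_QUESTION.search(utterance):
                counts["open_questions"] += 1
            if REFLECTION.search(utterance):
                counts["reflections"] += 1
        return counts

    # Toy exchange between a trainee and a simulated client.
    transcript = [
        ("client", "I haven't been sleeping and I can't switch my thoughts off."),
        ("clinician", "It sounds like the worry follows you into the night."),
        ("clinician", "How have the last few weeks been for you at work?"),
        ("client", "Pretty bad, honestly."),
    ]

    print(score_transcript(transcript))
    # Counter({'clinician_turns': 2, 'reflections': 1, 'open_questions': 1})

Any usable measure of fidelity or session quality would, of course, need far richer language understanding than surface patterns like these; the sketch only indicates where automated evaluation could slot into the training workflow.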

Because this idea has little prior history, we have developed our own chatbot, Client101, which can emulate a range of individuals presenting with mental health issues. With a prototype now built, this project aims to test Client101 with a group of psychologists/counsellors and to explore questions concerning the ethics, clinical utility, and human-computer interaction of using such a system for clinical training and evaluation.

Some of the questions arising from this project include:

  • Can a chatbot (such as Client101) based on current NLP technology generate conversations that suffice to realistically simulate a mental health client for training and evaluation purposes?
  • Can there be a therapist-side analogue of the digital therapeutic alliance, together with an accompanying scale, whereby clinicians can rate their therapeutic alliance with a client bot?
  • What is the affective nature of a clinician's interaction with a simulative client chatbot, and how might the ELIZA effect come into play in such scenarios?
  • How might the boundary transgressions seen in traditional psychotherapy/counselling between a human client and a human clinician play out when the client is a chatbot and the clinician a human?
  • Can trainees use such a chatbot appropriately and effectively as an initial training tool without adverse effects, or are there potentially negative consequences to commencing psychotherapy/counselling training by communicating with a client chatbot?

Research Team