This article was first published in March 2023.
Technology has the potential to transform the delivery of therapy (consider the expansion of video-conference therapy during the pandemic). Artificial intelligence chatbots can generate detailed, coherent responses tailored to specific queries, drawing on vast amounts of source data. They could, for example, be asked to write treatment plans or session notes based on brief prompts.
While chatbots offer some opportunities for efficiency, their use also carries several risks. These include the potential for inaccurate information in outputs and the absence of clinical or ethical judgment in generating results.
Registrants should exercise caution when integrating new technologies into their practice, as responsibility for meeting standards of practice ultimately remains with the Registered Psychotherapist (RP). Specific areas of concern relating to AI chatbots are as follows.
Confidentiality and consent
Registrants are responsible for ensuring that any platform they use to store or process client information complies with relevant privacy legislation (Standard 3.1 – Confidentiality). Registrants should familiarize themselves with the terms of use of any information service. In the absence of a secure, confidential platform, registrants must refrain from inputting any information that could identify a client without the client's informed consent (Standard 3.2 – Consent; Standard 3.4 – Electronic Practice).
Competence, accuracy, and bias
Registrants must not take for granted that information produced by a chatbot is factually accurate or clinically appropriate. Rather, registrants must verify the information using their own knowledge, skill, and judgment, pursuing self-study, consultation, or supervision as appropriate (Standard 2.1 – Seeking Consultation, Clinical Supervision, and Referral).
As RPs will be held accountable for treatment decisions regardless of the technology used, it is important that they hold the competencies needed to review AI-generated output, and that they not use the technology as a foundation for expanding into new practice areas without appropriate education or supervision (Standard 2.1 – Seeking Consultation, Clinical Supervision, and Referral).
Similarly, AI-generated responses can be biased, as they depend on the data and algorithms developers used to produce them. Again, registrants must critically review any information before relying on it.
Registrants can address some of these risks by treating AI chatbots as a general tool for gathering and synthesizing non-identifying information, rather than relying on the service in specific client cases.