Artificial Intelligence (AI) is rapidly becoming a key part of most people’s everyday lives, often without them even realizing it. While only 33% of people think of themselves as AI users, research on usage trends shows that the figure is actually more like 77%.
Beyond the obvious instances of AI use (think ChatGPT, interactive search tools, and image generators), AI also powers navigation apps, autocorrect and text suggestions, and, of course, the selection of the adverts we all see as we consume digital media or even just walk past a billboard. Increasingly, the efficiency and low cost at which AI models can operate have made them an attractive option in the healthcare industry, too.
As these models become more and more sophisticated, it becomes harder and harder to justify the expense of employing human beings to care for their fellow humans. But what are the ethical implications of such a decision in the sphere of speech therapy? This article outlines the potential benefits of AI in speech therapy, the issues it raises, and what we can do about them.
How is AI useful for speech therapy?
There are myriad potentially useful applications for AI in speech therapy, although not all of them have been fully developed yet. For example, automatic speech recognition (ASR) software, in combination with other forms of AI, might allow speech-language pathologists (SLPs) to make notes and auto-fill forms as they go, freeing them to spend more time with patients.
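As a purely illustrative sketch of the auto-fill idea (not any existing clinical product, and with all field and keyword names invented for the example), pre-filling a session form from an ASR transcript could be as simple as flagging the fields whose keywords the transcript mentions:

```python
import re

# Hypothetical form fields and keywords an ASR transcript might contain.
FIELD_KEYWORDS = {
    "articulation": ["articulation", "phoneme", "lisp"],
    "fluency": ["stutter", "fluency"],
    "swallowing": ["swallow", "dysphagia"],
}

def prefill_form(transcript: str) -> dict:
    """Suggest which form fields a session transcript touches on.

    The SLP reviews and corrects these suggestions; the tool is a
    time-saver, not an autonomous documenter.
    """
    text = transcript.lower()
    return {
        field: any(re.search(r"\b" + kw, text) for kw in keywords)
        for field, keywords in FIELD_KEYWORDS.items()
    }

notes = "Patient showed improved articulation; no swallowing difficulty observed."
print(prefill_form(notes))
# Note the false positive: "no swallowing difficulty" still flags the
# swallowing field, which is exactly why a clinician must stay in the loop.
```

Even this toy version shows why such a system belongs in the "informed second opinion" role the article describes: naive keyword matching cannot distinguish "swallowing difficulty" from "no swallowing difficulty".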
There are arguments, too, that introducing AI to the diagnostic process might result in more objective assessment (or, at the very least, an informed second opinion, if used correctly) and potentially personalized treatment. Indeed, AI-based technologies are currently being developed that could allow voice preservation and enhanced speech synthesis for patients whose condition is deteriorating and who need such drastic steps.
What are the issues with it?
So far, so good. However, much attention has recently been drawn to a plethora of possible issues with the use (or at least the over-use) of AI in the field of speech pathology. On a fundamental level, by the very nature of how most AI models are constructed, there is the potential for deep and possibly harmful bias in their output. AI is trained on data, so the selection of that training data shapes the conclusions it draws. Remember, it is not “intelligent” in the same sense that we are (most of us, anyway).
On a related note, under most models the patient’s data (much of which is private and highly sensitive) would become part of the AI system’s dataset, or at least be processed by the model in a way that requires its computation by a third party. These systems are not always secure, data leaks are a very real risk, and the storage and processing of this data come at a huge environmental cost. Most sickening of all, AI is being employed by “healthcare providers” to assess claims and manage, rather than aid, patients’ treatment, often entirely incorrectly and with an “efficiency” bordering on total contempt for human experience.
How do we resolve them?
A guiding principle, perhaps, for the ethical introduction of AI into the field of speech pathology and therapy might be to ask ourselves, meaningfully: what does a speech-language pathologist do? Well, in a healthcare system functioning properly, they provide empathetic, effective interpersonal diagnosis and treatment for people affected by articulation disorders, swallowing difficulties, and other developmental issues. To ensure that AI is incorporated into this vision ethically and effectively, we should keep these tenets in focus.
Naturally, we must ensure that AI models consistently deliver reliable results before even considering including them at any stage of the treatment process. Likewise, we should be entirely satisfied that the safeguards in place against data protection breaches meet not just the minimum legal standard but a truly safe, secure one.
Crucially, though, AI must remain (even when fully capable and successfully implemented) a tool of SLPs rather than a replacement for any of their human functions, whether that is building meaningful interpersonal medical relationships with their patients or making decisions about the overall direction of their care.