As large language models continue to reshape professional domains, it's critical to address ethics - especially in clinical and accessibility work, and doubly so for someone building (sometimes hesitantly) such tools. As a developer, a speech pathologist, and a stakeholder in other ways, I've been thinking about this a lot. I want to outline the perspectives and guidelines to which I hold myself - and which I propose others adopt - regarding the responsible use of the tools I've developed.
This is particularly important as I begin to broadly release such tools, and I am very conscious of the irony in that statement. I hope that you will allow me to borrow a moment of your time.
Language Models as Pointer Dogs
Because many of those coming here are educators, clinicians, and assistive technology professionals, I will briefly state the obvious: tools built on generative language models cannot grasp the complex needs of a patient or student the way a human can. Not by a long shot, really. Their outputs are assembled from data and algorithms that, by their nature, are not accurate, comprehensive, or appropriate for every situation. While I offer tools to help with things like funding and billing, I discourage their use for curriculum or other human-centric work.
Always consult your professional training and judgment when using these tools. Think of them as pointer dogs, not sharpshooters—they can show you where to look, but it's up to you to get it right.
In accessibility, however, the immediate benefit of language models is overwhelming. I have tools addressing communication, home automation, and general access. One system, for example, is designed to create eye gaze interfaces on the fly - it's not bad, and it's getting better. As a very wise woman once said: "What makes things easier for everyone, makes them possible for others."
Language models offer enormous opportunities, not least of which is the chance to seize the narrative in a way that benefits those who are among the most marginalized. I feel a personal obligation to get it right.
Ethics in Unethical Times
Some professions, such as graphic design and journalism, face far more risk than benefit from these systems. My systems do not engage in generative imagery, which sidesteps two of the most controversial aspects of these applications: the co-opting of human creativity, and the potential generation of fake or harmful visual content. For a field-specific example, I am not at all comfortable with the idea of symbols being generated for communication systems; it would be very easy to lose or distort meaning among vulnerable populations.
The question of water and energy use is also challenging. I have chosen not to engage in highly energy-demanding processes - certainly not generative imagery or long-form writing and journalism. I focus solely on "grounded" language models: those tied directly, and only, to knowledge they receive in real time from sources that I (or, more accurately and happily, users) choose.
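To show what I mean by "grounded" - a minimal sketch, not my actual implementation; the generate() callback and the relevance scoring here are stand-ins for whatever model and retrieval you run - the model is only ever shown passages the user supplied, and it refuses rather than guesses when nothing matches:

```python
# Sketch of "grounding": the model may only draw on passages the user has
# explicitly supplied, and it declines when nothing relevant is found.
# generate() is a placeholder for any text-generation backend, not a real API.

def score(passage: str, query: str) -> int:
    """Crude relevance score: count of words shared with the query."""
    return len(set(passage.lower().split()) & set(query.lower().split()))

def grounded_answer(query: str, sources: list[str], generate) -> str:
    """Answer from user-chosen sources only; refuse rather than guess."""
    relevant = [p for p in sources if score(p, query) > 0]
    if not relevant:
        return "I don't have that in the material you gave me."
    prompt = (
        "Answer using ONLY the passages below. If they don't contain "
        "the answer, say so.\n\n"
        + "\n".join(f"- {p}" for p in relevant)
        + f"\n\nQuestion: {query}"
    )
    return generate(prompt)  # whatever model the user has decided to run
```

The refusal branch is the point: a grounded system's honest answer to an out-of-scope question is "I don't know," not a fluent invention.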
There are other issues to address, such as plagiarism, job loss, and long-term social cost. Large language models took every cultural and institutional bias in our language and baked it into the foundation of future technology, and they are emphatically not "AI" - the name "actually useful ai" is intended as a bit of a jab. To build a model of the mind and expect thought is like building a model of the sea and expecting to get wet.
I acknowledge these challenges and strive to create tools that mitigate these risks as much as possible. I build with targeted, specialized functions in mind, which allows for more efficient use of resources via so-called "mixtures of experts" (MoEs): small, specialized models that sleep or wake to handle specific tasks. None of them, however, writes position statements on the ethics of its own design; so don't worry, as I'm sure someone will, that a language model wrote this statement about itself.
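To make the "sleep or wake" idea concrete, here is a deliberately tiny sketch - illustrative Python, not my production code; the task names and loaders are hypothetical - of routing that loads only the one expert a request needs and evicts idle ones to stay within a resource budget:

```python
# Illustrative sketch of sleep/wake expert routing. Task names and loader
# functions are made up for the example; real experts would be model files.
from typing import Callable

EXPERTS: dict[str, Callable[[], Callable[[str], str]]] = {
    "billing":  lambda: (lambda text: f"[billing model] {text}"),
    "phrasing": lambda: (lambda text: f"[phrasing model] {text}"),
}

_awake: dict[str, Callable[[str], str]] = {}  # experts currently in memory

def route(task: str, text: str, max_awake: int = 1) -> str:
    """Wake the one expert a task needs; let the others sleep."""
    if task not in _awake:
        while len(_awake) >= max_awake:      # evict to stay within budget
            _awake.pop(next(iter(_awake)))   # oldest-loaded expert sleeps
        _awake[task] = EXPERTS[task]()       # "wake" (load) the expert
    return _awake[task](text)
```

The design point is simply that a narrow task never pays for a giant general model: most of the system is asleep most of the time.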
The majority of the models I've trained were, in fact, built from the past communication of adult communicators with degenerative disorders. While hallucination is still possible, I have a handful of controls; you'd be amazed what can be accomplished by bringing someone's entire corpus of communication into the design of a language system - one that can now feasibly run on many users' own computers. The most powerful models I run consume about as much power as an Xbox.
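As one illustration of the kind of control I mean - a sketch under assumptions, with a made-up corpus format and threshold rather than anything from a real user - a candidate utterance can be checked against the person's own vocabulary before it is ever spoken aloud:

```python
# Sketch of one hallucination control: candidate output is checked against
# the user's own corpus of past communication, and anything the person has
# never plausibly said gets flagged for review instead of spoken.
# The corpus structure and 0.9 threshold are illustrative assumptions.

def known_vocabulary(corpus: list[str]) -> set[str]:
    """Every word the user has actually used, lowercased."""
    return {word.lower() for utterance in corpus for word in utterance.split()}

def in_voice(candidate: str, corpus: list[str], threshold: float = 0.9) -> bool:
    """Accept a candidate utterance only if most of its words are the user's."""
    vocab = known_vocabulary(corpus)
    words = candidate.lower().split()
    if not words:
        return False
    coverage = sum(w in vocab for w in words) / len(words)
    return coverage >= threshold
```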
At a very high level, my goal is to reduce the latency between intention and outcome in all domains - which just happens to be the whole point of accessibility. All technology starts as assistive technology, and I can say with some authority, having worked in both language modeling and voice synthesis, that this has been no different.
Riding a Runaway Train
The ethics of language models in education, healthcare, and accessibility are evolving… well, a bit too fast to keep up with, I'm sure you'd agree. I'm committed to building tools with integrity and ethical responsibility. That said, any tool's ethics and safety are a shared responsibility between creators and users. Everyone has the option of using these tools wisely, ethically, legally, and professionally. Unfortunately, everyone also has the option not to.
I invite anyone and everyone to provide feedback and join the ongoing dialogue about ethical practices at this intersection of fields. These assistants are here to augment, never to replace. Always be mindful of their limitations, ensure compliance with professional standards, and rely on your professional knowledge to make final decisions.
But let's change the world a bit, eh?
Disclaimer: Professional Guidance Required
These AI tools are designed for informational and supportive purposes only. They should not be considered a substitute for professional advice in any domain. Always consult certified professionals for guidance in their respective fields.
Healthcare and Clinical Work
Information provided is for support only and should not be construed as medical advice, diagnosis, or treatment. Always seek the advice of licensed medical professionals regarding health conditions and treatment options.
Legal and Professional Standards
While these tools provide data-driven insights, they cannot account for all variables or specific circumstances. Any decisions involving significant risk or professional obligations should be made in consultation with appropriate experts and in compliance with relevant regulations.
For more information on ethical AI practices, please refer to the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.