When AI chatbots do work that humans need a license to perform, like mental health therapy, it brings both opportunities and potential risks.
That was top of mind when Utah’s Office of Artificial Intelligence Policy launched a year ago.
The new office set out to protect the public, encourage innovation and learn from fast-changing AI technology, said Margaret Woolley Busse, executive director of the Utah Department of Commerce, who created the office.
Its first priority: zeroing in on where artificial intelligence meets mental health treatment.
So far, the office has pushed for state-level regulation of mental health chatbots, developed best practices for AI mental health products and offered guidance to mental health professionals.
In June, LaShawn Williams, a licensed clinical social worker and president of the Mormon Mental Health Association, received a letter about best practices for using AI in mental health care from the Office of AI Policy and the Division of Professional Licensing.
It mirrored what she and other providers already knew, she said, but she was still pleased to see it.
“There has been a lot of conversation, and maybe some trepidation, about AI and mental health and AI in mental health as a resource,” Williams said. “So I'm glad to see that we are talking about it and checking in with the fear of over-reliance, either as professionals or as consumers of mental health content.”
If therapists become too reliant on AI for developing treatment plans or stop trusting their instincts, that would be an issue, she said.
But the technology can be an asset. Clients sometimes use chatbots to analyze a text message, for instance, and Williams asks them to bring the response to the session to process their emotions.
“I think it can go a long way when ethically and effectively used to empower our clients as a tool and a resource outside of sessions,” she said.
But AI platforms shouldn’t replace working with a therapist altogether, she said.
“I have Gatorades in my office to go walk and talk with clients outside, because as a therapist, you've got to be able to attune to what clients need and offer that in the moment. That's not something that AI is able to do — yet,” she said.
Keeping human therapists in the loop was important, said Zach Boyd, director of the Office of Artificial Intelligence Policy.
He said they provided guidance to licensed professionals, “who had many questions about where they were allowed to experiment and try new techniques, and where we, as the state, considered something to be crossing an ethical line.”
The office also wanted to get regulations on the books.
To guide mental health chatbots toward benefiting Utahns rather than harming them, it recommended requirements for AI developers and consumer protections. Those became law in HB452.
Since there are no standards for an AI product to be licensed the way a human therapist is, the law lays out best practices. An AI mental health product cannot pose a greater risk to the user than a human therapist, and developers must involve therapists in building the chatbot.
At the same time, another task for the state, Woolley Busse said, is making sure regulations don’t get in the way of innovation.
“There's also places where research and progress could be limited because of the current regulations we have in place that never contemplated products or technologies like generative AI,” she said.
That’s why the state has a regulatory agreement with AI companion ElizaChat, which is specifically designed not to overlap with the work of a licensed professional, Boyd said.
“They were just concerned [that] because of the new nature of the technology, it might accidentally go outside of the box that they designed it to stay in,” he said. “And so we agreed with them in exchange for data sharing and committing to a safety plan that we felt comfortable with, that we would give them 30 days to self-cure any problems that may arise from the bot going outside of its domain.”
Going forward, the office has its eye on deepfakes, or AI-generated videos. Some are jokes, Woolley Busse said, but the problem arises when people create videos with the intent to deceive. So they’re looking at ways for well-intentioned users to identify their content as authentic, because the technology to detect bad actors is still in its early stages.
“Lying is a really old technology, and it doesn't seem like it's going to go away anytime soon,” Boyd said.
The office can’t handle everything, Woolley Busse said, but it’s designed to tackle issues as they come. Down the line, they may look at AI in education and health care more broadly.
The challenge, according to Boyd, is keeping up with the pace of change.
Macy Lipkin is a Report for America corps member who reports for KUER in northern Utah.