California Senate passes bill to make AI chatbots safer
California lawmakers moved a step closer Tuesday to placing more guardrails around AI-powered chatbots.
The state Senate passed a bill that aims to make companion chatbots safer amid parents' fears that virtual characters could harm their children's mental health.
The legislation now heads to the California State Assembly, underscoring how state lawmakers are tackling AI safety concerns as tech companies release more AI-powered tools.
“The country is looking to California for leadership again,” said Sen. Steve Padilla (D-Chula Vista), one of the lawmakers who introduced the bill.
At the same time, lawmakers are trying to balance safety concerns against worries that regulation could hinder innovation. Groups opposing the bill, such as the Electronic Frontier Foundation, say the legislation is too broad and raises free speech issues, according to a Senate analysis of the bill.
Under Senate Bill 243, operators of companion chatbot platforms would have to remind users at least every three hours that the virtual characters aren't human. They would also have to disclose that companion chatbots might not be suitable for some minors.
The platforms would need to take other steps as well, such as implementing a protocol for addressing suicidal ideation, suicide, or self-harm expressed by users. That includes showing users suicide prevention resources.
Suicide prevention and crisis counseling resources
If you or someone you know is struggling with suicidal thoughts, seek help from a professional and call 9-8-8. The United States' first nationwide three-digit mental health crisis hotline connects callers with trained mental health counselors. In the U.S. and Canada, text 741741 to reach the Crisis Text Line.
Operators of these platforms would also have to report the number of times a companion chatbot raised suicidal ideation or actions with a user, along with meeting other requirements.
Dr. Akilah Weber Pierson, one of the bill's co-authors, said she supports innovation but that it must also come with “moral responsibility.” Chatbots, the senator said, are engineered to hold people's attention, including that of children.
“It's very worrying when kids start to prefer interacting with AI over real human relationships,” said Sen. Weber Pierson (D-La Mesa).
The bill defines companion chatbots as AI systems that can meet users' social needs. It does not include chatbots that businesses use for customer service.
The legislation gained support from parents who lost their children after the teens began chatting with chatbots. One of those parents is Megan Garcia, a Florida mom who sued Google and Character.AI after her son, Sewell Setzer III, died by suicide last year.
In the lawsuit, she alleged that the platform's chatbots harmed her son's mental health and that the company failed to notify her or offer help when he expressed suicidal thoughts to the virtual characters.
Character.AI, based in Menlo Park, California, operates a platform where people can create and interact with digital characters that mimic real and fictional people. The company has said it takes teen safety seriously and has rolled out a feature that gives parents more information about how much time their children spend with chatbots on the platform.
Character.AI asked a federal court to dismiss the lawsuit, but a federal judge in May allowed the case to proceed.