California’s Chatbot Laws Confront The Risks of Friendly AI

Artificial intelligence-powered assistants, companions and chatbots are quickly becoming the norm online. These chatbots mimic human interactions as much as possible to make users feel comfortable and to promote efficiency. Unfortunately, sometimes this friendliness goes too far, and users uncritically follow the chatbots’ advice or even fall in love with them. This risk is especially acute for those working through mental health struggles. California’s companion chatbot law, SB 243, aims to address some of these risks to vulnerable populations.

Which Chatbots and Digital Assistants Are Subject to the Law? 

The law regulates AI chatbots, usually large language models, that provide adaptive, “human-like responses” and interact socially or emotionally with users, including through anthropomorphic features and sustained relational interactions. For example, chatbots that “remember” users across multiple interactions and provide human-like responses are more likely to be subject to the law.

The law does provide several exemptions:

  • Business utility/customer service chatbots: This includes, for example, an automated customer service bot equipped with preloaded responses that does not “remember” a particular user. 
  • AI-powered consumer devices/“smart” home devices: Stand-alone, voice-controlled AI-powered consumer devices such as home management devices, including smart refrigerators and standard digital assistants, are exempt.
  • Video game characters: Nonplayer characters, bots and avatars within video games that are not capable of discussing sexually explicit topics or mental health issues or interacting with a user outside of the video game are exempt. 

When Capabilities Outstrip Purpose

In some cases, an AI-powered chatbot may have capabilities beyond its original intended use case. For example, it may be able to remember users but not use that memory in specific interactions, or it may be capable of very personal interactions but be deployed primarily to help shoppers locate items. The mere ability to act as a companion chatbot can create concerns, even if the operator never intends to use that ability. The law will likely treat such AI as companion chatbots unless they clearly fit within an exemption.

Again, companies typically design chatbots to mimic human interaction as closely as possible. As AI fuels rapid capability expansion, the number of chatbots subject to the law will likely increase accordingly. Designers and deployers of AI-powered products will therefore need to guard against over-designing chatbots with functionality beyond their intended purposes, and as a chatbot receives additional real-world or other training, its growing capabilities should be monitored.

Requirements and Compliance

The law involves several ongoing compliance components: 

  • Notice and transparency: Companion chatbots must explicitly inform users that they are NOT interacting with a human.
  • Safety protocols: Operators must maintain safety protocols that prevent the chatbot from engaging in interactions that could promote suicidal ideation, suicide or self-harm. The law does not prescribe specific protocols, but at a minimum they must interrupt or otherwise restrict the chatbot’s ability to engage with suicidal ideation or self-harm topics and provide a conspicuous referral to crisis resources if a user attempts to engage the chatbot on such topics (a minimal illustration follows this list). The protocols cannot be aspirational or ambiguous, since the law requires operators to post them on their websites.
  • Mandatory reporting: Operators of companion chatbots must report annually to California’s Office of Suicide Prevention the number of times they issued a suicide or self-harm referral; a description of the safety protocols implemented to detect, remove and respond to users’ suicidal ideation; and an accounting of the protocols that prohibit the chatbot from responding to or engaging with suicidal ideation or self-harm topics.
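
To make the notice and safety-protocol duties concrete, here is a minimal sketch in Python of how an operator might wire them into a chat pipeline, with the referral log doubling as the record for mandatory reporting. Everything in it is hypothetical: the keyword check stands in for a real classifier, and the disclosure and referral text are stand-ins, not statutory language.

    # A minimal, hypothetical compliance wrapper around a companion chatbot.
    # The keyword check stands in for a real classifier; the law does not
    # prescribe any particular detection method, referral text or log format.
    from datetime import datetime, timezone

    AI_DISCLOSURE = "You are chatting with an AI, not a human."
    CRISIS_REFERRAL = ("If you are thinking about suicide or self-harm, help is "
                       "available: call or text 988 (Suicide & Crisis Lifeline).")

    referral_log = []  # reportable element: one entry per referral issued

    def flags_self_harm(text: str) -> bool:
        # Stand-in for a trained classifier or vendor-supplied safety filter.
        keywords = ("suicide", "self-harm", "kill myself")
        return any(k in text.lower() for k in keywords)

    def start_session() -> str:
        # Notice and transparency: disclose up front that the user is
        # talking to an AI, not a human.
        return AI_DISCLOSURE

    def respond(user_id: str, message: str, chat_model) -> str:
        if flags_self_harm(message):
            # Interrupt the interaction, refer the user to crisis resources
            # and log the event for the annual report.
            referral_log.append((user_id, datetime.now(timezone.utc)))
            return CRISIS_REFERRAL
        return chat_model(message)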

Since the law applies to any companion chatbot operator (whether the chatbot is internally developed or not), operators should implement logging and recordkeeping processes for the reportable elements and work with designers, including vendors, to confirm that appropriate notices are provided and safety protocols deployed. Companies that utilize such chatbots may face liability under the law even if they did not design the chatbot.
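
As one illustration of such recordkeeping, a simple structure like the following could aggregate referral events into the annual filing. It is a sketch in the same hypothetical vein as above; the field names are ours, not the statute’s.

    # Illustrative shape for the annual report to the Office of Suicide
    # Prevention; the text fields would describe the operator's real protocols.
    from dataclasses import dataclass

    @dataclass
    class AnnualSafetyReport:
        year: int
        crisis_referrals_issued: int   # count of referral events
        detection_protocols: str       # how ideation is detected and handled
        engagement_prohibitions: str   # protocols barring engagement with the topic

    def build_report(year: int, referral_log: list) -> AnnualSafetyReport:
        # referral_log entries are (user_id, timestamp) pairs, as in the
        # sketch above.
        issued = sum(1 for _, ts in referral_log if ts.year == year)
        return AnnualSafetyReport(
            year=year,
            crisis_referrals_issued=issued,
            detection_protocols="Classifier-triggered interrupt with crisis referral.",
            engagement_prohibitions="Chatbot declines to continue self-harm topics.",
        )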

Special Requirements When Interacting with Minors

Not surprisingly, additional compliance obligations apply for companion chatbot interactions with known minors. Companion chatbots must, if applicable, prominently disclose that the chatbot may not be suitable for minors. If interacting with a known minor, the chatbot must present clear and conspicuous alerts every three hours reminding the user that they are interacting with AI, not a human, and suggest the user take a break. Additionally, the chatbot must deploy measures that prevent it from producing sexually explicit material or directly instructing a minor to engage in sexual conduct.
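
A sketch of the three-hour reminder, assuming simple per-user session bookkeeping: the interval comes from the statute, but everything else here is illustrative.

    # Hypothetical session bookkeeping for known minors: alert every three
    # hours that the user is talking to an AI and suggest a break.
    import time

    REMINDER_INTERVAL = 3 * 60 * 60  # three hours, per the statute
    BREAK_ALERT = ("Reminder: you are chatting with an AI, not a person. "
                   "Consider taking a break.")

    last_alert: dict[str, float] = {}  # user_id -> last alert (or session start)

    def maybe_alert_minor(user_id: str, is_known_minor: bool) -> str | None:
        if not is_known_minor:
            return None
        now = time.time()
        started = last_alert.setdefault(user_id, now)  # first call starts the clock
        if now - started >= REMINDER_INTERVAL:
            last_alert[user_id] = now
            return BREAK_ALERT
        return None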

Potential Liability for Failure to Comply: The Private Right of Action

The law permits individuals to sue operators of companion chatbots for any violation, even without showing actual harm. In addition to injunctive relief (e.g., requiring removal of a chatbot), financial penalties can include up to $1,000 per violation plus attorneys’ fees. What constitutes a single violation is unclear but could be interpreted as each chatbot used, each individual using a chatbot, and/or each visit to a website or app that utilizes a chatbot. In short, the law could prove a significant source of class action litigation and operator liability. 

Effective Dates

  • January 1, 2026: Most requirements take effect, including the notice and transparency requirements and mandatory safety protocols.
  • July 1, 2027: Reporting requirements become effective.

What Operators Must Do Now

If your company has deployed or might deploy a chatbot in the near future, implement steps to evaluate its full capabilities. If those capabilities include human-like interactions that do not fit neatly into an exemption, further investigation and compliance with the law may be required. This evaluation is not a one-and-done process, however; it requires the ability to track evolving chatbot capabilities, including those offered by a vendor.
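
One lightweight way to operationalize that evaluation is a recurring capability audit. The sketch below is our construction, not anything the law prescribes; the questions loosely track the statute’s triggers and exemptions.

    # A hypothetical recurring capability audit, re-run whenever the model,
    # training data or vendor changes; the checklist is illustrative only.
    def likely_in_scope(answers: dict[str, bool]) -> bool:
        companion_like = (answers["remembers_users_across_sessions"]
                          or answers["humanlike_adaptive_responses"])
        # Conservative reading: companion-like capability without a clean
        # exemption suggests the law applies, even if the capability is unused.
        return companion_like and not answers["fits_customer_service_exemption"]

    audit = {
        "remembers_users_across_sessions": True,   # sustained relational interaction
        "humanlike_adaptive_responses": True,      # anthropomorphic features
        "fits_customer_service_exemption": False,  # preloaded responses, no memory
    }
    print(likely_in_scope(audit))  # True: treat as a companion chatbot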

While the federal government is considering preempting state regulation of AI, recent tragic events involving AI-powered chatbots and suicidal ideation suggest that regulation of AI-powered companion chatbots will continue to grow, so don’t be surprised if this law is just the first of many. Creating a process for review and approval of new AI is therefore a must-have.

By Roy Wyman and Joelle L. Hupp

Roy Wyman is a member and Joelle L. Hupp is an associate at Bass, Berry & Sims PLC. Roy brings 30 years of experience advising commercial entities on complex data privacy, cybersecurity and regulatory matters, including HIPAA, GDPR, TCPA and CCPA compliance. Joelle counsels clients on information governance, data security and domestic and international privacy regulations, including GDPR and evolving U.S. state privacy laws.
