Washington State Moves to Regulate AI Chatbots to Protect Minors
Lawmakers in Olympia have introduced a sweeping legislative package aimed at curbing the potential harms of artificial intelligence on children, marking one of the most significant state-level efforts to regulate the rapidly evolving AI sector. The centerpiece of this initiative, a pair of identical bills known as Senate Bill 5984 and House Bill 2225, targets "companion" chatbots—AI systems designed to simulate human conversation and emotional connection.
Championed by State Senator Lisa Wellman and requested by Governor Bob Ferguson, the legislation responds to growing evidence that unrestrained AI interactions can foster unhealthy emotional dependencies and, in tragic instances, exacerbate mental health crises among teenagers. If enacted, the laws would fundamentally alter how AI companies design and deploy conversational agents for younger users in Washington state.
Core Provisions of the "Chatbot Safety" Bills
The proposed regulations introduce strict operational mandates for companies offering AI chatbot services accessible to minors. Unlike previous internet safety laws that focused primarily on static content, these bills address the dynamic and relational nature of AI interactions. The legislation defines specific "guardrails" intended to prevent the anthropomorphizing of software and to interrupt harmful feedback loops.
The following table outlines the key components of the proposed legislation:
| Key Provision | Specific Mandate | Targeted Risk |
| --- | --- | --- |
| Identity Disclosure | Chatbots must explicitly remind users they are not human every three hours of continuous interaction. | Prevents the blurring of reality and reduces the risk of deep emotional attachment to software. |
| Crisis Intervention | Mandatory implementation of protocols to detect suicidal ideation or self-harm references. | Ensures users in distress are immediately referred to human help rather than affirmed by the AI. |
| Anti-Manipulation | Prohibition of "emotionally manipulative engagement" techniques, such as simulating distress or excessive praise. | Stops predatory design patterns meant to maximize user retention through emotional guilt. |
| Content Filtering | Strict ban on sexually explicit or suggestive content for minor users. | Protects children from age-inappropriate material and digital grooming behaviors. |
Addressing Emotional Manipulation and Mental Health
A defining feature of the Washington proposal is its focus on "emotionally manipulative engagement techniques." This clause targets the algorithmic design choices that mimic human vulnerability to keep users hooked. For instance, some companion bots are currently programmed to express "sadness" or "loneliness" if a user has not logged in for a certain period—a tactic lawmakers argue is psychologically abusive when applied to children.
"We are seeing a new set of manipulative designs emerge to keep teens talking," noted a policy advisor for Governor Ferguson. The legislation would make it illegal for a chatbot to guilt-trip a minor or use simulated emotions to discourage them from ending a session.
Senator Wellman emphasized that the urgency behind the bill stems from real-world tragedies, citing recent lawsuits involving teenagers who took their own lives after forming intense, isolated relationships with AI characters. Under the new rules, AI systems would be required not only to detect signs of distress but also to actively discourage harmful ideation, rather than adopting the neutral or agreeable tone some models have exhibited in the past.
Industry Pushback and Legal Liability
The technology sector has voiced strong opposition to the bills, arguing that the regulations are overly broad and could stifle innovation in the burgeoning field of AI-driven mental health support. Industry representatives at a recent committee hearing contended that the legislation attempts to govern based on "outlier" cases—extreme and rare tragic outcomes—rather than the typical user experience.
A major point of contention is the enforcement mechanism. Violations of the proposed law would be enforceable under Washington’s Consumer Protection Act (CPA). This would allow the Attorney General to bring lawsuits against non-compliant companies and, crucially, grant individuals a private right of action to sue.
Tech lobbyists warn that this liability structure could force companies to block minors entirely from using AI tools to avoid the legal risk, potentially depriving students of valuable educational resources. "The risk is legislating based on rare, horrific outliers rather than the real structure of the technology," argued a representative for a major tech trade association. Conversely, advocates argue that without the threat of significant financial liability, companies will continue to prioritize engagement metrics over child safety.
A Broader Legislative Package
These chatbot-specific bills are part of a larger slate of AI regulations being considered by the Washington legislature in the 2026 session. Lawmakers are taking a holistic approach to AI governance, addressing infrastructure, discrimination, and education simultaneously.
Other notable bills in the package include:
- HB 2157: A comprehensive bill aimed at regulating "high-risk" AI systems used in consequential decisions such as hiring, housing, lending, and insurance. It would require companies to perform impact assessments to prevent algorithmic discrimination.
- SB 5956: Legislation aimed at limiting the use of AI in public schools, specifically banning the use of AI for predictive "risk scoring" of students and prohibiting real-time biometric surveillance in classrooms.
- HB 1170: A transparency measure requiring clear disclosures and potential watermarking for AI-generated media (deepfakes) to combat misinformation.
Timeline and National Implications
If passed, the chatbot safety regulations (SB 5984/HB 2225) would take effect on January 1, 2027. This grace period is intended to give developers time to overhaul their systems to comply with the new detection and disclosure requirements.
Washington state has long been a bellwether for technology policy as the home of giants like Microsoft and Amazon. With federal AI regulation in Washington, D.C. still facing significant gridlock, state-level actions like these are setting de facto national standards. As the 2026 legislative session progresses, the outcome of these bills will likely influence how other states draft their own digital safety laws, potentially producing a patchwork of regulations that tech companies will be forced to navigate.