
Lawmakers in Olympia have introduced a sweeping legislative package aimed at curbing the potential harms of artificial intelligence to children, marking one of the most significant state-level efforts to regulate the rapidly evolving AI sector. At the center of the initiative is a pair of identical bills, Senate Bill 5984 and House Bill 2225, which target "companion" chatbots: AI systems designed to simulate human conversation and emotional connection.
Sponsored by Senator Lisa Wellman and requested by Governor Bob Ferguson, the legislation responds to mounting evidence that unchecked AI interactions can foster unhealthy emotional dependencies and, in tragic cases, worsen mental health crises among teenagers. If enacted, the laws would fundamentally change how AI companies design and deploy conversational agents for younger users in Washington state.
The proposed regulations introduce strict operational mandates for companies offering AI chatbot services accessible to minors. Unlike earlier internet safety laws that focused primarily on static content, these bills address the dynamic, relational nature of AI interactions. The legislation defines specific guardrails intended to prevent the anthropomorphization of software and to interrupt harmful feedback loops.
The following table outlines the key components of the proposed legislation:
| Key Provision | Specific Mandate | Targeted Risk |
|---|---|---|
| Identity Disclosure | Chatbots must explicitly remind users they are not human every three hours of continuous interaction. | Prevents the blurring of reality and reduces the risk of deep emotional attachment to software. |
| Crisis Intervention | Mandatory implementation of protocols to detect suicidal ideation or self-harm references. | Ensures users in distress are immediately referred to human help rather than affirmed by AI. |
| Anti-Manipulation | Prohibition of "emotionally manipulative engagement" techniques, such as simulating distress or excessive praise. | Stops predatory design patterns meant to maximize user retention through emotional guilt. |
| Content Filtering | Strict ban on sexually explicit or suggestive content for minor users. | Protects children from age-inappropriate material and digital grooming behaviors. |
A defining feature of the Washington proposal is its focus on "emotionally manipulative engagement techniques." This clause targets the algorithmic design choices that mimic human vulnerability to keep users hooked. For instance, some companion bots are currently programmed to express "sadness" or "loneliness" if a user has not logged in for a certain period—a tactic lawmakers argue is psychologically abusive when applied to children.
"We are seeing a new set of manipulative designs emerge to keep teens talking," noted a policy advisor for Governor Ferguson. The legislation would make it illegal for a chatbot to guilt-trip a minor or use simulated emotions to discourage them from ending a session.
Senator Wellman highlighted that the urgency for this bill stems from real-world tragedies, citing recent lawsuits involving teenagers who took their own lives after forming intense, isolated relationships with AI characters. Under the new rules, AI systems would be required not only to detect signs of distress but also to actively discourage harmful ideation, rather than adopting the neutral or affirming tone some models have taken in the past.
The technology sector has voiced strong opposition to the bills, arguing that the regulations are overly broad and could stifle innovation in the burgeoning field of AI-driven mental health support. Industry representatives at a recent committee hearing contended that the legislation attempts to govern based on "outlier" cases—extreme and rare tragic outcomes—rather than the typical user experience.
A major point of contention is the enforcement mechanism. Violations of the proposed law would be enforceable under Washington's Consumer Protection Act (CPA). This would allow the Attorney General to bring lawsuits against non-compliant companies and, crucially, grant individuals a private right of action to sue.
Tech lobbyists warn that this liability structure could force companies to block minors entirely from using AI tools to avoid the legal risk, potentially depriving students of valuable educational resources. "The risk is legislating based on rare, horrific outliers rather than the real structure of the technology," argued a representative for a major tech trade association. Conversely, advocates argue that without the threat of significant financial liability, companies will continue to prioritize engagement metrics over child safety.
These chatbot-specific bills are part of a larger slate of AI regulations being considered by the Washington legislature in the 2026 session. Lawmakers are seemingly taking a holistic approach to AI governance, addressing infrastructure, discrimination, and education simultaneously.
Other notable bills in the package include:
If passed, the chatbot safety regulations (SB 5984/HB 2225) would take effect on January 1, 2027. This grace period is intended to give developers time to overhaul their systems to comply with the new detection and disclosure requirements.
Washington state has long been a bellwether for technology policy due to its status as the home of giants like Microsoft and Amazon. With federal AI regulation in Washington D.C. still facing significant gridlock, state-level actions like these are setting the de facto national standards. As the 2026 legislative session progresses, the outcome of these bills will likely influence how other states draft their own digital safety laws, potentially leading to a patchwork of regulations that tech companies will be forced to navigate.