
The intersection of artificial intelligence and federal governance has reached a pivotal moment as the Trump administration unveils plans to deploy Google’s Gemini AI for drafting federal regulations. In a move described by officials as a "revolution" in rulemaking, the U.S. Department of Transportation (DOT) is positioning itself as the vanguard of this automated shift, aiming to drastically compress the time required to create complex regulatory frameworks.
This initiative, which marks a significant departure from traditional bureaucratic processes, seeks to leverage the generative capabilities of Large Language Models (LLMs) to produce draft regulations in a fraction of the usual time. While proponents argue this will eliminate bottlenecks and modernize government efficiency, the strategy has ignited a firestorm of debate regarding the safety, accuracy, and legal integrity of delegating high-stakes governance to algorithms.
At the heart of this initiative is a fundamental shift in the philosophy of regulatory quality. Gregory Zerzan, the DOT’s general counsel, has reportedly championed a doctrine prioritizing speed and volume over meticulous perfection. During internal meetings, Zerzan emphasized that the agency does not require "the perfect rule" or even a "very good rule," but rather one that is "good enough."
This approach aligns with a broader strategy to "flood the zone" with new regulations, utilizing AI to bypass the human "choke point" that typically slows down the drafting process. Under this new paradigm, the DOT aims to shorten the timeline from concept to a complete draft ready for review by the Office of Information and Regulatory Affairs (OIRA) to just 30 days, a process that traditionally spans months or years.
The reliance on Google Gemini is central to this acceleration. Officials claim that the AI model can generate a draft rule in approximately 20 minutes, a feat that would fundamentally alter the pacing of federal rulemaking. However, this focus on velocity raises critical questions about the depth of legal analysis and technical scrutiny applied to rules that govern essential safety standards for aviation, pipelines, and rail transport.
The technical implementation of this plan involves using a version of Google’s Gemini to draft the bulk of regulatory text. During a demonstration in December 2025, a presenter, whom attendees believed to be Acting Chief AI Officer Brian Brotsos, showcased the model's ability to generate a "Notice of Proposed Rulemaking" based solely on topic keywords.
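The department has not published its prompts, tooling, or model configuration, but the basic interaction is easy to picture. The sketch below uses Google's public google-generativeai Python SDK to request an NPRM draft from topic keywords; the model name, prompt template, and example keywords are illustrative assumptions, not details from the DOT demonstration.

```python
# Illustrative sketch only: the DOT has not disclosed its prompts,
# tooling, or model configuration. This uses Google's public
# google-generativeai SDK; the model name, prompt template, and
# keywords below are assumptions made for demonstration purposes.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # hypothetical credential

model = genai.GenerativeModel("gemini-1.5-pro")  # assumed model tier

def draft_nprm(keywords: list[str]) -> str:
    """Request a first-pass Notice of Proposed Rulemaking from the model."""
    prompt = (
        "Draft a Notice of Proposed Rulemaking (NPRM) for the U.S. "
        "Department of Transportation covering the following topics: "
        + ", ".join(keywords)
        + ". Include a summary, the statutory authority, and proposed rule text."
    )
    response = model.generate_content(
        prompt,
        generation_config=genai.types.GenerationConfig(temperature=0.2),
    )
    return response.text

print(draft_nprm(["pipeline safety", "leak detection standards"]))
```

Even in this toy form, the design choice is visible: the human contribution shrinks to a list of keywords, and everything else, including the citation of statutory authority, is left to the model, which is precisely where critics expect hallucinations to surface.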
The demonstration highlighted both the potential and the pitfalls of current generative AI technology.
The proposal suggests a future where human regulators shift from authors to auditors, monitoring "AI-to-AI interactions" rather than engaging in deep substantive drafting. This model presumes that the efficiency gains outweigh the risks associated with AI "hallucinations"—confidently stated but factually incorrect outputs common in generative models.
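What such "AI-to-AI" workflows might look like is likewise speculative, but a minimal version pairs a drafting model with a second model that audits its output before a human sees it. The sketch below assumes the same public SDK; the model choices, prompts, and the auditing step itself are illustrative guesses, not a description of the DOT's system.

```python
# Hypothetical sketch of the "drafter plus auditor" pattern implied by
# "AI-to-AI interactions." Nothing here describes the DOT's real system;
# the model names and prompts are assumptions for illustration.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # hypothetical credential

drafter = genai.GenerativeModel("gemini-1.5-pro")    # assumed drafting model
auditor = genai.GenerativeModel("gemini-1.5-flash")  # assumed review model

def draft_and_audit(topic: str) -> dict[str, str]:
    # Step 1: one model produces the draft rule text.
    draft = drafter.generate_content(
        f"Draft a proposed federal regulation on: {topic}"
    ).text

    # Step 2: a second model critiques the draft, flagging citations
    # that may be fabricated and provisions that contradict each other.
    critique = auditor.generate_content(
        "Review this draft regulation. List any legal citations that may "
        "be fabricated and any internally inconsistent provisions:\n\n" + draft
    ).text

    # Step 3: a human regulator reviews both artifacts, acting as the
    # auditor of an AI-to-AI exchange rather than as the author.
    return {"draft": draft, "critique": critique}
```

The obvious weakness, and the one skeptics point to, is that the auditing model is subject to the same failure modes as the drafting model, so the human checkpoint cannot safely be made thinner.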
The rapid integration of AI into safety-critical rulemaking has drawn sharp criticism from internal staff and external experts. The primary concern is the reliability of LLMs in interpreting complex statutory requirements and case law without human-level reasoning.
Mike Horton, the DOT’s former acting chief AI officer, offered a stark critique, comparing the initiative to "having a high school intern" draft federal regulations. His warning underscores the potential consequences of errors in sectors where regulations directly impact human safety. "Going fast and breaking things means people are going to get hurt," Horton stated, referencing the Silicon Valley mantra that the DOT appears to be adopting.
Current staff members have also expressed alarm, noting that the "human in the loop" role described by leadership may be insufficient to catch subtle but legally significant errors generated by the AI. The fear is that the sheer volume of AI-generated text could overwhelm human reviewers, leading to a rubber-stamping process that creates vulnerabilities in the federal regulatory framework.
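The arithmetic behind that fear is easy to sketch. In the back-of-envelope calculation below, only the 20-minute drafting figure traces to officials' claims; the 40-hour review estimate is a purely hypothetical assumption chosen to show how quickly generation capacity can outrun review capacity.

```python
# Back-of-envelope comparison of drafting throughput vs. review capacity.
# Only the 20-minute drafting time comes from officials' claims; the
# 40-hour review estimate is an invented assumption for illustration.
DRAFT_MINUTES_PER_RULE = 20   # claimed Gemini drafting time
REVIEW_HOURS_PER_RULE = 40    # assumed expert review time

drafts_per_week = (5 * 8 * 60) / DRAFT_MINUTES_PER_RULE       # 120 drafts
reviewer_weeks_needed = drafts_per_week * REVIEW_HOURS_PER_RULE / 40

print(f"Drafts one seat can generate per week: {drafts_per_week:.0f}")
print(f"Reviewer-weeks needed to vet them:     {reviewer_weeks_needed:.0f}")
# Under these numbers, a single drafting seat saturates 120 reviewers.
```

However the review estimate is set, the shape of the result is the same: drafting ceases to be the bottleneck, and review becomes one, which is the "rubber-stamping" dynamic staff members describe.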
The divergence in perspective between the administration's technology boosters and safety advocates is profound. The following table outlines the conflicting core arguments shaping this policy shift.
| Perspective | Key Arguments | Primary Focus | Representative Stance |
|---|---|---|---|
| Proponents | AI eliminates bureaucratic bottlenecks; "Good enough" drafts are sufficient for initial stages; Humans slow down the process. | Speed, Volume, Efficiency | Gregory Zerzan: "We want good enough... We're flooding the zone." |
| Skeptics | LLMs lack legal reasoning and accountability; Hallucinations pose safety risks; Complex regulations require deep expertise. | Safety, Accuracy, Legality | Mike Horton: "Having a high school intern... doing your rulemaking." |
| Legal Experts | High-volume generation does not equal high-quality decision-making; Risk of violating administrative law standards. | Administrative Integrity | Bridget Dooling: "Words don't add up to a high-quality government decision." |
The DOT’s initiative is not an isolated experiment but part of a wider push by the Trump administration to embed artificial intelligence into the federal apparatus. Following a series of executive orders supporting AI development, the administration has signaled a clear intent to utilize technology to deregulate and restructure government operations.
This aligns with proposals from external advisory bodies, such as the Department of Government Efficiency (DOGE) associated with Elon Musk, which has advocated for using automated tools to drastically reduce the federal regulatory code. The "point of the spear" rhetoric used by DOT officials suggests that the Transportation Department is serving as the pilot program for a government-wide adoption of AI drafting tools.
The deployment of Google Gemini by the DOT represents a high-stakes test case for the utility of generative AI in public administration. While the promise of expediting the sluggish rulemaking process is undeniable, the strategy tests the limits of current AI reliability. As the agency moves forward with "flooding the zone," the tension between the demand for speed and the imperative for safety will likely define the next era of federal regulation. For the AI industry, this serves as a critical observation point: whether a general-purpose LLM can truly master the nuance of federal law, or whether the machine's "hallucinations" will lead to real-world consequences.