Senator Marsha Blackburn (R-TN) released a 291-page legislative discussion draft last week that would repeal Section 230, impose new tort liability on AI developers, and require age verification for AI chatbot providers.

The bill — formally titled "The Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry," or TRUMP AMERICA AI Act — is tied to President Trump's December 2025 Executive Order on AI. Blackburn frames it as a replacement for what she calls a "patchwork of state laws" that leaves companies subject to inconsistent AI rules across jurisdictions. Enforcement authority would be split across the FTC, DOJ, NIST, and the Department of Energy, with state attorneys general and private litigants also granted standing.

The Electronic Frontier Foundation, which has opposed Section 230 repeal in every major legislative cycle since 2018, has flagged the provision as the bill's most far-reaching element. Under the draft, platforms would lose immunity protections for third-party content after a two-year transition — a window the EFF argues is too short for companies to restructure content moderation systems built around existing safe harbors.

The AI liability framework may create the starkest operational pressure for developers. Under the proposed structure, AI companies could face lawsuits for defective design, failure to warn, and deployment of systems deemed "unreasonably dangerous." Key terms — "harm," "foreseeable," and "contributing factor" — are left undefined in the statutory text. <a href="/news/2026-03-14-lawyers-ai-hallucinations-privilege-courts">Regulators and courts</a> would fill those gaps retroactively, meaning a developer could face liability under standards that did not exist when the system shipped. The structural incentive is clear: build systems that refuse more, flag more, and ship less.

The draft absorbs four previously standalone bills: the Kids Online Safety Act (KOSA), the <a href="/news/2026-03-15-hollywood-ai-oscars-deepfakes-jobs">NO FAKES Act</a>, the GUARD Act, and the AI LEAD Act. For agent developers, KOSA's inclusion is the most operationally significant — it would require AI and social platforms to modify personalized recommendation engines and restrict notification and autoplay systems, regulating information delivery infrastructure, not just content. The NO FAKES Act provisions extend liability to platforms hosting unauthorized AI-generated replicas of individuals' voices or likenesses. The GUARD Act mandates age verification at scale — and as privacy researchers have documented, identity data collected for authentication does not simply disappear after the check clears.

The bill also declares that AI training on copyrighted works does not constitute fair use, a provision that would expose most major AI labs to retroactive copyright liability for models already in the market.

The draft has not been formally introduced for a vote. If it advances in anything close to its current form, every company deploying conversational AI agents in the U.S. would face overlapping exposure: KOSA compliance for recommendation systems, NO FAKES Act liability for voice or likeness features, and open-ended tort risk for any output a court later deems harmful under definitions that do not yet exist.