First Amendment Consequences of AI Speech Classification
01. Introduction
This research asks one question: What happens if courts classify AI output as protected speech under the First Amendment?
The question is not hypothetical. In May 2025, Judge Conway ruled in Garcia v. Character Technologies that the court was "not prepared to hold that Character A.I.'s output is speech."1 That careful refusal preserved a full range of legal claims against the company whose chatbot was implicated in a teenager's suicide. But the question Conway declined to answer remains open — and the scholarly debate over how to answer it has produced five competing frameworks with incompatible conclusions.
This project analyzed the question through five angles, each designed to reveal consequences invisible to the others:
Angle 1 — Doctrinal Stress Test: Walk "AI output is speech" through each First Amendment exception doctrine and map where the machinery fails.
Angle 2 — Collision Map: Trace consequences across product liability, Section 230, and state regulatory authority.
Angle 3 — Regulatory Void: Determine which regulatory tools survive when speech classification and preemption combine.
Angle 4 — Framework Audit: Interrogate the assumptions and blind spots of each scholarly framework.
Angle 5 — Conway Counterfactual: Flip Conway's holding and trace the cascade through four legal domains.
The central finding: the right question is not "Is AI output speech?" but "Can the legal system address AI harms without answering that question?" Conway's order suggests it can. Our five-angle analysis suggests it must.
02. The Doctrinal Stress Test
If AI output is protected speech, it must be subject to the same exception doctrines that permit regulation of harmful human speech — incitement, true threats, obscenity, fraud, child exploitation, speech integral to criminal conduct, and commercial speech. We walked the speech classification premise through each exception and mapped where the doctrinal machinery fails against non-human speakers.
The Intent Gap
The core finding is structural: six of seven exception doctrines require some form of speaker intent, mental state, or communicative purpose. AI has none. Brandenburg requires speech "directed to inciting" imminent lawless action.2 Virginia v. Black applies only where the speaker "means to communicate a serious expression of an intent to commit an act of unlawful violence."3 Counterman v. Colorado requires "conscious disregard" of a known risk.4 Henderson, Lemley, and Hashimoto identify the central barrier: "AI doesn't 'intend' anything."5
This is not a marginal observation — it is the structural failure at the center of every exception doctrine. The protective architecture of the First Amendment maintains full force (it does not require speaker intent), while the enforcement architecture (which does require intent) fails completely.
Exception-by-Exception Results
| Exception | Key Requirement | AI Failure | Gap |
|---|---|---|---|
| Incitement (Brandenburg) | "directed to inciting" | No directing agent; no intent | HIGH |
| True Threats (Virginia v. Black) | Requires speaker to mean to communicate serious expression of intent | No communicative intent; no consciousness | HIGH |
| Obscenity (Miller) | Community standards + SLAPS value | No community; no author; roles inverted | MED-HIGH |
| False Statements (Alvarez) | Scienter for specific claims | No scienter; scale overwhelms counterspeech | HIGH |
| Child Exploitation (Ferber/Ashcroft) | Production-harm rationale | AI severs production from content | CRITICAL |
| Criminal Conduct (Giboney) | Speech integral to criminal course | AI is not criminal actor; intent with user | HIGH |
| Commercial Speech (Central Hudson) | Lawful activity, not misleading (threshold test) | Listener-perspective threshold survives; classification unclear | MEDIUM |
The CSAM Gap
The most urgent gap involves child sexual abuse material. Ferber permits banning child pornography because its production necessarily exploits real children.6 Ashcroft held that virtual child pornography involving no actual children is protected unless independently obscene.7 AI-generated CSAM exploits no child in production, placing it on Ashcroft's protected side — while Miller's obscenity test itself becomes incoherent when applied to AI (no community, no author, audience/speaker roles inverted). The result: AI can produce photorealistic child sexual abuse imagery at industrial scale, and the exception machinery cannot reach it.
The Commercial Speech Exception
Central Hudson is the least broken exception because its threshold test — whether speech is misleading — evaluates from the listener's perspective rather than the speaker's mental state.8 However, determining whether AI output qualifies as "commercial speech" subject to Central Hudson or non-commercial speech subject to strict scrutiny is itself an unresolved classification problem.
03. The Collision Map
Speech classification is not a standalone doctrinal event. It is a classification decision with cascading effects across product liability, Section 230 immunity, and state regulatory authority — domains that evolved independently but intersect catastrophically when AI forces them into the same analytical space.
The Conway Line
Conway drew the most explicit judicial boundary between AI-as-product and AI-as-expression currently on record. Character A.I. is a product for product liability purposes “so far as Plaintiff’s claims arise from defects in the Character A.I. app rather than ideas or expressions within the app.”9 This separation, drawing on James v. Meow Media's principle that courts “separate the sense in which the tangible containers of ideas are products from their communicative element,”10 allowed product liability claims to proceed while leaving the speech question open.
The Social Media Adolescent Addiction MDL similarly found that alleged defects in platform functionalities were “analogizable to tangible personal property” rather than “akin to ideas, content, and free expression.”11 These holdings establish a pattern: courts can separate design-feature claims from expression claims, processing AI harm cases without reaching First Amendment questions.
The Double Shield
Volokh argues that Section 230 likely doesn't protect AI companies for content their systems compose — because the AI is generating content, not hosting third-party content.12 Bambauer and Surdeanu counter that LLMs can be understood "under CDA 230 as both a platform (for existing content within training databases) and a user (for newly generated content)."13
If AI output is classified as speech, this debate becomes less important — because the speech classification provides independent constitutional protection that Section 230 was never designed to provide. Section 230 blocks civil tort liability; the First Amendment blocks regulatory liability. Together, they create a double shield that leaves AI-generated harm with fewer liability pathways than either human speech or human-created products.
The Scale Amplification
Scale operates differently across domains: it's neutral in product liability (more plaintiffs but same per-claim framework), but amplifies Section 230's protective logic (platform immunity exists precisely because content policing at scale is impractical) and strengthens preemption arguments (AI companies operating in all 50 states simultaneously face compliance burdens that favor federal uniformity).
04. The Regulatory Void
When speech classification and federal preemption combine, the regulatory landscape does not become a total void — it becomes a reconfigured landscape where certain tool categories remain viable, others require constitutional redesign, and a few are structurally foreclosed. The shape depends critically on which scholarly framework prevails.
The Two-Constraint Squeeze
Constraint one: if AI output is speech, content-based regulations face strict scrutiny while content-neutral regulations face intermediate scrutiny. As Salib documents, the First Amendment “is much more deferential toward regulations aimed instead at some particular venue for speaking, some particular method of speaking, or some particular organizational or financial structure of the speakers” than toward regulations targeting "particular messages."14
Constraint two: federal preemption currently operates as potential rather than actual — no comprehensive federal AI framework exists. But the anticipation of federal action already shapes state legislative behavior.
Five Surviving Tool Categories
| Tool Category | Survives All Frameworks? | Constitutional Basis | Preemption Risk |
|---|---|---|---|
| Design-feature regulation | Yes | Content-neutral; product safety | Low |
| Transparency/disclosure | Yes | Consistent with listener rights | Moderate |
| Consumer protection | Mostly | Commercial speech doctrine | Moderate |
| Risk-based regulation | Partially | Effects-based, not content-based | High |
| Criminal sanctions | Narrowly | Conduct, not speech | Low |
Even Volokh, Lemley, and Henderson — the strongest proponents of AI speech protection — acknowledge that disclosure requirements "may be constitutional because of the special nature of AI speech."15 This makes disclosure the one regulatory tool that survives across every scholarly framework.
The Salib Alternative
If Salib's non-speech framework prevails, the regulatory toolkit expands dramatically: AI output occupies the First Amendment's penumbra rather than its core, subjecting regulation to intermediate scrutiny under O'Brien rather than strict scrutiny.16 Under this construction, output-content restrictions, sector-specific deployment requirements, and mandatory output filtering all become constitutionally viable with appropriate tailoring.
The Kaminski-Jones Construction
Kaminski and Jones offer a third path: rather than asking "is AI output speech?" they ask “how should law construct AI-generated content for regulatory purposes?” Their speech-at-scale construction shifts the regulatory question from restricting individual outputs to governing systemic risk — “from talking about individual First Amendment rights… to talking about speech systems and population-level risks.”17
05. The Framework Audit
Five frameworks dominate the current scholarly debate. Each offers a different answer to the threshold question — Is AI output "speech" within the meaning of the First Amendment? — because they begin from different premises about what the First Amendment protects and why.
Framework 1: Volokh, Lemley & Henderson — Listener Rights
The framework's most distinctive contribution: even if AI itself is not a speaker, users have a constitutionally protected interest in receiving AI-generated information. A law restricting AI output on topics like “abortion or gender identity or climate change” would “undermine users’ ability to hear arguments that they might find persuasive.”18
Blind spot: The framework assumes listeners can distinguish AI from human speech, treats information value as speaker-independent, and systematically avoids adversarial examples (AI-generated CSAM, bioweapon instructions, individualized propaganda). If listener rights protect AI political speech, the framework must explain why they do not equally protect these outputs.
Framework 2: Salib — Penumbral Non-Speech
Salib argues AI output is not itself protected speech but falls within the First Amendment's penumbra — receiving derivative protection because regulating it may incidentally burden actual human speech. Corporate speech doctrine cannot elevate AI output because that doctrine exists to prevent “otherwise-protected human speech from losing its protections upon contact with the corporate form” — and AI output is not “any human’s protected speech.”19
Blind spot: The penumbral zone may not be judicially manageable for AI, and the intermediate scrutiny standard contains a circularity: whether regulation is “related to the suppression of free speech” depends on whether AI output is speech — the very question the framework answers.
Framework 3: Kaminski & Jones — Legal Construction
The framework holds that law does not passively respond to technologies but actively constructs their legal meaning. The debate over whether AI output "is" speech is the wrong question — the question is which legal construction of AI advances the values we want the legal system to serve.20
Blind spot: The construction metaphor relocates rather than resolves the problem. Saying that law constructs AI output accurately describes the process but does not provide criteria for choosing among constructions.
Framework 4: Barrett — The Spectrum
Justice Barrett's Moody concurrence identifies a spectrum from human editorial curation to autonomous AI generation, suggesting that technology may “attenuate the connection” between content decisions and human constitutional rights.21 Conway's order directly engages this framework, finding it "instructive" for the AI-speech intersection.22
Blind spot: The fact-intensive, case-by-case approach is resource-regressive and scale-incompatible. Perpetual ambiguity provides de facto protection without constitutional warrant.
Framework 5: Austin & Levy — Speech Certainty
Austin and Levy advance the principle that speech receives First Amendment protection only when the speaker knows the content of their speech at the time of publication. At the Founding, "there was no concept of speech uncharacterized by speech certainty."23
Blind spot: Conflates technological limitation with constitutional command. Human speech is also probabilistic at many levels — extemporaneous speakers do not know in advance the exact words they will use.
| Framework | Core Strength | Primary Blind Spot | Unintended Consequence |
|---|---|---|---|
| Volokh/Lemley/Henderson | Grounded in established listener-rights doctrine | Assumes information value is speaker-independent | Protects harmful AI output while exception doctrines fail |
| Salib | Provides regulatory space through penumbral classification | Intermediate scrutiny test is circular | Permits viewpoint-discriminatory regulation under safety pretexts |
| Kaminski & Jones | Correctly identifies classification as constructed choice | Does not provide criteria for choosing among constructions | Framework logic can justify maximally speech-restrictive constructions |
| Barrett | Doctrinally conservative; avoids categorical errors | Resource-regressive, scale-incompatible | Perpetual ambiguity favors regulated entities |
| Austin & Levy | Genuinely novel structural principle | Conflates tech limitation with constitutional command | Removes protection from existing ML systems (recommendation engines) |
06. The Conway Counterfactual
Conway wrote: "The Court is not prepared to hold that Character A.I.'s output is speech." We flipped that holding, stipulating instead that Conway held the opposite: Character A.I.'s output IS protected speech under the First Amendment. Then we traced the consequences through four domains drawn from the full research portfolio.
Domain 1: Product Liability
The Conway Line collapses. If AI output is speech, the "communicative element" side of the product/expression divide expands to consume nearly everything the application produces. Product liability claims survive only for content-agnostic design features: age verification, session timers, addiction-prevention architecture. Claims targeting what the chatbot said are foreclosed. The parent whose child was harmed by chatbot conversation would need to prove a design defect (no age gate, no session limit) rather than pointing to what the chatbot actually said.
Domain 2: Section 230
The double shield emerges. Even if Section 230 is inapplicable to AI-generated content (following Volokh's analysis), speech classification provides independent constitutional protection. A plaintiff suing over harmful AI output would need to overcome both the argument that 230 immunizes the company as a platform and the argument that the First Amendment protects the output as speech.
Domain 3: State Regulation
Design-feature regulation survives (content-neutral). Transparency requirements partially survive. Consumer protection largely survives for commercial AI applications. Risk-based regulation partially survives for process requirements but not output-category restrictions. Criminal sanctions are substantially weakened — only strict liability offenses survive the intent void.
Domain 4: Federal Preemption
Speech classification creates preemption-like effects without requiring federal legislation. The First Amendment itself is federal law — when state regulation is challenged on First Amendment grounds, federal constitutional standards effectively displace state regulatory authority through the Fourteenth Amendment's incorporation of the First Amendment against the states. States cannot serve as "laboratories of democracy" for AI regulation to the extent their experiments touch AI output.
The Classification Cascade
Each domain's consequences compound the next: product liability narrows → the double shield forms → state regulation faces strict scrutiny → preemption forecloses innovation around the constitutional floor. The cascade creates what we term a regulatory singularity — a point where multiple protective mechanisms converge to create near-complete insulation for AI output from legal accountability.
07. What the Five Angles See Together
The cross-angular integration produced findings that no single-domain analysis could have revealed.
Four Convergence Points
1. The Intent Void (all five angles). The single most reinforced finding. Every angle independently confirmed that the absence of speaker intent in AI systems creates a structural failure no doctrinal tool, scholarly framework, or regulatory mechanism has resolved. The exception-doctrine failures (Angle 1), the perverse incentives (Angle 2), the regulatory gaps (Angle 3), the scholarly impasse (Angle 4), and the accountability inversion (Angle 5) are all downstream consequences of a single architectural mismatch: First Amendment doctrine assumes all speakers have mental states.
2. Conway Pragmatism (four of five angles). Four angles independently validated Conway's "not prepared to hold" as the soundest judicial response to a question the existing framework cannot answer without breaking. None of the five scholarly frameworks accounts for the possibility that courts will simply route around the speech question rather than through it.
3. Scale as the Undertheorized Dimension (four angles). Scale appears in every angle but receives serious engagement in none of the scholarly frameworks. The debate is conducted at the level of individual instances while practical consequences operate at the level of systems producing billions of outputs.
4. Classification as System Event (four angles). The speech/non-speech classification has properties of a phase transition: above the threshold, the entire legal system reorganizes around one configuration; below it, around a completely different one.
The Accountability Inversion
Across all four consequence domains, the counterfactual produces a consistent structural pattern: the entities with the greatest capacity to cause harm receive the greatest legal protection, while the parties least able to prevent harm lose their most effective legal tools. Each protective mechanism was designed for a different vulnerable party — the unpopular speaker, the small platform, the innovative startup. When applied to AI at scale, these mechanisms protect the most powerful economic entities from accountability for their products' core function.
The constitutional framework for speech was calibrated for a world with three properties: speakers have intentions, outputs have human authors, and the volume of harmful speech is bounded by human capacity. AI removes all three calibration assumptions simultaneously.
08. What to Do About It
Three Structural Findings
The Intent Void — the structural problem from which most other findings derive. Protection without accountability is the mechanism behind every consequence identified in this project.
The Regulatory Singularity — product liability, Section 230, First Amendment protection, and preemption converge to create near-complete insulation. Visible only through cross-domain analysis.
Conway's Pragmatism — routing around the speech question rather than through it. Validated across four of five analytical angles as the soundest available judicial response.
Three Actionable Findings
A. The CSAM Gap Is the Most Immediate Legislative Opportunity. AI-generated child sexual abuse material falls through a specific doctrinal gap between Ferber's production-harm rationale and Ashcroft's virtual-child-pornography protection. This gap is uniquely actionable: the problem is specific, it generates near-universal political consensus, it doesn't depend on resolving the broader speech classification question, and it has existing legislative models (the PROTECT Act). This is the narrowest, most defensible legislative or amicus opportunity identified in the project.
B. Disclosure Requirements Are the Constitutional Common Ground. Across all five scholarly frameworks, disclosure and transparency requirements survive constitutional scrutiny. They are not just a surviving regulatory tool — they are a force multiplier for other surviving tools. When listeners know they're interacting with AI, counterspeech mechanisms and informed-listener assumptions are strengthened.
C. Conway's Line Is a Replicable Judicial Strategy. Other courts facing AI harm cases can adopt the same approach: decline to classify AI output as speech, separate design-feature claims from output-content claims, and allow product liability, consumer protection, negligence, and deceptive-practices claims to proceed. The strongest version: courts do not need to resolve the speech question to adjudicate AI harm, and attempting to resolve it produces cascading consequences the existing framework cannot absorb.
Emergent Research Questions
1. Can the regulatory singularity be disrupted without resolving the classification question — by institutionalizing Conway's avoidance of it?
2. Does the accountability inversion have historical precedents (corporate personhood, commercial speech expansion), and what doctrinal adaptations resolved them?
3. What happens at the Barrett spectrum's autonomous-generation endpoint — and can it be articulated in doctrinally rigorous terms?
09. Methodology & Evidence Base
Research Methodology
This project synthesizes primary legal sources across five domains: constitutional doctrine, statutory and regulatory frameworks, product liability law, platform immunity, and federal preemption. Each domain was researched independently before cross-domain integration. All source materials were collected directly from official repositories (court opinions, legislative databases, and published law review articles) and verified for accuracy before synthesis.
Source Base
| Domain | Content |
|---|---|
| AI Speech Scholarship | Law review articles and scholarly frameworks on AI speech classification |
| First Amendment Doctrine | Supreme Court opinions from Brandenburg (1969) through Moody (2024) |
| AI Litigation | Garcia v. Character Technologies and related AI harm cases |
| Product Liability | Platform liability cases and information-product doctrine |
| Federal Preemption | Preemption doctrine and state AI legislation |
Analytical Constructions
Several terms in this project — including "regulatory singularity," "accountability inversion," and "five questions disguised as one" — describe patterns identified through cross-domain analysis that do not appear in any single source. These constructions are explicitly flagged throughout to distinguish them from source-grounded findings.
10. Source Inventory
Primary Judicial Sources
| Source | Citation |
|---|---|
| Garcia v. Character Technologies | No. 6:24-cv-1903 (M.D. Fla. May 21, 2025) (Conway, J.) |
| Moody v. NetChoice | 603 U.S. ___ (2024) |
| Brandenburg v. Ohio | 395 U.S. 444 (1969) |
| Virginia v. Black | 538 U.S. 343 (2003) |
| Miller v. California | 413 U.S. 15 (1973) |
| United States v. Alvarez | 567 U.S. 709 (2012) |
| New York v. Ferber | 458 U.S. 747 (1982) |
| Ashcroft v. Free Speech Coalition | 535 U.S. 234 (2002) |
| Central Hudson v. Public Serv. Comm'n | 447 U.S. 557 (1980) |
| Giboney v. Empire Storage & Ice Co. | 336 U.S. 490 (1949) |
| Murphy v. NCAA | 584 U.S. ___ (2018) |
| In re Soc. Media Adolescent Addiction MDL | 702 F. Supp. 3d 809 (N.D. Cal. 2023) |
Scholarly Sources
| Source | Citation |
|---|---|
| Volokh, Lemley & Henderson | Freedom of Speech and AI Output, J. Free Speech L. (2023) |
| Henderson, Lemley & Hashimoto | Where's the Liability for Harmful AI Speech?, J. Free Speech L. (2023) |
| Salib | AI Outputs Are Not Protected Speech, 102 Wash. U. L. Rev. 83 (2024) |
| Austin & Levy | Speech Certainty, 77 Stan. L. Rev. 1 (2025) |
| Kaminski & Jones | Constructing AI Speech, Yale L.J. Forum (Apr. 2024) |
| Harvard Law Review | Beyond Section 230, 138 Harv. L. Rev. 1657 (2025) |
| Lidsky & Daves | Defamation by Hallucination, J. Free Speech L. (2025) |
| Lubin | On Software Bugs and Legal Bugs, 100 Ind. L.J. 1891 (2025) |