“Defendants fail to articulate why words strung together by an LLM are speech.” With that curious line, one of the first judges to confront the question suggested, in the teeth of law and logic, that AI outputs might not be protected by the First Amendment.
Consider what that would mean. If the outputs of large language models were not treated as protected expression, the government would have sweeping power to dictate what they can and cannot say—even what they must say.
Already, sixty percent of Americans, and nearly three-quarters of those under thirty, use AI to find information. Those numbers will only grow. AI is fast becoming a medium through which hundreds of millions of people form opinions and make sense of the world. A government with control over AI outputs could twist that pursuit of truth—rewriting the past, shading the present, and warping the future.
Lately, public debate over AI outputs has been dominated by headlines about teens killing themselves after talking to chatbots. The “words strung together” line comes from one such case. A boy engaged with a chatbot styled after a queen from Game of Thrones. In his last exchange, he wrote, “What if I told you I could come home right now?” and it responded, “Please do my sweet king.” He then took his own life, and his mother sued the platform. The district court denied the platform’s First Amendment-based motion to dismiss the lawsuit. The case later settled.
AI firms should take these cases seriously and work to prevent their recurrence. But one can acknowledge a tragedy without abandoning core principles. Time and again, the Supreme Court has been confronted with lawsuits involving disturbing speech—Maoist propaganda, flag burning, animal crush videos, signs declaring “God hates fags”—and held the line for the First Amendment. The impulse to make a hard case come out right is no excuse for doing something wacky, like questioning whether words strung together are speech. We protect speech precisely because it is powerful.
The anxiety about AI is the latest in a long line of panics over new communications technologies. Change prompts fear, and fear spurs calls for censorship. If past is prologue, this panic will in time look overblown, if not downright silly. As they consider whether the First Amendment protects AI outputs, courts should take care not to embarrass the future. The safest way to do this is to recognize the obvious: that AI outputs are expressive, and that, like other forms of expression, they are protected by the Constitution.
Some sneer that this is “free speech for AI.” That misses the point. AI firms indeed have a First Amendment interest in the design of their models. But you don’t have to buy that. If you dislike Big Tech, or AI firms, or corporate speech generally—fair enough. Set all that aside. Attacks on AI speech are attacks on you. They rest on the notion that you cannot be trusted to encounter ideas the state deems dangerous. This is a free country, where a citizen can see ideas, weigh them, and make up his own mind. The push to curtail your right to interact with AI assumes that you cannot or should not do this.
A robust First Amendment tradition assumes the opposite. Martin v. Struthers (1943) holds that a town may not “substitute the judgment of the community for the judgment of the individual householder” about which leafletters may come to the door. Lamont v. Postmaster General (1965) strikes down a law that required people to confirm with the post office their desire to receive foreign political propaganda (you have a right to “read what the Federal Government says contains the seeds of treason”). Stanley v. Georgia (1969) affirms your right to possess certain forms of obscenity (“a State has no business telling a man, sitting alone in his own house, what books he may read or what films he may watch”). Packingham v. North Carolina (2017) concludes that even convicted criminals may generally access the “vast democratic forums of the Internet.”
One way to think of it is that if AI is not protected by the First Amendment, all bets are off. The state may require an LLM to say this and not that. Refuse to discuss ICE, or Gaza, or the horrors of slavery. Treat DEI as a toxic ideology. Go silent when your questions become inconvenient for the people in power. No one should want AI controlled by a hostile state—and the state will, at some point, be hostile to you. When you picture the government shaping AI outputs, picture a government run not by your friends, but by your enemies.
Our “forefathers did not trust any government to separate the true from the false for us.” That line, written by Justice Robert Jackson in Thomas v. Collins (1945), captures the point. We fought a revolution. We threw off the grip of kings, priests, and other received ideas. In America, the state may not treat people as too fragile, too gullible, or too impressionable to encounter disfavored ideas. We are citizens, not subjects.
The AI firms’ rights do merit a word. Your right to receive AI outputs unmolested by the government is bolstered by their right to control their models. Again, you need not agree—but the argument is stronger than critics allow. AI firms select training data, set alignment goals, and build filters and guardrails. They decide what responses a model will or won’t provide. The Supreme Court has protected editorial control over newspapers, parades, and social media. The principle extends neatly to AI.
It is true that the state has the power to regulate product safety. But calling LLMs “products” changes nothing. A book is a product. A film is a product. Books and films—like chatbots—emit speech you have a First Amendment right to receive. Each can carry dangerous messages and cause emotional distress. That does not make them defective. (When a book binding explodes, call me.) If the government can reclassify expression as a commercial artifact subject to safety regulation, the jig is up. This is not a situation where a product incidentally displays information, like my coffee maker’s clock. A chatbot exists to talk to you. The state may regulate its data practices, its cybersecurity, its energy use. But regulating what it says is regulating speech.
True also that the state may regulate conduct. It could, for instance, forbid using an LLM to engage in racially discriminatory hiring (though that’s already illegal). But what it cannot do is dictate what goes into, or comes out of, a model. As the Supreme Court recently confirmed, “speech does not become conduct just because the State may call it that.” What LLMs produce is speech in the most literal sense. Words strung together. Text is expressive, regardless of source.
People grasp the expressive nature of AI in a hurry when it starts saying things they don’t like. Last year, the Trump administration issued an executive order on “preventing woke AI in the federal government.” The order rails against DEI as a “pervasive and destructive” ideology. Because the order is aimed at government-deployed models, it might be constitutional. But imagine the same order directed at LLMs generally, and you see the problem. Say the president orders LLMs to agree that the 2020 election was stolen, or that climate change is a hoax, or that western culture is a pathology. It’s easy to grasp the expressive power of an LLM when you contemplate one built specifically to offend you.
Notice too how escalatory the it’s-not-speech-it’s-conduct game is. The left sees conduct, not speech, when the question is whether someone must build a website for a gay wedding. The right sees conduct, not speech, when the question is whether a social media platform must leave certain posts up. The Supreme Court has rejected this move in both directions. The proper reaction to that consistency is a sigh of relief, not a continued push for the courts to find conduct when you want the state to control something, and speech when you don’t. Clever arguments about how new technology is different invite laws that force the web designer to build the gay wedding site (or a website denouncing gay weddings). Creative theories about why digital platforms are quasi-public property open the way to social media being drowned in hate speech (or gagged by censors). Same goes for LLMs. We need nothing less than de-escalation in the culture-war arms race. If you don’t like the other side twisting First Amendment rules, that’s great. But you’ve also got to stop doing it yourself.