by Paul Alan Levy
The New York Times’ online edition carries a column by Stanley Fish, touting a book of essays by several law professors who, according to Fish, decry the ease with which offensive accusations and opinions can be published online and call for new limits on this freedom of expression. To hear Fish tell it, “The answer given by the authors in this volume involves the repeal or modification of Section 230 of the Communications Decency Act” coupled with a drastic curtailment of the protections for the right to speak anonymously online. Not having read the whole book yet, I can't vouch for the accuracy of Fish’s summary.
According to Fish, “Saul Levmore (Nussbaum’s co-editor) suggests that immunity might be conditioned on the willingness of a provider either to take down a message after notice of its falsity or defamatory character has been given, or 'to enforce non-anonymity' and thus open the way for an injured party to seek redress. The law, writes Anupam Chander, 'should allow the individual to find information to lead her to the person who committed the privacy invasion.'”
If Fish accurately portrays their essays as using these theories to curtail section 230, then Saul Levmore and Anupam Chander, and the other authors to whom Fish attributes the desire to wipe out section 230, haven’t taken a careful look at what the law is now. Fish plainly hasn't.
Current Law Allows Subpoenas to Identify Abusive Speakers
Under current law, if actionable expression is communicated online, the victim of the statutory, tort or contract violation can sue the author for that expression, but can no more sue the host of the web site, or the provider of the email service, than he could sue the postal service for carrying a defamatory book or newspaper, or sue a library for lending such a book out. Moreover, even if the name of the author is not provided with the expression, generally speaking the host of a web site that contains offending content (or an email provider) maintains at least for a period of time the data that is needed to identify the author.
That information can be subpoenaed from the host. And such a subpoena can be enforced by anybody who has a substantial claim of defamation or other actionable content. That is, they will succeed in the subpoena proceeding so long as they can identify the allegedly defamatory words about them, the words are actionable statements of fact and not just opinions, they have evidence of falsity and of damage, and there is no other reason to withhold identification, such as a real risk of extra-judicial retaliation. That is the law that we have managed to create in state after state since the groundbreaking “Dendrite” decision in New Jersey ten years ago (adopted just this week by an appellate court in Pennsylvania).
Sometimes this means finding counsel in the jurisdiction where the host is located (for example, Google and Yahoo! will respond to subpoenas only in California or other states where they have offices). One "highlighted" commenter on Fish's post complained about having to pursue his subpoena to Google in California court. The complaint is nonsense. It is not hard to find local counsel to pursue a subpoena in Santa Clara County, California, and to sign a response to a motion to quash prepared by lead counsel elsewhere, if such a motion is filed (note that the Doe defendant has to find a lawyer in California, too). Prosecuting a libel case generally entails considerable attorney time and therefore expense. The cost of identifying the defendant at the outset of the litigation is a drop in the bucket compared to litigating one of these cases to judgment.
So, there is no need to reduce section 230 immunity to accomplish the objective of taking away anonymity when the target of expression has a valid legal claim to pursue.
The Cost of Eliminating Anonymity Whenever Speech Is Challenged
Moreover, the elimination of anonymity on demand – taking it away just because the target of the speech objects to it – could have a terrible chilling effect on much valuable speech that benefits society. If anonymity could be stripped too easily, at the behest of someone who lacks either a genuine case to pursue or the intention of pursuing litigation through trial if necessary, the result would be to deter much that is valuable.
Do people sometimes or even often say things anonymously because it spares them embarrassment that they ought to feel about saying vile or damaging things? Sure they do, but people also say things anonymously for a variety of socially desirable reasons. They may be blowing the whistle on misconduct in which the public has an interest, and not want to take the risk of obloquy in their specific communities, or of economic retaliation if they earn their living in a situation where they can easily be replaced and their supervisors (or their customers) do not respect diversity of opinion. Or they may want their views to be taken for what they are worth on the merits without being overestimated or discounted.
This latter reason for anonymity irks Fish, who argues that Justice Stevens went astray in his majority opinion in McIntyre v. Ohio Elections Commission by separating the content of speech from the identity of its author. Fish claims, “it is not true that a text’s meaning is the same whether or not its source is known,” because the identity of the author can help him assess the message. But the problem lies not with Justice Stevens’ reasoning; it is that Fish has chosen to overlook the Court’s treatment of this very issue elsewhere in its opinion. The First Amendment entitles the author to control the content of her expression, and to the extent that the identity of the author is relevant to the meaning, the decision whether to include that datum to permit evaluation is for the author to make — as the Stevens opinion states, “an author's decision to remain anonymous, like other decisions concerning omissions or additions to the content of a publication, is an aspect of the freedom of speech protected by the First Amendment.”
Is it crucial for Fish to know the identity of the author of every statement that he reads? Perhaps that is his view, but that is only one literary theory; other theorists would say that it is a fundamental error to try to judge a text according to what we know about the author. Fish is free to discount a statement because it is made anonymously. And, in fact, many readers will do exactly that. The author has a decision to make – should I take the risk that my perfectly valid comments will be discounted because of their anonymity? The First Amendment gives the reader the ability to make such choices, but not the right to insist that the author include the information. Indeed, in making assumptions about the harms caused by anonymous speech, it is not only Fish but the authors whose book he touts who seem to just assume that reputational or other harm will result despite the fact that any sensible reader may discount anonymous criticisms somewhat for the very reason that they are anonymous.
The Cost of Denying Host Immunity After It Receives "Notice of Falsity"
As for the suggestion of taking away section 230 immunity on “notice of falsity,” there is another term for that proposal – the heckler’s veto. If the provider of a forum is potentially liable once it learns that someone is claiming falsity (that is, notice of falsity, not adjudication of falsity), then the easy, cheap way out is just to remove the statement. It is expensive not just to defend a given statement in court, but even to investigate the statement and evaluate the possibility of being held liable. The expense of adequately evaluating the risk of leaving the comment in place is much greater than the revenue the forum provider can possibly earn by showing ads to the Internet users looking at that page. So if the Levmore proposal (as summarized by Fish) is adopted, the consequence will be censorship by mere threat of litigation.
And section 230 does not just protect the individual or company that hosts a particular messaging facility. That host gets bandwidth by renting server space from a larger company, which in turn may get its hosting capacity from an even larger company, leading eventually to the Internet’s backbone providers. Which of these should be subject to liability on the Fish model? Especially in the intellectual property area, which is an exception to section 230's immunity — an unjustified exception perhaps — we have seen claimants with frivolous IP claims go up the ladder after the initial host refuses to take down challenged matter, threatening litigation against each in turn. So if section 230's protections were eliminated across the board, or for claims of defamation, or of bullying, or of pure vileness or vulgar language, then the heckler’s veto might have the chance to operate at each level of hosting. The speech would remain online only if every host up the line decided to risk the expense of evaluating the content at issue, the cost of defending litigation, and the possibility of being held liable. Is the price of that heckler’s veto worth paying?
By ignoring the good that section 230 does, just as they ignore the benefits that anonymity can provide, Fish and, apparently, the essayists he touts escape any need to weigh the harms that section 230 permits against the social benefits that it provides for the system of online free speech. In that regard, I wonder whether Fish and friends are overstating the harm caused by the hosting of vile statements.
Just How Much Harm Does Offensive Online Speech Cause — And What Incentives Do Hosts Have?
Are there, as the title of one of the essays has it, “cybercesspools”? Certainly there are, but who spends their time reading there? Who takes what is said there as gospel, or even takes it seriously? How much real harm is done by the content that can be found there? When my friends or potential customers see that some anonymous individual, using a pseudonym playing on the name of a cartoon character or a rap singer, has castigated me online, does it really affect my reputation or drive them away from my business? Yes, it may upset me that there are online locations where foul statements are made about me, but do I really care about the opinions of those that read those statements? Is the elimination of my hurt feelings worth the cost that would be paid by the system of online free speech through the adoption of Fish’s proposals? That is a question that Fish need not address because he does not acknowledge that there are such costs.
Moreover, the Fish line of argument tends to ignore the price that is paid by hosts that choose to do absolutely nothing about the nasty quality of expression that is placed on their sites — they are likely to lose visitors that they care about. Dan Gillmor’s response to Fish has this right — if "anonymous sleaze" is confined to the cesspools it will largely be ignored, and hosts make choices that determine whether their sites are such cesspools. Gillmor particularly recommends requiring registration and the selection of a unique pseudonym for posting, so that any given poster can become accountable for the persona created by posting on that specific forum. Over at Techdirt, Mike Masnick has a sensitive discussion of the impact of anonymous commenting on his site's community, and how he deals with it.
One nice example is given in the comments posted to the Fish article. The New York Times devotes staff time to moderating comments before they appear on its blogs; the Washington Post does not (although it uses software to filter out nasty language and repeat posts from the same user). The result is that comments at the Times tend to be much more thoughtful – and hence worth reading – while comments on the Post’s political blogs tend to be much more partisan and much more full of rant. So the result is that many of us are likely to read the Post's articles and columns but ignore the comments. The Post is apparently aware of this problem: according to its ombudsman, the Post is developing a new system whereby comments will be tiered, so that commenters who misbehave (by some mechanical standard) and do not identify themselves will be confined to a tier that readers must deliberately choose to access.
Many hosts — certainly the ones that we have tended to represent at Public Citizen — regularly vet their sites for foul language, for spam, and for ad hominem attacks that are unrelated to the topic under discussion. Hosts that do not monitor statements before they are posted, and that do not take it upon themselves to decide whether charges and counter-charges using their facilities are true or false, nevertheless commonly make use of “abuse” buttons that readers can click to call attention to a particular post that goes beyond the community guidelines to which the hosts choose to hold their users. If they don’t, they know that the ordinary consumers whom they hope to serve will find it too uncomfortable to spend time on their sites, and their sites will lose social utility (and, perhaps more cynically, they know they will lose page views that help their ad revenue). Fish, by contrast, just assumes that more controversy and more vile comments bring in more viewers.
These are all the sorts of editorial choices that Congress decided to protect, and indeed to encourage, in adopting section 230. Because Fish does not truly engage with the considerations at issue, his column does not provide even a useful starting point for a debate about whether section 230 needs a change.