Policing the platforms; pulling at the thread

To moderate or not to moderate? That is, and always will be, the dilemma of online services and publishers. It's a question that has hung over the Internet from the very beginning, long before the current spate of hand-wringing and sophistry over InfoWars, fake news, and the role platform providers and publishers should play in wielding their ban hammers, whether over propagandists and tin-foil-turban-wearing conspiracy nuts or over sensitive snowflakes and their Orwellian obsession with pronouns and trigger words.

Facebook and YouTube have banned Alex Jones from their domains. Twitter still tolerates his malignant theories about the Sandy Hook shootings. Now Automattic, the commercial parent of WordPress.com, is getting guff for letting bloggers on its platform bloviate on whatever crosses their minds. Is the burden on the platforms to police the words, sounds, and pictures created, distributed, shared, and consumed on their premises? Or is it on the authors and creators? Delete one malignant comment and the precedent is set that all comments will be reviewed and held to some standard of decency. Ignore all comments and content, and the trolls will infest the place like powdery mildew on a strawberry patch, spreading until the entire garden is rotten.

Our current crisis over whether to edit and restrict online expression goes back to the mid-1990s and two contradictory court rulings from the dawn of the commercial Internet. Both cases involved precursors to the World Wide Web, two of the three bastions of the Walled Garden model of online services — CompuServe and Prodigy (AOL being the third).

At the heart of the matter is this simple question:  is an online service provider more like a bookstore or a book publisher? The owner of a bookstore stocks books, sorts them by categories, and sticks them on shelves. The bookstore’s management doesn’t review every word in every book, and therefore is not liable for a defamation claim made against them by someone who feels wronged by words in one of those books. The publisher of the book does review every word it publishes, copyediting, proofreading, and even fact-checking those words for errors, omissions, and libelous statements.

Another way to look at it is to consider the old telephone system of wires hanging from poles connected by switches routing calls from one person to another. If I commit a crime when I use the phone to threaten you, or spread lies about you, the telephone company isn't liable because of a concept known as common carriage. Essentially, common carriage means that as long as the phone company allows every call to be completed, without filtering out hate speech or obvious calumnies, then the phone company is merely a pipe that treats everything that flows through it equally.

The distributor vs. publisher delineation was at the heart of Cubby v. CompuServe, a 1991 defamation suit brought by Cubby Inc. against CompuServe. Not being a lawyer, I'll let Wikipedia summarize the facts of the matter:

“Cubby alleged that CompuServe was the publisher of the defamatory statements. A “publisher,” in the context of defamation law, is one who publishes or otherwise republishes content. According to federal law and in agreement with New York state law, a publisher who repeats or republishes defamatory content has the same liability as the original publisher of the content.

“CompuServe maintained that it was merely a distributor of the published statements. Distributors of defamatory content can only be held liable if they knew, or had reason to know, of the defamatory nature of the content. The court held that “CompuServe has no more editorial control over such a publication [as Rumorville] than does a public library, book store, or newsstand, and it would be no more feasible for CompuServe to examine every publication it carries for potentially defamatory statements than it would be for any other distributor to do so.” 

The court ruled in favor of CompuServe and dismissed Cubby Inc.’s claim: “Cubby v. CompuServe treated internet intermediaries lacking editorial involvement as distributors, rather than publishers, in the context of defamation law. This decision removed any legal incentive for intermediaries to monitor or screen the content published on their domains.”

This was interpreted by the first online publishers to mean that moderation of user-generated content on their platforms was not a good idea, so many publishers — particularly newspapers when they first went online — allowed readers to comment on stories with little or no oversight other than some obscenity filters to strike out the obvious slurs and swears. The belief back then was that if an online publisher moderated any of the content contributed to its platform, then it had a responsibility to do so equally across all content, and that selective moderation was worse than no moderation at all.

A few years after Cubby v. CompuServe, the Long Island stock firm of Stratton Oakmont (founded by the "Wolf of Wall Street," Jordan Belfort) sued Prodigy for comments made in a stock forum by a user who accused the firm of committing criminal acts in connection with an initial public offering of a stock the firm had underwritten. This time the court ruled against the online service provider, finding that its use of board moderators and obscenity-filtering software, and its posting of "content guidelines," meant it was indeed making a "…conscious choice, to gain the benefits of editorial control, [and] has opened it up to a greater liability than CompuServe and other computer networks that make no such choice."[1]

Online publishers — who by that time were publishing on the World Wide Web from their own sites — simply stopped allowing readers to comment on stories, and in some cases discontinued the use of forums. The reason, I suspect, was less about the law and more about the cesspools that were forming below every online newspaper article. I remember the Cape Cod Times (my local paper, and where I began as a cub reporter in 1980) had a very lively comments section, where trolls would provide some spicy humor by provoking spectacular meltdowns from other commentators. Eventually the Times stopped story comments altogether.

The Communications Decency Act of 1996 attempted to clarify the emerging law by differentiating between a service provider and a content provider, indemnifying the former against claims arising from what its users publish and protecting its "good Samaritan" efforts to filter out bad content. But what a slippery slope remained. Just as Congress tried to address the legal ambiguities, social media and blogging emerged, and before long the ability to publish and distribute moved from the online walled gardens and the first professional publishers' dot-coms to a chaotic environment where anyone could easily open an anonymous blog and spew whatever crap they wanted to.

The tech industry’s reaction was to back off and let the people have their unbridled say. The cherished First Amendment principles of free speech, combined with the Internet pioneers’ hippy-dippy, Stewart Brand “information wants to be free” ethos, made it easy for Facebook, WordPress.com, Blogger, Livejournal, and Twitter to step away from their responsibilities as editors and gatekeepers and let the trolling happen unabated.

I suspect the truth behind the decision of the social networks to let their platforms turn into soapboxes for fake news, conspiracy wingnuts, and vicious troll wars like Gamergate wasn't some Jeffersonian principle of free speech, but business models that couldn't support the cost of the army of moderators needed to keep things civil and honest. Moderation is expensive: big publishers like the New York Times employ large staffs who do nothing all day but review the comments submitted to the footers of their articles, and Facebook runs moderation farms in Ireland. One can only imagine the volume of submissions, complaints, DMCA takedown notices, subpoenas, and other content-related chores that flow into a big online platform today.

Free speech is perhaps the most enduring challenge in constitutional law. It was written into the Constitution in an era when dissidents like Thomas Paine published pamphlets anonymously, a time when a teenaged Benjamin Franklin could twist the tails of Boston's colonial governors by writing scandalous articles in his brother's newspaper, The New-England Courant, under the pen name of Silence Dogood.

Today, in light of how adroitly Russian propagandists gamed our social networks and tainted a contentious election (elections being sacred in a democracy); in light of the undeniable horrors of the Holocaust, school shootings, human trafficking, and child abuse; in light of the chaos and information overload, the granting of powerful publishing tools to the masses, the fragmentation of shared media, the rise of identity politics, and the media's amplification of controversy into scandal; it's no surprise that Silicon Valley hears the wolf at its door and is taking belated action to clean house. But the chain of ethical dominoes set off when Facebook and YouTube sent Alex Jones packing is falling faster now, and it should put us all on notice that our right to express ourselves is under threat. Unless you take the position the ACLU took when it defended the right of American Nazis to march in Skokie, Illinois – that all people deserve the rights to free speech and free assembly – then you're selectively imposing a standard of decency that verges on censorship.

The activist tactic of boycotting objectionable voices by pressuring their advertisers is one way to silence them. Bereaved parents of murdered first graders have the right to sue Mr. Jones for defaming them with his bizarre theories. All of us have the right to defend our intellectual property and demand infringers stop appropriating it. But what about our right to anonymity? Stewart Brand and the WELL — that legendary, incandescent early online community — adopted the simple philosophy of "You own your words." But when those words are masked behind fake accounts, when their source is concealed and VPNed, when those words assault people and result in doxing or swatting or death threats, are they still free speech, or something malignant that abuses the good will of the commons, where all of us are entitled to our soapbox?

I think anonymity is a powerful tool for whistleblowers, social workers, and a variety of voices in peril of being clapped into chains by regimes determined to stamp out dissent. One can only imagine the outrage in colonial America when Messrs. Paine and Franklin were anonymously twisting the mad King's tail. One has to admire the courage of the samizdat underground press in the Soviet Union, the power of social media in the Arab Spring.

I believe free speech is a responsibility more than a right, and that anonymous speech — except in a few cases — is cowardice. I don't believe "AI" or the "blockchain" are the solution, but I do support Po.et and other efforts to stamp content with a verified signifier of its authenticity. Technology isn't the answer, and neither is the law: legislation and regulation are always chasing yesterday's problems and can't keep up with the new ones emerging thanks to Moore's Law and all the other underpinnings of technology-driven change. As someone on the wrong end of a British libel court's declaration that Forbes had to ensure an offending article about a Russian kleptocrat was not readable in the U.K., I can attest to how boneheaded the courts can be about the realities of managing content on the Internet. Governments tend to charge blindly into the fray with dumb laws like the EU's "right to be forgotten," which is censorship in the name of privacy. Just look at the Sisyphean idiocy of the Turks and their attempts to police online speech by making it a crime to criticize Erdogan or Mustafa Kemal Atatürk.

What I do know is that it's up to authors to protect their own work: to be prepared to self-host, self-publish, self-fact-check, and to defend that work against its critics and enemies. When the "Suede/Denim Thought Police" come knocking on your door, are you ready to go it alone, to own your words and defend them against some policy coming out of a Facebook conference room or a Washington, D.C. courtroom? Alex Jones may be entertaining, a total loon, a crazy person with a microphone, but it's on him now to take responsibility for how his stuff is distributed, and that means building an audience without relying on the big Borg-ish networks like Google, Microsoft, Amazon, Facebook, Instagram, and Twitter. Those platforms are bad, indifferent, privacy-eating monsters with no moral compass other than shareholder value. There's no freedom of speech to be found on a commercial platform, no assurance that their new armies of moderators will do anything more than shield their employers from the courtroom. Dries Buytaert has it right: the closing of the Open Web and the sudden shift to the big platforms is a threat that can only be thwarted by using tools and technology that are open, out of the control of corporations and governments, and that will protect an author's rights because the author cares enough to own their own words.

In 1722, when Benjamin Franklin’s brother James was jailed on libel charges for publishing Silence Dogood’s satirical letters, Franklin quoted Cato: “Without freedom of thought there can be no such thing as wisdom and no such thing as public liberty without freedom of speech.”