The NY Post recently published a bombshell story detailing a web of emails and transactions allegedly involving Hunter Biden, Joe Biden, China, Ukraine, and others. When the paper tried to share the story on Twitter and Facebook, the tech giants blocked it from doing so and prevented others from disseminating the story. In making that decision, the tech giants likely relied on 47 U.S.C. Section 230. However, given the impact and the arbitrary nature of such decisions, there are compelling reasons to revise Section 230 so as to prevent abuse of its very provisions.
The law, part of the Communications Decency Act, serves to shield companies like Twitter and Facebook from liability stemming from information published by others. Specifically, section (c) of the law states:
(c) Protection for “Good Samaritan” blocking and screening of offensive material
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of-
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
In other words, under this law, a provider of an interactive computer service, such as Twitter or Facebook, is not treated as the publisher of information provided by another and cannot be held liable if it, in good faith, restricts access to or the availability of material that the provider deems obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.
Clearly, the language of this law is quite broad and gives providers significant leeway and discretion. For example, the term “otherwise objectionable” is broad and subject to interpretation, as is the term “good faith.” In the case involving the NY Post, both Twitter and Facebook decided to ban the publication and dissemination of the story involving Hunter and Joe Biden. The story was damaging to presidential candidate Joe Biden and his campaign, and some viewed the tech giants’ decision to block it as election interference, alleging that their content-moderation decisions amounted to in-kind campaign contributions to former Vice President Joe Biden.
The trouble stems from the apparent free rein this law gives these social media giants. Because the law, in its current form, shields them from civil liability, nothing compels such companies to change their methods. In essence, they are free to selectively ban whatever content they wish without any significant risk. For all intents and purposes, the law, in its current form, seemingly allows the CEOs of these social media companies to use its very terms in an inappropriate manner and/or for improper purposes.
Just recently, the Supreme Court declined to review the scope of Section 230 in Malwarebytes, Inc. v. Enigma Software Group USA, LLC. While the Court decided not to hear the case, Justice Clarence Thomas issued a statement that was quite critical of the law. As reported by Lawfare:
Justice Clarence Thomas released a statement agreeing with the court’s decision to not hear the Section 230 case, known as MalwareBytes Inc. v. Enigma Software Group USA, LLC. But he argued that courts have interpreted the provision to confer far more immunity to online platforms than the law requires, and therefore that the Supreme Court should reexamine the issue when a better case presents itself.
To Thomas, the real scope of Section 230 is quite modest: Section 230(c)(2) holds that internet platforms cannot be held liable for good-faith efforts to remove or restrict illegal content from third parties, and Section 230(c)(1) means that they’re also not liable for illegal content unknowingly left up on their sites. But Thomas gives several examples of what he sees as lower courts unduly expanding the provisions to confer “sweeping protection to Internet platforms.” He claims that courts have wrongly given immunity to companies that knowingly distributed illegal content; that courts have given companies immunity from liability for their own published content, even though the law covers third-party content; that by construing Section 230 to protect any form of content moderation, courts have encouraged racially discriminatory practices and that judges have given internet platforms the benefit of the doubt even when platforms are complicit in human trafficking and terrorism. Writing that “other examples abound” of lower courts finding broad immunity for platforms in the “policy” and “purpose” of Section 230, Thomas concludes that the Supreme Court should restore the law’s narrow scope in an appropriate case.
Given the breadth of the law, some in Congress have suggested that Section 230 be revised, rewritten, or eliminated altogether. Revising or rewriting the law should garner bipartisan support, as both major parties should oppose the seemingly arbitrary, politically motivated, and damaging abuses of this law by the CEOs of some of the largest social media companies. The decision to block and/or prohibit the publication of the recent NY Post story is a tragic example of such abuse.
However, the decision to eliminate the law should be more heavily scrutinized, as the risks of doing so could be more significant. If Section 230 were eliminated, providers like Twitter and Facebook would lose the protection from civil liability that they currently enjoy under the law. This could force the CEOs of such companies to seriously consider whether to ban or block certain material on the grounds that it could be deemed improper and/or illegal and thereby trigger liability on their part. Were this to happen, providers would likely limit severely the type of information they allowed. In other words, if providers were responsible for the information submitted by others, they could, in all likelihood, be forced to “police” their users and to prohibit or ban a great deal of information for fear of being sued.
Herein lies the problem. On the one hand, social media giants should not be permitted to utilize this law to further an improper or illegal motive without the risk of liability. For example, Twitter and Facebook should not be permitted to block their users from publishing and disseminating a story merely because the story could negatively impact their preferred political candidate. Whether such conduct constitutes election interference will likely be determined in due course. On the other hand, care must be taken so that the law is not written in such a way that it leads providers to severely limit the type of permissible content and the free exchange of ideas.
Notwithstanding these concerns, it is clear that something must be done in light of the blatant act of censorship against the NY Post (and others) by Twitter and Facebook. It is unclear whether the change should come from the Supreme Court, which could narrow the scope of the law and the protections it affords to social media giants, or from Congress, which could revise or rewrite it. Regardless, the law must be changed. By unilaterally deciding to block the recent NY Post story, Twitter and Facebook significantly escalated their acts of censorship. Such decisions can have very serious consequences and must be addressed as soon as possible.
This article is intended for informational purposes only. It is not intended to solicit business or to provide legal advice. You should not take, or refrain from taking, any legal action based upon the information contained in this article without first seeking professional counsel.
Mr. Hakim is a writer, commentator and a practicing attorney. His articles have been published in The Washington Examiner, The Daily Caller, The Federalist, The Epoch Times, The Western Journal, American Thinker and other online publications.