Are Social Media Companies Going to Lose Legal Protections?

By Bruce Harpham Thursday, October 22, 2020

The reputation of social media companies, especially the largest firms, has sunk to a new low in 2020. The US government is starting to consider whether these large firms should continue to receive long-standing legal protection.

Ajit Pai, chairman of the Federal Communications Commission, has announced plans to review Section 230 of the Communications Decency Act. Ultimately, this decision can be traced back to President Trump’s Executive Order on Preventing Online Censorship from May 2020. Since then, there have been disputes over whether it is permissible to reinterpret Section 230. Nonetheless, the proposed review has created uncertainty for Internet firms, which might have to change their operations dramatically if Section 230 were interpreted significantly differently.

Why Does Section 230 Matter to Internet Companies So Much?

Section 230 of the 1996 Communications Decency Act is generally considered one of the most critical safeguards for Internet companies. The key provision of the law states:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

In practical terms, this provision means that Internet companies are usually not held liable for content that users post on their platforms. The underlying idea dates back to telephone regulation: companies like AT&T were not held liable for the content of telephone conversations because making them liable would have required a massive level of surveillance.

Twitter and Facebook’s Legal Woes Under Section 230

Twitter appears poised to suffer a great deal from the loss of Section 230 protections. A formal end to liability protection, in and of itself, would not harm the company. However, the Trump administration has repeatedly indicated displeasure at Twitter’s efforts to manage content on its platform. It is reasonable to assume that government lawyers could be instructed to take action against Twitter if Section 230 protection is removed or weakened.

Even with Section 230 protection, Facebook is already facing considerable legal challenges. Early in 2020, the company paid $550 million to settle a privacy lawsuit. In July 2020, Facebook agreed to pay over $600 million for violating an Illinois privacy law. While substantial, these settlements are manageable for a company that earned over $18 billion in 2019.

Without Section 230 liability protection, Facebook might face a crippling series of lawsuits related to user content on its platform. In a bid to minimize such a blow, the company has recently stepped up its self-policing efforts.

On October 12, 2020, Facebook announced that it would remove Holocaust denial content from its platform. While a positive move, moderating such content is an area that other tech firms have struggled with for years.

As recently as 2017, Amazon faced backlash for offering Holocaust denial books for sale on its platform. Given the significance of user-generated content on Amazon, including millions of customer reviews, the end of Section 230 protections might hurt Amazon’s business model as well.

The Impact of Losing Section 230 Protection Could Be Substantial

Social media companies generally take a light-touch approach to managing content on their platforms. The New Yorker recently reported that Facebook’s efforts at content moderation have largely been watered down over the past few years.

Without liability protection for user content, social media companies would have to completely reinvent their business models. For instance, they might decide to automatically remove content from public view once it is reported as objectionable. Such rapid content removal could discourage users from using the platform, and if users leave in large numbers, advertising revenue may decline.
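As a rough illustration of how aggressive such a policy could be, here is a minimal sketch of report-triggered removal in Python. Everything here is hypothetical, including the Post structure, the report_post function, and the one-report threshold; it is not any platform’s actual moderation system.

```python
from dataclasses import dataclass

# Hypothetical threshold: hide a post as soon as it receives this many reports.
# A real platform would likely weigh reporter credibility, content category,
# and appeals before pulling content from view.
REPORT_THRESHOLD = 1

@dataclass
class Post:
    post_id: str
    body: str
    reports: int = 0
    hidden: bool = False

def report_post(post: Post) -> None:
    """Record a user report and hide the post once the threshold is reached."""
    post.reports += 1
    if post.reports >= REPORT_THRESHOLD and not post.hidden:
        post.hidden = True  # removed from public view pending human review

# Under this policy, a single report is enough to pull content offline.
post = Post(post_id="42", body="Example user content")
report_post(post)
print(post.hidden)  # True
```

The sketch makes the trade-off visible: a threshold this low minimizes legal exposure, but it also invites abuse of the reporting system, which is exactly the kind of experience that could drive users away.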

Reduced revenue is just one side of the coin. On the expense side, proactive content management could require a vast expansion of staff to review and delete content. A different approach would involve investing in more sophisticated technology to manage content. In either scenario, the added costs would be significant. In 2017, Facebook announced that it planned to hire 1,000 advertising moderation staff; similar numbers may be required to monitor user-generated content.

The End of Social Media Self-Policing

For years, social media companies have used their terms of service and community standards to set the rules for their platforms. For example, Twitter’s “Twitter Rules” list several types of forbidden content, including a rule that “You may not use Twitter’s services for the purpose of manipulating or interfering in elections or other civic processes,” along with prohibitions against hate speech.

If statements from the FCC are to be believed, social media’s time as a specially protected platform may be over. “But [social media firms] do not have a First Amendment right to a special immunity denied to other media outlets, such as newspapers and broadcasters,” said the FCC Chairman on October 15.

About the Author



Bruce Harpham is an author and marketing consultant based in Canada. His first book, “Project Managers At Work,” shared real-world success lessons from NASA, Google, and other organizations. His articles have been published by CIO.com, InfoWorld, Canadian Business, and other publications. Visit BruceHarpham.com for articles, interviews with tech leaders, and updates on future books.
