
Protecting Freedom of Expression Online

In this piece, Rachel Zuroff, BCL’16, compares online intermediary immunity laws in the context of social media. Drawing on the recent Capitol Hill riots, she highlights tensions between free speech and regulation of democratic discourse, and discusses policy reform opportunities in Canada.

Questions around freedom of expression are once again in the air. As concern rises around the Internet’s role in the spread of disinformation and intolerance, so too do worries about how to maintain digital spaces for the free and open exchange of ideas. Within this context, countries have begun to re-think how they regulate online speech, including through mechanisms such as online intermediary immunity, arguably one of the main principles that has allowed the Internet to flourish as vibrantly as it has.

What is online intermediary immunity?

Laws that enact online intermediary immunity provide Internet platforms (e.g., Facebook, Twitter, YouTube) with legal protections against liability for content generated by third-party users.

Simply put, if a user posts illegal content, the host (i.e., the intermediary) cannot be held liable. An intermediary is understood as any actor other than the content creator. This includes large platforms such as Twitter: if a user posts an incendiary call to violence, Twitter cannot be held liable for that post. It also holds for smaller platforms, such as a personal blog, where the blogger is protected from liability for comments left by readers. The same is true for the computer servers hosting the content.

These laws have multiple policy goals, ranging from promoting free expression and information access, to encouraging economic growth and technical innovation. But balancing these objectives against the risk of harm has proven complicated, as seen in debates about how to prevent online election disinformation campaigns, hate speech, and threats of violence.

There is also a growing public perception that large-scale Internet platforms need to be held accountable for the harms they enable. With the European Union reforming its major legislation on Internet regulation, ongoing debate in the United States over similar reforms, and the recent January 6 attack on Capitol Hill, it is a propitious time to examine how different jurisdictions implement online intermediary liability laws and what that means for ensuring that the Web continues to enable deliberative democracy and civic participation.

The United States

Traditionally, the United States has provided some of the most rigorous protections for online intermediaries under section 230 of the Communications Decency Act (CDA), which bars platforms from being treated as the “publisher or speaker” of third-party content and establishes that platforms moderating content in good faith maintain their immunity from liability. However, there are increasing calls on both the left and the right for this to change.

Republican Senator Josh Hawley of Missouri introduced two pieces of legislation, the Limiting Section 230 Immunity to Good Samaritans Act (2020) and the Ending Support for Internet Censorship Act (2019), to undercut the liability protections provided for in section 230 CDA. If passed, the Limiting Section 230 Immunity to Good Samaritans Act would limit liability protections to platforms that use value-neutral content moderation practices, meaning that content would have to be moderated with absolute neutrality, free from any set of values, to be protected. This is an unrealistic standard: all editorial decisions involve value-based choices, be it merely a question of how to sort content (e.g., chronologically, alphabetically) or the editor’s own personal interests and taste. The Ending Support for Internet Censorship Act similarly seeks to remove liability protections for platforms that curate political information, a vague standard that risks strongly discouraging platforms from hosting politically sensitive conversations and chilling free speech online.

The bipartisan Platform Accountability and Consumer Transparency (PACT) Act, introduced by Democratic Senator Brian Schatz of Hawaii and Republican Senator John Thune of South Dakota in 2020, would require platforms to disclose their content moderation practices, implement a user complaint system with an appeals process, and remove court-ordered illegal content within 24 hours. While a step in the right direction towards greater platform transparency, PACT could still endanger free speech on the Internet: it might motivate platforms to remove any content that could be found illegal rather than risk the costs of litigation, taking down legitimate speech out of an abundance of caution. PACT would also entrench the already overwhelming power and influence of the largest platforms, such as Facebook and Google, by imposing onerous obligations that small-to-medium-sized platforms might find difficult to meet.

During his presidential campaign, Joe Biden even called for the outright revocation of section 230 CDA, with the goal of holding large platforms more accountable for the spread of disinformation and extremism. This remains a worrisome position and something that President Biden should reconsider, given the importance of section 230 CDA for prohibiting online censorship and allowing the Internet to flourish as an arena for public debate.

Canada

Questions around how to ensure the Internet remains a viable space for freedom of expression are particularly important in Canada, which does not currently have domestic statutory measures limiting the civil liability of online intermediaries. Although proposed with the laudable goals of combating disinformation, harassment, and the spread of hate, legislation that increases restrictions on freedom of speech, such as the reforms described above, should not be adopted in Canada. These types of measures risk incentivizing platforms to actively engage in censorship due to the prohibitive costs associated with the nearly impossible feat of preventing all objectionable content, especially for smaller providers. Instead, what is needed is national and international legislation that balances protecting users against harm with safeguarding their right to freedom of expression.

One possible model for Canada can be found in the newly signed free trade agreement between Canada, the United States, and Mexico, known as the United States-Mexico-Canada Agreement (USMCA). Article 19.17 USMCA mirrors section 230 CDA by shielding online platforms from liability relating to content produced by third-party users, but a difference in wording [1] suggests that under USMCA, individuals who have been harmed by online speech may be able to obtain non-monetary equitable remedies, such as restraining orders and injunctions.

It remains to be seen how courts will interpret the provision, but the text leaves room to allow platforms to continue to enjoy immunity from liability, while being required to take action against harmful content pursuant to a court order, such as taking down the objectionable material. Under this interpretation, platforms would be free to take down or leave up content based on their own terms of service, until ordered otherwise by a court. This would leave ultimate decision-making with courts and avoid incentivizing platforms to overzealously take down content out of fear of monetary penalties.

USMCA thus appears to balance providing redress for harms with protecting online platforms from liability related to user-generated content, and provides a valuable starting point for legislators considering how to reform Canada鈥檚 domestic online intermediary liability laws.

Going forward

The Internet has proven itself to be a phenomenally transformative tool for human expression, community building, and knowledge dissemination. That power, however, can also be used for the creation, spread, and amplification of hateful, anti-democratic groups and ideas.

Countries are now wrestling with how to balance the importance of freedom of expression with the importance of protecting vulnerable groups and democracy itself. Decisions taken today on how to regulate online intermediary liability will play a crucial role in determining whether the Web remains a place for the free and open exchange of ideas, or becomes a chilled and stagnant desert.

Although I remain sympathetic to the legitimate concerns that Internet platforms do too little to prevent their own misuse, I fear that removing online intermediary liability protections will result in the same platforms having too much power and incentive to monitor and censor speech, something that risks being equally harmful.

There are other possible ways forward. We could take the roadmap offered by article 19.17 USMCA. We could prioritize prosecuting individuals for unlawful behaviour on the web, such as peddling libel, threatening bodily violence, or fomenting sedition. Ultimately, we need nuanced solutions that balance empowering freedom of expression with protecting individuals against harm. Only then can the Internet remain a place that fosters deliberative democracy and civic participation.


[1] CDA 230(c) provides that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” USMCA 19.17.2 instead provides that “No Party shall adopt or maintain measures that treat a supplier or user of an interactive computer service as an information content provider in determining liability [emphasis added] for harms related to information stored, processed, transmitted, distributed, or made available by the service, except to the extent the supplier or user has, in whole or in part, created or developed the information.”


About the writer

After graduation, Rachel Zuroff (BCL/LLB’16) went on to work in human rights law at the United Nations Food and Agriculture Organization in Rome, and at the International Court of Justice in The Hague.

She resides in Montreal, where she continues to pursue her interests in human rights and legal pluralism.
