A few weeks ago, we talked about the antitrust hearing featuring Alphabet, Amazon, Apple, and Facebook in 🤖 Big Tech and Antitrust: The Capitalism Paradox. Along the same lines, this past Thursday the Senate Commerce Committee voted to move forward with subpoenas for CEOs across social internet companies. Again we will see Facebook CEO Mark Zuckerberg and Alphabet CEO Sundar Pichai take the virtual stand, and this time Twitter CEO Jack Dorsey will join them. Instead of antitrust, however, the power of these companies will be questioned through the lens of Section 230.
Section 230 is a hotly debated topic in the political and technology spheres. For those not familiar, Section 230 is the law generally regarded as a major catalyst for the spread of the social internet through platforms created by companies like Facebook, Google, and Twitter. The law protects these companies by removing liability for what their users post on their platforms. If an indecent or violent video is posted to one of these platforms, the platform is not liable for any damages to its users, like you and me; we have no right to sue the platform over content that others post. While that may seem like a get-out-of-jail-free card, it's a massive reason why the internet exists as we know it today.
Outside of these social media platforms, Section 230 also protects ISPs (Internet Service Providers) like Comcast, blog platforms like Tumblr, and any other site that allows content to be posted freely by its users. It is a foundational law, put in place to protect platforms that provide tools to users exercising their right to free speech.
Like the last antitrust hearing, this hearing on Section 230 will put the leaders of powerful platform-based technology companies on stage to answer question after question from members of the US Senate. What's different this time, though, is that what's in question isn't just the power of these companies, but the ability of every other website, company, or tech product to provide content tools for consumers with limited moderation. The CEOs being subpoenaed lead large companies with many layers of platform power, but they represent only a small sliver of the rest of the internet.
While I'm cynical that a spectacle like this hearing will result in much more than a partisan debate about how these companies wield power in the fight against misinformation, I fervently believe that the very questioning of Section 230 is yet another example of a lack of understanding of the foundations of the internet and how the law affects far more than just the Big Tech giants. It's a response to recent content moderation practices by Facebook and Twitter that have tripped a wire of political and regulatory scrutiny over the power of these platforms, dragging the spirit of the law, and its impact on the broader internet, along with it.
Taking a look at Section 230, the problem for regulators sits in these clauses:
(1) Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
This language is particularly broad. As we discussed above, it shields more than just social networks and search engines. It shields foundational parts of the internet, like the ISPs mentioned earlier, a key piece of internet infrastructure working to get the internet into your home and onto your phone.
Imagine that a friend sends you an email. The bits and bytes of that email travel through many layers of computation and infrastructure. When you type https://mail.google.com and hit enter, your ISP's job is to get that information to your device. The wires and airwaves that moved the email to you are protected under Section 230; if they weren't, your ISP might be liable if something in that email were cause for legal concern.
Beyond social networks and ISPs, there are many other layers of the internet that we all take for granted that could be impacted: CDNs (Content Delivery Networks), cloud providers like AWS that host content, and SaaS companies that provide software to businesses around the globe. Each of these layers would need to build extra legal review and content moderation into its platform, while also beefing up its already massive team of lawyers to protect against any and all claims from billions of internet users.
Many folks in Silicon Valley use the term scale to describe unlimited growth with near-zero marginal cost for each additional unit. In concrete terms, when you build Facebook, you build it once, and each additional user costs the company close to nothing, outside of traditional online marketing costs or other forms of customer acquisition. It's this ability to scale that makes these companies so profitable so quickly. If the innards of Section 230 are dismantled, the ability of any platform to scale, including one that may someday compete with Facebook, is greatly hampered.
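The economics of scale described above can be sketched with some back-of-the-envelope arithmetic. The figures below are entirely hypothetical, chosen only to show the shape of the curve: as users grow, the one-time fixed cost amortizes away and the average cost per user approaches the (tiny) marginal cost.

```python
# Back-of-the-envelope sketch of near-zero marginal cost at scale.
# All figures are hypothetical, purely for illustration.

FIXED_COST = 50_000_000   # build the platform once (engineering, infrastructure)
MARGINAL_COST = 0.05      # per-user cost (bandwidth, storage, support)

def avg_cost_per_user(users: int) -> float:
    """Average cost per user: fixed cost amortizes toward the marginal cost."""
    return (FIXED_COST + MARGINAL_COST * users) / users

for users in (10_000, 1_000_000, 100_000_000):
    print(f"{users:>11,} users -> ${avg_cost_per_user(users):,.2f} per user")
# At 10,000 users the average cost is dominated by the fixed build cost;
# at 100,000,000 users it collapses toward the five-cent marginal cost.
```

New liability at every layer would, in effect, attach a per-user moderation and legal cost to the marginal term, which is precisely what breaks this curve.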
Breaking down recommended changes
Setting aside the broad impact of dismantling Section 230, and returning to the problem regulators have with these platforms, their motivations become clearer when you look at the Justice Department's recommendations issued in July. These four recommendations seek to:
- Incentivize Online Platforms to Address Illicit Content
- Promote Open Discourse and Greater Transparency
- Clarify Federal Government Enforcement Capabilities
- Promote Competition
Let’s look at each, starting with Incentivize Online Platforms to Address Illicit Content.
The first category of recommendations is aimed at incentivizing platforms to address the growing amount of illicit content online, while preserving the core of Section 230's immunity for defamation claims. These reforms include a carve-out for bad actors who purposefully facilitate or solicit content that violates federal criminal law, or who are willfully blind to criminal content on their own services. Additionally, the department recommends a case-specific carve-out where a platform has actual knowledge that content violates federal criminal law and does not act on it within a reasonable time, or where a platform was provided with a court judgment that content is unlawful and does not take appropriate action.
This I can get behind. Each platform should make a concerted effort to remove illicit content in the interest of consumer safety. How this is implemented matters, though. Because moderation is so complex, the severity of punishment under a revised law should vary case by case; that complexity is exactly why Section 230 took a more blanket approach in the first place.
Finding and taking down content is hard. No model is perfect, there are ways around every filter, and human moderators don't scale well, even though Facebook alone employs thousands of people to do just that. Increasing liability here will force these companies to take different and more concerted approaches to the problem.
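To see why automated takedowns are a game of whack-a-mole, consider a minimal sketch of a keyword-based filter. The banned terms and example posts here are made up for illustration; the point is that trivial obfuscation slips right past exact matching, which is why real moderation requires ever-more-sophisticated models plus human review.

```python
# Minimal sketch of keyword-based moderation, and why it is easy to evade.
# The banned terms and example posts are hypothetical.

BANNED_TERMS = {"scamcoin", "hatespeech"}

def flag_post(text: str) -> bool:
    """Flag a post if any banned term appears verbatim in the lowercased text."""
    lowered = text.lower()
    return any(term in lowered for term in BANNED_TERMS)

print(flag_post("Buy ScamCoin now!"))    # exact match is caught
print(flag_post("Buy Sc4mC0in now!"))    # trivial leetspeak evades the filter
print(flag_post("Buy S c a m C o i n")) # so does simple spacing
```

Each evasion patched into the filter invites the next one, so perfect enforcement at the scale of billions of posts is far harder than legislation tends to assume.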
A second category of proposed reforms is intended to clarify the text and revive the original purpose of the statute in order to promote free and open discourse online and encourage greater transparency between platforms and users. One of these recommended reforms is to provide a statutory definition of “good faith” to clarify its original purpose. The new statutory definition would limit immunity for content moderation decisions to those done in accordance with plain and particular terms of service and consistent with public representations. These measures would encourage platforms to be more transparent and accountable to their users.
Here things start to get murky. Technology and content creation move so quickly that it's hard to keep up with how to moderate content at all, let alone to be extremely specific, in a game of whack-a-mole, about which types of content must be spelled out in the terms of service before these large companies can take action.
If, every time a company needs to take something down to protect the end consumer, it must make sure the takedown is explicitly and specifically covered by its terms of service, we may find ourselves caught between the first category above and this second proposed change.
The third category of recommendations would increase the ability of the government to protect citizens from unlawful conduct, by making it clear that Section 230 does not apply to civil enforcement actions brought by the federal government.
This sounds like an open play for law enforcement to completely sidestep any Section 230-protected action and force further oversight onto technology companies. That concerns me, given the latest news on misinformation being spread by government officials.
If this were taken up, it would mean these companies must allow open and free speech even in the face of misinformation. Misinformation that, depending on whom you ask, could fall under the first category of illicit content, effectively negating consumer protection when the items are posted by a government official.
A fourth category of reform is to make clear that federal antitrust claims are not, and were never intended to be, covered by Section 230 immunity. Over time, the avenues for engaging in both online commerce and speech have concentrated in the hands of a few key players. It makes little sense to enable large online platforms (particularly dominant ones) to invoke Section 230 immunity in antitrust cases, where liability is based on harm to competition, not on third-party speech.
Perhaps I am misunderstanding this last point, but while the other recommendations call out specific laws that sit squarely within the spirit of Section 230, bringing antitrust into the mix doesn't follow that same logic. Again, this feels like the Senate trying to shore up the antitrust cases and cover any open holes, even where it doesn't make sense.
We don't know whether the questions presented at the hearing on October 28th will reflect these recommendations verbatim, but if they are a signal of what's coming, the Senators badly need a broader perspective on how these companies can be held liable for their content moderation practices without unduly harming their ability to grow.
If the changes proposed to Section 230 are truly in the spirit of consumer protections, then holding these companies accountable to better content moderation is a great goal. If the spirit is to battle these platforms on misinformation campaigns and remove protections for antitrust, then I think it’s important to think back to the spirit of Section 230 and adjust course. Who are we really trying to protect?
Moreover, it's important that there's a clear understanding of what Section 230 actually covers, and of how writing new legislation aimed at larger companies, like those subpoenaed to the hearing, may lead to unintended consequences for the rest of the internet. A slight tweak to the language of Section 230 can ripple into enormous new liability for foundational pieces of internet infrastructure, effectively diluting the ability of companies that have no knowledge of the content they carry to operate as they do today.