The Trump presidency brought on a fierce debate about content moderation on social media.
Democrats call on Internet giants to moderate their content more, citing the effect of misinformation on democracy. Republicans do the opposite, calling for less content moderation in the belief that the policies and their enforcement are biased toward the Left.
Some say that Section 230 of the Communications Decency Act, the legislation that allows user-generated content to thrive on the Internet, is outdated and should be amended. Others believe doing so would be the death of the Internet as we know it. And where does the First Amendment fit in?
It's a thorny problem with no good solutions, which left us concluding that we’re probably trying to solve the wrong problem.
—
Free Speech & Social Media Essentials:
Positions of Congress
- Left: More regulation because the current situation compromises democracy through the propagation of misinformation and disinformation
- Right: Less regulation because current content moderation policies are biased toward the Left
Legal Landscape
First Amendment
- The Free Speech Clause of the First Amendment provides that “Congress shall make no law . . . abridging the freedom of speech” and applies to the “State[s]” through the Fourteenth Amendment. Thus, the First Amendment, like other constitutional guarantees, generally applies only against government action
- This doesn't mean speech is never restricted; the level of protection varies: political speech is the most protected, followed by commercial speech, and then speech advocating violence
- Social media companies themselves have First Amendment rights that protect their moderation decisions from government regulation
Section 230
(c)(1): “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
- Implication >> Social media companies are not liable for content generated by users on the platforms
- Content providers are still legally obligated to remove some types of content, such as child sexual abuse material and other content that violates federal criminal law
(c)(2)(A): providers are also not liable for any “action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected”
- The terms "in good faith" and "otherwise objectionable" leaves a lot of room for legal interpretation.
- In practice, social media firms can largely moderate content however they want to.
Section 230 has already been amended. In 2018, FOSTA-SESTA (the Allow States and Victims to Fight Online Sex Trafficking Act / Stop Enabling Sex Traffickers Act) removed Section 230 immunity for material that violates federal and state sex trafficking laws.
Regulatory models to consider:
1) Public square: restrictions on speech should be the same as in the public square
- A “public function” test, under which the First Amendment will apply if a private entity exercises “powers traditionally exclusively reserved to the State.”
- More free speech at the user level and less regulation at the company level >> more misinformation and a more polluted information ecosystem
- Lower courts have uniformly concluded that the First Amendment does not prevent social media providers from restricting users' ability to post content on their networks
2) Special industries: e.g. broadcast, where regulation is justified by the need to protect public access to speech given the nature of the service.
3) Publisher: full protection of the First Amendment
Mechanisms for content moderation
(1) Do nothing
(2) Remove content
(3) Limit propagation
- What if you have a right to speech but not to the propagation of that speech? This implies a distinction between voice and the propagation of that voice
- Facebook already does this with borderline content: engagement rises exponentially as content approaches the policy line, but users say they don't like it, so the ranking algorithm is tuned so that distribution decreases exponentially toward the line instead (see the sketch after this list)
(4) Label
- An example is Twitter labeling President Trump's tweets as containing misinformation while keeping them up
- Note: what the platform adds to the post is not protected by Section 230, but the original content still is, so labeling does not create liability for the underlying post
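To make the "limit propagation" idea concrete, here is a minimal sketch of borderline-content demotion. Everything in it is hypothetical: the policy-proximity score, the stylized engagement curve, and the demotion factor are illustrative stand-ins, not Facebook's actual ranking system. The point is only that distribution can be tuned to fall, rather than rise, as content approaches the policy line.

```python
import math

# Hypothetical sketch of borderline-content demotion. "policy_proximity" is an
# assumed model score in [0, 1]: 0 = clearly within policy, 1 = right at the
# policy line (content over the line is removed outright, not demoted).

def natural_engagement(policy_proximity: float) -> float:
    """Stylized baseline: engagement rises sharply as content nears the policy line."""
    return math.exp(3 * policy_proximity)

def demotion_multiplier(policy_proximity: float) -> float:
    """Down-ranking factor that shrinks faster than the engagement curve grows."""
    return math.exp(-6 * policy_proximity)

def ranking_score(policy_proximity: float) -> float:
    """Final distribution score: borderline content is shown less, not more."""
    return natural_engagement(policy_proximity) * demotion_multiplier(policy_proximity)

if __name__ == "__main__":
    for p in (0.0, 0.25, 0.5, 0.75, 0.99):
        print(f"proximity={p:.2f}  natural={natural_engagement(p):5.2f}  "
              f"demoted={ranking_score(p):5.2f}")
```

Because the demotion factor decays faster than the engagement curve grows, the combined score falls monotonically toward the policy line, which is the inversion described under mechanism (3).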
Content moderation at Facebook (it's actually pretty impressive)
How are policies defined?
- A global policy team that gets input from outside experts
- Meets every few weeks
- Invites academics and journalists to the meetings
- Publishes minutes to increase transparency and accountability
How are policies enforced?
- Internal guidelines to reduce subjectivity
- Humans: user reports are reviewed by human moderators; Facebook has thousands of people doing this. The issue is that harm is done during the time content stays up before it's taken down, and the process is highly manual and expensive.
- Machines: some content, like nudity in images, can be removed more effectively with AI; automated detection of hate speech has been less effective (a rough sketch of how machine and human review fit together follows this list)
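As a rough illustration of how machine review and human review fit together, here is a minimal sketch. The classifier, the thresholds, and the routing logic are all assumptions for illustration, not Facebook's actual pipeline: a model scores an image, high-confidence violations are removed automatically, and the uncertain middle band is routed to the human review queue described above.

```python
from dataclasses import dataclass

# Illustrative thresholds, not real operating points.
AUTO_REMOVE_THRESHOLD = 0.95   # confident enough to act without human review
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain band goes to human reviewers

@dataclass
class ModerationDecision:
    action: str   # "remove", "human_review", or "allow"
    score: float

def nudity_score(image_bytes: bytes) -> float:
    """Placeholder for a trained image classifier returning P(violation)."""
    raise NotImplementedError("stand-in for a real model")

def moderate_image(image_bytes: bytes) -> ModerationDecision:
    """Route an image based on the classifier's confidence."""
    score = nudity_score(image_bytes)
    if score >= AUTO_REMOVE_THRESHOLD:
        return ModerationDecision("remove", score)
    if score >= HUMAN_REVIEW_THRESHOLD:
        return ModerationDecision("human_review", score)
    return ModerationDecision("allow", score)
```

The design choice the thresholds encode is the trade-off noted above: automation reduces the time harmful content stays up, while the human queue absorbs the cases the model cannot judge reliably.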
Appeals Process
- Oversight Board: fully independent, composed to represent the Facebook community, and insulated from Facebook's business incentives.
- Note: Facebook referred its suspension of President Trump's account to the Oversight Board, which upheld the suspension