Let us sue social media sites!

The Mystical Section 230

Originally published March 21, 2023

on mkzjoybrennan.substack.com

At the end of last month, the US Supreme Court heard arguments challenging the existing (& pretty outdated) interpretation of a federal law immunizing websites from liability for dangerous content users post on the host site. This law, known as “Section 230,” has faced mounting criticism as dangerous content proliferates on social media: everything from revenge porn and cyberbullying to election denialism and Covid misinformation. Regardless of the content, thanks to Section 230, sites like Twitter, TikTok, YouTube, Facebook, & so on can’t even be sued for what they boost and disseminate to their many users.
We covered arguments in the Section 230 SCOTUS case on last week’s livestream with The Last Podcast Network’s Some Place Under Neith (SPUN). SPUN’s Natalie Jean and Amber Nelson cover missing (literally or societally overlooked) women and girls on their podcast. We thus confront Section 230 when exploitative content featuring minors is posted online, not removed by host sites, suggested by the site’s algorithms, and can even generate revenue for an abuser via the host site. Right about now, I should offer the caveat that like many lawyers, a tech expert I am not, so bear with me as we navigate the virtual world.
§ 230: where did you come from, where didya go?

Section 230 refers to Section 230 of the Communications Decency Act of 1996. It states that no provider of “an interactive computer service” will be treated as “the publisher or speaker” of a user’s content. Calling a website an “interactive computer service” is our first clue that § 230 was the product of another time. Quaint!

The case that first applied § 230 to anonymous website content, 1997’s Zeran v. America Online (i.e., AOL!!!), also reflected a vastly different internet world. Plaintiff Zeran attempted to sue AOL after an anonymous AOL user kept posting fake ads (for merchandise mocking the Oklahoma City bombing? Weird, poor-taste stuff, like ‘Murrah Building Daycare’ shirts) with Zeran’s contact information listed for purchase. So, furious people kept contacting Zeran (not to buy the merch, but to yell at him as the listed merch-maker. Pretty effective burn).

Anyways, the federal appeals court ruled that AOL couldn’t be held liable, and that only the anonymous user/culprit could be a lawsuit’s target. Ever since, we’ve lived in the house that AOL Chatrooms + OK City Bombing Satire built.

We could find many distinctions between the ‘90s/Zeran internet landscape and today’s: sites in general had less impact, fewer users/traffic, etc. (more about that later). Of course, online platforms can and do decide to police content themselves, but there’s a big difference between a social media site choosing its own guidelines + how to enforce them, and outside oversight + legal consequences.  
Side note: if any of you are wondering how legal repercussions for posting content square with First Amendment freedom of speech, good question, long answer! Short answer: some forms of speech get limited (or no; see: child porn) protection, due to weighing the expressive value of the speech against its harm to others. That’s why you can’t shout fire in a crowded theater, and why newspapers can be sued for libel. Holding websites responsible as “publishers” would be a similar standard. For a longer answer, check out my episodes on free speech riiiiight here: https://www.mkzjoybrennan.com/videoeps/episode-02- (also listenable via the XXceedingly Persuasive pod feed).
The effect o’ § 230 immunity

As hinted, self-policing by private platforms doesn’t work. Think of the ADA (my fav proof of that, see last Substack™): without legal accountability creating uniform & enforceable standards, internal content moderation is inconsistent and ineffective. Sure, it’s better than nothing, but in the case of social media, it doesn’t do enough to prevent the publishing and circulation of dangerous, violent, or exploitative material.

Website self-policing doesn’t really hold users adequately accountable, either. Because websites have nothing to lose legally, they have no incentive to keep track of who creates new accounts, whether accounts are impersonations, or whether they’re tied to real (i.e. traceable) people.

I’ve worked on a few social media cases where an alleged victim is trying to identify, then sue, the person behind a username, and…it’s a wild goose chase just to start a lawsuit. I thought there might be some official record, or a requirement to hand over identifying info, and there just isn’t. So, all you can do is start a nice, pricey, futile lawsuit against @HotChunk420.
Last week, a listener (who’d been impersonated on Instagram) similarly asked why they can’t just sue the account holder, rather than Instagram/Meta. Technically, they could! But on top of those same problems of identifying the user and throwing money into a lengthy legal process, let’s look at the best-case scenario: you win the case and are awarded damages. The defendant user on the hook to pay could be in another country, a minor, completely broke, or just someone you don’t want a longstanding relationship with as you chase down payment. Obviously, the damages of online abuse aren’t just monetary, but “pocket depth” is another reason why social media companies, rather than random users, are the better-equipped lawsuit targets.
So, SCOTUS Case:

The case heard by the Supreme Court last month (no decision yet), Gonzalez v. Google, challenges § 230 immunity for content a site’s own algorithms recommend to users. The content in question is ISIS recruiting propaganda. Using terrorist material is a good strategy: it’s unquestionably bad and won’t ignite distracting moral debates in the conversation about § 230 (doubly so in the majority-conservative Court, where, e.g., Covid misinfo might be treated differently). On the subject of SCOTUS demographics, this is just… the funniest group of solemn, becloaked oldies to be tackling social media tech and the rapidly changing culture around it. Uncharted territory even for relative experts! It’s a hoot.

Lawyers arguing to get rid of § 230 immunity point out the differences between the mid-nineties internet landscape, when sites were just trying to entice use of the internet, and the landscape today. Today, the big sites in question make most of their money from advertisements. This in turn has created an incentive for sites to keep users on longer: enter algorithms and “up next” features. These site-generated suggestions and encouragements to view dangerous, illegal content arguably change the nature of websites’ responsibility.
Remember, rolling back § 230’s sweeping immunity doesn’t mean websites will always be held liable—it just opens the door to try to bring suits against the websites. It’s not that radical an idea, either: publishers of magazines or books can be liable for law-violating content they publish; and “common carriers” (like utility companies or package delivery services) have higher legal obligations to oversee what they convey between third parties.

There’s a possibility that the eventual SCOTUS ruling may not fully decide what to do with § 230. Sometimes, SCOTUS will bounce the issue to Congress by sending a (non-binding) message in their decision. This can happen when there’s a clear “separation of powers” issue, and the Constitution delegates a particular law-making power to Congress (e.g. political questions, the Court likes to send those to a legislative body or electorate. Thinking confusedly about Bush v. Gore? So are we all). It can also happen when the Court more “unofficially” wants to dodge the subject. Hopefully, Congress takes a hint like this as a good cue to act sooner rather than later, but we shall see!
So, there you have it! I’ll keep ya posted. On our SPUN stream, Natalie, Amber, and I discussed some of the other, non-§ 230 efforts to protect minors on social media currently in the works. We also went over some of the more egregious examples, and how sites and legislators could better define the content worth targeting. The whole subject is so important as we all grapple with social media, and vote, and advocate. Our livestream on protecting minors online (and more) will return the second Thursday of every month 5:30 PST/8:30 EST, so plz plz come join at twitch.tv/lastpodcastnetwork !

Also I’d be remiss not to mention that yes, the former President of the U S of A (*rump) is facing at least one (1) criminal indictment. Jeez! What’s a boy to do?!

In any case, love you, and as my aunt and Warren Zevon have said, enjoy every sandwich!
