Reddit owes its moderators more than an updated hate speech policy



Should Reddit pay its volunteer moderators? The thought had not really occurred to me until last week, when I joined a call with CEO Steve Huffman and his general counsel, Benjamin Lee. The executives were briefing me and some other reporters about a significant expansion of the site’s content moderation policies, which were unveiled on Monday and resulted in the removal of 2,000 subreddits, including the notorious forum for hate speech The_Donald.

Reddit’s content moderation scheme differs sharply from the Facebook News Feed or the Twitter timeline. Instead of applying one set of rules for the entire user base, Reddit sets a “floor” of rules that no one can violate but allows individual forums (called subreddits) to raise the “ceiling.” (I wrote about this approach here last month.) This helps create a collective context for discussions, and allows users who have similar values to come together in a shared space online. It’s why I can go to my favorite subreddit, which covers the world of professional wrestling, and find an incredible assortment of relevant stories, pictures, and discussions every day. But it also works only because of the moderators who volunteer to enforce a subreddit’s rules from floor to ceiling — and those moderators are totally unpaid.

In describing how his thinking on the subject had evolved, Huffman said he had been moved by moderators’ stories about their struggles to keep discussions civil. It’s a truism of life online that every forum eventually descends into drama, and keeping participants from flaming one another requires constant, even heroic vigilance. But the moderators themselves often become collateral damage, Reddit executives said.

“To me personally, what was the most heartbreaking was listening to moderators, who would talk about getting these crazy death threats,” Lee said. “And they would say something like, ‘oh, but I understand that that’s protected as a matter of free speech.’ And my first reaction is, oh my God, that’s not true! The First Amendment does not protect that! … There’s a disconnect there — the fact that they could on any level believe that the law, let alone our policies, don’t protect them is a fail. And it’s a fail we need to fix.”

One way to address this problem is by rolling out new rules that more expressly prohibit hate speech, as Reddit did this week. But if you’re the sort of person who thinks of content moderators as first responders — people whose work is more in line with that of a police officer or a firefighter than it is typically given credit for — you might not think that’s enough. There are plenty of jobs in which you will not routinely be subjected to death threats; moderating a web forum is not one of them.

If that’s the case, isn’t the least Reddit could do to pay these moderators?

Reddit is a 15-year-old company with a fairly complicated corporate history involving a sale, a spinout, and a new life as an old startup. The company sells premium memberships and advertising. According to TechCrunch, it has more monthly active users than Twitter, is valued at $3 billion, and as of last December was projected to earn $267.1 million in 2021.

I asked Huffman and Lee whether the moderators who keep all those subreddits humming didn’t deserve something beyond an update to the content policy — and didn’t quite get an answer.

“The mods — and there are many thousands of them — are a critical part of this equation,” Huffman told me. “The message that we’re trying to convey … is, we can do much more to support you. The mods — the value that they get out of this, or at least what they want to get out of this, is in being leaders of these communities, and creating a space for their passions. And them having to put up with a bunch of garbage to do so is not fair.”

Huffman said that “how our relationship with them evolves over time may be a topic for a different conversation [about] how Reddit the platform evolves.” But he suggested the company might revisit the question over the next few years.

“The policies are just the first step in the journey and that partnership with the mods,” Lee added. “There’s a whole bunch of other things we are both looking at and thinking about in terms of supporting them, including product improvements.”

Policy updates and product improvements are great. But I suspect that, for a lot of moderators, cash would go a lot further.


The Facebook ad boycott continues. On Wednesday, ads from 400 brands disappeared from Facebook and Instagram, Reuters reported. The coalition of civil rights groups that is leading the boycott has meetings planned with company executives, and CEO Mark Zuckerberg has agreed to join. Most of the biggest advertisers are not yet participating. But more are pulling their ads each day.

Nick Clegg, the company’s head of policy and communications, put up a blog post making the case that Facebook does a pretty good job of removing hate speech:

“More than 100 billion messages are sent on our services every day. That’s all of us, talking to each other, sharing our lives, our opinions, our hopes and our experiences. In all of those billions of interactions a tiny fraction are hateful. When we find hateful posts on Facebook and Instagram, we take a zero tolerance approach and remove them. […]

Unfortunately, zero tolerance doesn’t mean zero incidences. With so much content posted every day, rooting out the hate is like looking for a needle in a haystack.”

I continue to be rather cynical about the motives of the brands here, for reasons I laid out on Monday. But I’d note that Clegg’s post reinforces a point I made then: what has been presented as being primarily a story about policy and enforcement is actually a story about Facebook’s size. A company that hosts 100 billion messages a day is a company that is going to host a lot of hate speech, period.

If advertisers sincerely wanted to address Facebook’s hate speech problem, they would start there. But the whole appeal of Facebook to them is its vast reach. So I’m not holding my breath.