
UCLA

Everything in Moderation


By James Knutila

Published Sep 14, 2018 8:00 AM


Sarah Roberts studies people who screen social media platforms for obscene and violent content and reveals the toll this work takes on their lives.


Photo by Stella Kaunina.

Sarah Roberts, UCLA assistant professor of Information Studies, coined the term “commercial content moderation” to describe the work of people who screen online platforms for obscene, violent and criminal content. Her research examines the labor conditions and mental-health impacts on the workers who sift through the worst of the Internet for YouTube, Facebook and other companies. Roberts was named a 2018 Carnegie Fellow for her work illuminating the unseen people and practices of the digital world. Here, she shares insights into the daily life of commercial content moderators [CCMs] and offers a solution for social media’s fake news crisis.

What sparked your interest in the Internet and information science?

I was a pretty serious devotee of the humanities as an undergrad. But when I started at the University of Wisconsin, a friend of mine passed me a Post-it note with some letters punctuated by periods and said, “This is the BBS [bulletin board system]. This is the Internet. You should go on this.” I said, “What’s that?” I had heard of email and had been a computer user my whole life, but this was a new dimension of social interaction online. And so, for the next year, I tied up the dorm room phone line by being jacked into a 14.4 [kbps] modem, and experienced this uncharted territory.

When did you decide to focus your research on content moderation?

At the end of my first year [of a Ph.D. program] in Illinois, I was reading The New York Times and [saw] a story on people working in a rural Iowa call center. I was surprised to learn not only that there were call centers in rural Iowa, but also that they were not answering phones but actually looking at user-generated content, going to unnamed social media sites and websites and deciding on whether it was appropriate or not. And it sort of clicked for me: What is it like if your job is to look at awful stuff all the time? What kind of training and support do you have? How much money do you make? What happens when you go home? Also, it was clear that what they were doing was mission-critical. A company would never open up a channel and say, “Upload whatever you want.”

You’ve said that content moderators typically sign nondisclosure agreements; yet they choose to talk with you. Why?

I never approach them asking, “What’s the grossest thing you’ve ever seen?” [I ask], “Why do you do it?” And they have a lot of insights they want to share — whether it was how they’d helped a person find mental-health services, which frequently happens, or a sense of making a usable platform. They tell me, “I do this so you can even get on the Internet. If I weren’t here, you wouldn’t be able to stomach it.”

What is this work’s impact on one’s mental and physical health? Is it possible to do this job without incurring negative effects?

I can’t imagine so. I am cautious about diagnosing because I’m not a psychologist, but everyone has reported to me deleterious effects — whether it’s difficulty in a romantic relationship, trouble sleeping, coping using alcohol or other substances, or avoiding gatherings of friends where everybody talks about their job because you don’t want to talk about it ... it’s embarrassing. I estimate that there are 100,000 people doing this work at any given time, and it’s not long-term for most people. So, what are we going to do with these legions of typically young people that we spiral back out into the world, who’ve seen the worst of the worst? There are two outcomes. One is that you become so hypersensitive you have to walk away. That happens to a lot of people, for sure. But even more disturbing to me is, what happens when you become so desensitized that you’re no longer effective?

So what kinds of benefits or services do CCMs need to protect their health?

We had a conference back in December at UCLA called “All Things in Moderation,” and our closing plenary was a woman who did this 10 years ago and a woman who currently does this on Amazon Mechanical Turk. [The latter] took a question from the audience, which was, How can we better support someone like you? She said, “Pay me.” Especially in the United States, there’s no guarantee that workers have medical coverage or any mental-health component. And how much disposable income do you have for mental-health services? So I think [we need] appropriate rates of pay, and clear and understood career pathways into some other area of work in a company.

What does a typical day look like?

In the Philippines in particular, and in third-party outsourcing situations, productivity metrics are used to bid for contracts. So, [someone says,] “Our workers can do X number in this time,” and then an Indian company might say, “Well, our workers can do Y for the same amount of money.” Productivity metrics lead them to review thousands of images or videos a day, and then there’s also a qualitative element where they’re randomly checked for quality assurance. Did you make the right decision in this case? Yes? No? If you didn’t, [and if] you do that X number of times, you’re fired or suspended. There is a lot of pressure. You know, think about how deep an analysis one can do in 15 seconds. What if it’s footage of a war zone that someone has uploaded for advocacy purposes, and all you see is gore?

You’ve described a wide variety of difficult judgment calls that arise on social platforms. Can CCMs help address disinformation and fake news?

The way commercial content moderation is undertaken — with productivity metrics, outsourcing to various parts of the world, content that’s totally decontextualized — makes it hard to adjudicate [fake news] in that context.

If CCM isn’t the answer, how do tech companies plan to address problems with questionable content?

Every time they’ve been caught in a bind over the last two years, I’ve noticed that they invoke this aspirational notion of [artificial intelligence] — that it’s just almost here: “AI’s going to be able to do this. We’re almost there.” We’ve been hearing that for the past few years.

Is that a realistic option? Can we trust these companies to regulate themselves?

You didn’t have the foresight to see how your platforms were going to be misused, so maybe you’re not the best ones to come up with the solutions. That’s a hard pill to swallow for an entire industry that believes it’s just one step away from solving it all. I teach people who are studying to be librarians and information professionals — experts at understanding information sources. Why are we pretending those people don’t exist? Why don’t we shore up our nation’s libraries? I see libraries as one of the most important pieces in this puzzle.

I would submit that to the readers: Look no further than your neighborhood branch. Find out what’s going on, how you could support libraries, or how you could support people studying librarianship and getting a master’s degree to go out and be these public servants that we desperately need. No matter your political persuasion, we can all benefit from that.
