The business of online gatekeeping is not what you think.
Written by Michael Thomsen (@mike_thomsen)
There are no standards on the Internet, but the illusion must be maintained. For corporations that host their own communities, the task of defining and enforcing standards is a workaday job done with few and mostly unclear rules, at a pace that guarantees improper enforcement. In an investigation into Internet bullying, Emily Bazelon writes about a Facebook page, Let's Start Drama, created by an anonymous student at Woodrow Wilson Middle School in Connecticut. The page became a trading post for rumors about who had been exchanging sexts and who had lost their virginity, and a venue for polls about which student was prettier.
Let's Start Drama drew more than 500 followers from a school of 750 and was soon discovered by the Middletown Youth Services Bureau, which filed several complaints with Facebook. Facebook didn't respond to any of the Bureau's reports and the page remained live for months. Bazelon visited Facebook's harassment team, responsible for reviewing reports like the one sent about Let's Start Drama, and discovered the primary concern is not protection of Facebook's users but the speed with which reports can be evaluated.
"We optimize for half a second," Nick Sullivan, one of Facebook's monitors, told Bazelon. "Your average decision time is a second or two, so 30 seconds would be a really long time." In the case of Let's Start Drama, it turns out the page had been deemed okay, and two separate evaluators had indicated that future reports about the page should be ignored. Facebook is aware of the importance of appearing to be responsible for social standards, yet the company is more interested in having an efficient structure for enforcement than in ensuring its standards make sense. Alexis Madrigal cited an acknowledgement of this weakness in the company's annual report, where Facebook admitted it "may fail to provide adequate customer service, which could erode confidence in our brand. Our brand may also be negatively affected by the actions of users that are deemed to be hostile or inappropriate to other users, or by users acting under false or inauthentic identities."
There is always some transference of responsibility that happens when new technologies make old social maladies visible again. While Facebook has the power to amplify our worst tendencies, it is also a trap to think we can address the impulse to bully and antagonize by creating a set of enforceable standards that make antagonistic behavior less visible. The process of monitoring what people will do when presented with an open-seeming platform to publish their thoughts, curiosities, or beliefs is inextricably bound to the fact that many of our thoughts, curiosities, and beliefs are horrific.
Earlier this year Reyhan Harmanci published an interview with an anonymous content censor who'd worked for Google. "I dealt with all the products that Google owned," he said. "If anyone were to use them for child porn, I’d have to look at it. So maybe like 15,000 images a day. Google Images, Picasa, Orkut, Google search, etc. I had no one to talk to. I couldn’t bring it home to my girlfriend because I didn’t want to burden her with this bullshit. For seven, eight, nine months, I was looking at this kind of stuff and thinking I was fine, but it was putting me in a really dark place."
At Google, these positions are even less supported than Facebook's half-second monitors; they are staffed by temps on year-long contracts, most of whom are let go when the contract ends. The desire for safe standards on the Internet is in many ways antithetical to the medium's nature as a neutral platform for communication, a channel to send, receive, and archive data. If it exists in the world, shouldn't some record of it exist on the Internet? In a way, the need for standards on the Internet is a desire to fight against the wild and terrible phenomena of our culture in a medium that merely documents phenomena.
To strike evil behavior from the record is not a victory against evil, and the cottage industry of monitors that has sprung up to ensure the Internet is an appropriate place presents evidence against the feasibility of its own undertaking. From the outside it seems like a simple enough task—censor the things where 13-year-olds call each other sluts from the shelter of anonymous Facebook profiles—but the work required to actually enforce this standard is a sprawling mess of vagary and exceptionalism. The more troublesome the post, the less aware of it we want to be. And now choosing what we should be unaware of is a job someone can have.