Earlier this year Reyhan Harmanci published an interview with an anonymous content censor who’d worked for Google. "I dealt with all the products that Google owned," he said. "If anyone were to use them for child porn, I’d have to look at it. So maybe like 15,000 images a day. Google Images, Picasa, Orkut, Google search, etc. I had no one to talk to. I couldn’t bring it home to my girlfriend because I didn’t want to burden her with this bullshit. For seven, eight, nine months, I was looking at this kind of stuff and thinking I was fine, but it was putting me in a really dark place."
At Google, these positions are even less supported than Facebook's half-second monitors: they are staffed by temps on year-long contracts, most of whom are let go when the contract ends. The desire for safe standards on the Internet is in many ways antithetical to the medium's nature as a neutral platform for communication, a channel to send, receive, and archive data. If it exists in the world, shouldn't some record of it exist on the Internet? In a way, the demand for standards on the Internet is a desire to fight the wild and terrible phenomena of our culture in a medium that merely documents them.
To strike evil behavior from the record is not a victory over evil, and the cottage industry of monitors that has sprung up to keep the Internet an appropriate place presents evidence against the feasibility of its own undertaking. From the outside it seems a simple enough task—censor the posts where 13-year-olds call each other sluts from the shelter of anonymous Facebook profiles—but the work required to actually enforce this standard is a sprawling mess of vagary and exceptionalism. The more troublesome the post, the less aware of it we want to be. And now choosing what we should be unaware of is a job someone can have.