Facebook Has Closed 583 Million Fake Accounts in the First Quarter of 2018

The social media giant has taken strong strides in the wake of its Cambridge Analytica data scandal to improve the site's image and integrity.

Facebook’s renewed moderation effort, which touched nearly 1.5 billion accounts and posts, resulted in 583 million fake accounts being closed in the first three months of this year, according to The Guardian.

In Facebook’s first quarterly Community Standards Enforcement Report, the company said most of its moderation activity targeted fake accounts and spam: 837 million spam posts and 583 million fake accounts were acted upon. Additionally, 2.5 million pieces of hate speech, 1.9 million pieces of terrorist propaganda, 3.4 million posts containing graphic violence, and 21 million posts featuring nudity or sexual activity were taken down.

“This is the start of the journey and not the end of the journey and we’re trying to be as open as we can,” said Richard Allan, Facebook’s vice president of public policy for Europe, Africa, and the Middle East.

If you recall, last month Facebook published its internal moderator guidelines for the first time in company history, detailing how the platform distinguishes hate speech and propaganda from ordinary content. This entire effort comes in the wake of the Cambridge Analytica scandal, in which the data of as many as 87 million users was improperly harvested. Competitors like Twitter are taking similar steps to publicly shore up platform integrity, as this is a vital moment for information services that want to be on the right side of history.

According to the company’s vice president of data analytics, Alex Schultz, content depicting graphic violence nearly tripled quarter over quarter. Why? Well, real-world conflict inevitably ends up documented on the platform.

“In [the most recent quarter], some bad stuff happened in Syria,” he said. “Often when there’s real bad stuff in the world, lots of that stuff makes it on to Facebook.” Schultz added that moderation in these instances involves “simply marking something as disturbing.” 

In terms of disturbing content, the company’s moderation guidelines specifically distinguish between revenge porn, suicidal posts, credible violence, bullying, and more. When it comes to imagery involving children, however, labels aren’t as important as effectively ridding the site of such content. “We’re much more focused in this space on protecting the kids than figuring out exactly what categorization we’re going to release in the external report,” said Schultz.

Using new artificial-intelligence-based technology, Facebook can find and moderate certain content more rapidly and effectively than human reviewers alone, at least when it comes to detecting fake accounts and spam. Facebook claims it used A.I. to locate 98.5 percent of the fake accounts it recently closed, and to catch “nearly 100 percent” of the spam it found.
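Facebook hasn’t disclosed how its detection systems actually work, but a toy sketch can illustrate the general shape of automated fake-account flagging: many weak behavioral signals get combined into a single suspicion score, and accounts above a threshold are queued for action. Every signal, weight, and threshold below is an invented assumption for illustration, not Facebook’s method.

```python
# Toy illustration only: a hypothetical rule-based fake-account scorer.
# None of these signals, weights, or thresholds come from Facebook;
# they are invented placeholders for the example.

from dataclasses import dataclass

@dataclass
class Account:
    age_days: int           # how long ago the account was created
    posts_per_day: float    # average posting rate
    duplicate_ratio: float  # fraction of posts identical to other posts (0-1)
    friends: int            # number of confirmed connections

def spam_score(acct: Account) -> float:
    """Combine weak behavioral signals into a rough suspicion score."""
    score = 0.0
    if acct.age_days < 7:
        score += 0.3                     # brand-new accounts are riskier
    if acct.posts_per_day > 50:
        score += 0.3                     # inhuman posting rate
    score += 0.3 * acct.duplicate_ratio  # copy-paste content
    if acct.friends < 5:
        score += 0.1                     # little organic network
    return score

def is_suspicious(acct: Account, threshold: float = 0.6) -> bool:
    return spam_score(acct) >= threshold

if __name__ == "__main__":
    bot = Account(age_days=2, posts_per_day=200, duplicate_ratio=0.9, friends=1)
    human = Account(age_days=900, posts_per_day=1.5, duplicate_ratio=0.05, friends=250)
    print(is_suspicious(bot))    # True
    print(is_suspicious(human))  # False
```

Real systems replace hand-tuned weights like these with models learned from labeled examples, but the shape of the problem, aggregating many weak behavioral signals into one decision, is the same, which helps explain why spam and fake accounts are the categories where automation performs best.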

Some content is easier to identify than others: blatant nudity, for example, can be flagged with widely available image-recognition software, while nuanced hate speech is far harder, since subtleties of language and innuendo let the dog whistles and nationalistic calls to arms present on the site slip past automated detection.
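To see why language is the harder problem, consider a naive keyword filter. The sketch below is a deliberately simplistic toy, and the blocklist and example sentences are invented placeholders: coded phrases containing no banned word sail through, while quoting or reporting a banned word gets wrongly flagged.

```python
# Toy illustration of why keyword matching fails on nuanced speech.
# The blocklist and example sentences are invented placeholders.

BLOCKLIST = {"<explicit slur>"}

def keyword_filter(text: str) -> bool:
    """Flag text if it contains any blocklisted term (case-insensitive)."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

# A coded "dog whistle" phrase contains no blocklisted word,
# so a pure keyword filter passes it through untouched.
print(keyword_filter("we all know who is really behind this"))  # False (missed)

# Meanwhile, reporting or quoting a slur triggers a false positive.
print(keyword_filter("he was suspended for calling her a <explicit slur>"))  # True (flagged)
```

Context-aware language models narrow this gap, but irony, reclaimed terms, and fast-moving coded vocabulary remain hard, which is consistent with the performance gap between Facebook’s near-perfect spam detection and its hate-speech numbers.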

Ultimately, we all need to remain vigilant and informed about what these modern institutions of public information are taking down and what they’re keeping up. These are imperfect steps that could set dangerous precedents down the road, or, conversely, help create a more fact-based, pleasant environment for us. While Facebook is a private company, it frankly serves as the main media hub for millions of people across the world, and we need to collectively ensure and protect that integrity as these developments continue.
