Facebook has released its first report on spam and fake account takedowns to demonstrate how effectively it’s enforcing its community standards.
In the first three months of 2018, Facebook disabled 583 million fake accounts and says this was usually done within minutes of registration.
The implication is that had it not taken this action, nearly a quarter of the social network’s 2.2 billion users would have been fake accounts.
Even after the takedowns, however, the company estimates that three to four percent of its monthly active users during Q1 were fake.
That range translates to an estimated 66 million to 88 million fake accounts still on Facebook, even after it had removed over half a billion.
The report's data goes back to October 2017, revealing that fake account takedowns were even higher in the preceding quarter, Q4 2017, when they totaled 694 million accounts. Facebook notes that the variations are “driven by new cyberattacks and the variability of our detection technology’s ability to find and flag them”.
The chief aim of the report, which will be updated twice per year, is to measure the effectiveness of Facebook’s enforcement of its community standards and the ability of Facebook’s artificial intelligence (AI) to flag content before it impacts users.
YouTube released a similar report in April showing how machine learning was helping it take down content before users view it.
Twitter last month also told investors it had cut off 141,000 applications from the Twitter API for rules violations. The apps were responsible for 130 million low-quality tweets.
Facebook says it took down 837 million pieces of spam in the quarter and nearly all of it was found before any human reported it.
“Thanks to AI tools we’ve built, almost all of the spam was removed before anyone reported it, and most of the fake accounts were removed within minutes of being registered,” Facebook CEO Mark Zuckerberg said in a post announcing the report.
The report follows Zuckerberg’s testimony in April, when he promised greater transparency and responsibility as part of the company’s plan to rebuild trust frayed by the Cambridge Analytica scandal and fears that Facebook had become a tool for spreading Russian propaganda during the US election.
The fake accounts metric tracks two figures: the percentage of monthly active users during a given period that were fake, and the total number of fake accounts removed.
Facebook also measures how many pieces of violating content it took action on — including nudity, graphic violence, and hate speech — and the percentage flagged by its AI before users report them. Facebook is also considering measuring the number of views on a piece of content before it took action.
Facebook says it also took down 21 million pieces of content depicting adult nudity and sexual activity. AI was effective at detecting nudity, flagging 96 percent of this content before it was reported.
It was similarly effective at catching graphic violence: of the 3.5 million pieces of such content it took down, 86 percent were flagged before any user report.
However, Facebook’s AI was far less effective at catching hate speech, flagging just 38 percent of the 2.5 million pieces of hate speech it removed before users reported them.
“Artificial intelligence isn’t good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue,” Guy Rosen, Facebook’s VP of product management, wrote in a blog post.