Facebook removed nearly 700 million fake accounts in the fourth quarter of 2017 alone. Staggering numbers.
So much for reliance on artificial intelligence software, or the high praise it has received for pinpointing fake accounts. Is it any wonder how Facebook handles accounts, links and news on the platform, and what shows up on your timeline or in trending? Perhaps we are now to rely on the 10,000 new Facebook editors. Consider: during the first quarter of 2018, Facebook deleted 865.8 million posts, the majority of which were spam, according to the report. Facebook also removed 28.8 million posts that violated its community standards, showing everything from nudity to graphic violence and terrorist propaganda, the report said.
It is interesting that the media have not fully responded, since journalists use Facebook’s trending items to determine lead stories. Perhaps headlines will change, or perhaps not so much.
“We’re committed to doing more to keep you safe and protect your privacy. So that we can all get back to what made Facebook good in the first place: friends. Because when this place does what it was built for, we all get a little closer.”
So goes the ad copy Facebook has been running in markets where it is still working to regain trust and to help people build new connections with each other. Users still don’t have a full understanding of the community standards and what violations really mean.
Facebook’s first community standards enforcement report says the social media giant disabled 583 million fake accounts in the first quarter of 2018, relying heavily on artificial intelligence.
The report, released Tuesday, aims to show how Facebook is taking action against content that violates its standards. The staggering number of fake accounts disabled in the quarter was down from 694 million in the fourth quarter of 2017; the report didn’t reveal earlier data.
The first-quarter report also said Facebook acted on 836 million pieces of spam, 2.5 million pieces of hate speech, 1.9 million pieces of terrorist propaganda, 21 million pieces of adult nudity and sexual activity, and 3.4 million pieces of graphic violence.
Facebook executives vowed to increase transparency in the wake of recent controversies involving the spread of fake news and the unauthorized harvesting of personal data.
“It’s a good move and it’s a long time coming,” Jillian York, director for international freedom of expression at the Electronic Frontier Foundation, told The New York Times of the new report. “But it’s also frustrating because we’ve known that this has needed to happen for a long time. We need more transparency about how Facebook identifies content, and what it removes going forward.”
The report said Facebook increasingly relies on AI to flag unsavory content. AI tools detected 98.5 percent of the fake accounts that were shut down, according to the report, and almost all of the spam content acted upon.
“Technology isn’t going to solve all of it, but we will make progress,” Guy Rosen, who heads Facebook’s team policing community standards, told The Financial Times.
The report acknowledged that Facebook’s metrics tracking its response to content that violates standards are still being refined.
“This is the start of the journey and not the end of the journey and we’re trying to be as open as we can,” said Richard Allan, Facebook’s vice president of public policy for Europe, the Middle East and Africa.
A day earlier, Facebook announced it had suspended about 200 apps while it investigated whether any of them contributed to the misuse of data.