Facebook’s use of artificial intelligence to monitor hundreds of millions of social media posts per day may be efficient and cheap, but it may also be producing many errors.
On Sept. 8, Nick Wright’s personal Facebook account was suspended after being flagged by the company’s AI moderation system. This meant that Wright could no longer manage the popular Facebook group San Francisco Remembered along with more than two dozen history-themed Facebook groups under the umbrella page History Alliance, which has over 1.3 million members. For Wright, what started as an enjoyable hobby nearly a decade ago has since become a painful headache, fueled by issues with Facebook’s evolving AI moderation systems.
As the volume of content posted online has ballooned in recent years, Facebook, along with other social media platforms including Instagram, Twitter and YouTube, has revved up its use of AI to flag and remove content that violates its policies on, for example, hate speech and misinformation.
Artificial intelligence helps Facebook take down problematic content more quickly and lessens the amount of violent or graphic content that can be distressing to the company’s human moderators, who numbered 15,000 in 2020, according to MIT Technology Review. But lacking a human capacity to judge context and nuance, the AI systems inevitably lead to erroneous takedowns with few options for correction for Facebook Group administrators like Wright.
Which is why Wright felt that threatening to close the San Francisco Remembered Facebook Group on Oct. 8 was the only way that he could save it.
“All hell broke loose,” Wright said, after word of possible closure hit the group, where over 160,000 members share old photographs and memories of The City daily. “People were crying. People were frantic.”
Groups have been a key part of Facebook’s marketing strategy in recent years. They are promoted as community-focused spaces on the platform where users connect over similar interests. In 2020, the company debuted its first Super Bowl commercial, which ended with the tagline “Whatever you rock, there’s a Facebook Group for you.” Meta, as the umbrella company that runs Facebook is now called, has since made scores of other ads promoting groups focused on mental health, dance, houseplants and more.
But the backbone of these groups is a legion of volunteers who take on the work of managing them as founders, administrators and moderators. “It’s all encompassing,” Wright said of the time and effort required to manage the groups and their 35 volunteer moderators, which he equates to a full-time job. He said the work has only gotten more difficult in recent years.
A February 2021 press release published by Facebook confirms the company has upped its use of automation in content moderation. In the final three months of 2020, the company said, 97% of the hate speech taken down from Facebook was spotted by automated systems before any human flagged it, up from 24% in late 2017.
But the increase in takedowns also points to possible over-moderation. “They’re shifting the way the algorithms work and pushing them in a direction where they flag more things incorrectly,” said Jenna Burrell, a former UC Berkeley School of Information professor who serves as director of research of the nonprofit Data & Society. “The reality is that they don’t work very well. They produce a lot of errors.”
Burrell pointed to another benefit AI brings to Facebook: the company’s bottom line. “They can make a ton of money and have very few employees for it,” she said.
Meta — which includes Facebook, Instagram and WhatsApp — currently employs about 83,000 people globally. By comparison, more than 2 billion users are active on Meta apps daily, according to 2021 SEC filings. In July, the company reported $6.7 billion in quarterly profit.
Meanwhile, volunteer group administrators like Wright are unpaid and left to sort through the errors made by the AI systems — and are sometimes penalized for those mistakes. For example, four years ago a member of one of Wright’s Facebook groups posted a photo of Jeffrey Dahmer with a two-sentence caption describing him as a serial killer. Wright received a notice from Facebook on Oct. 6 that the Dahmer post went against the company’s community standards on dangerous individuals and organizations. “Your Group is at risk of being disabled and has reduced distribution,” the notice said.
Wright disagrees that acknowledging the existence of Jeffrey Dahmer goes against Facebook’s community standards. “It wasn’t glorification,” Wright said of the post, which fit into the group’s theme of American history. Wright challenged the decision, but the notice remains on his account. “It’s just a huge waste of our time,” he said. “We get hammered for volunteering.”
Facebook did not respond to requests for comment for this story. A Facebook spokesperson told KQED in response to its Sept. 14 story that the company rejects Wright’s characterization of what’s happening. The KQED story included this explanation from its spokesperson about why Wright’s account was suspended: “The history moderators are bringing trouble on themselves in a variety of ways: using more than one personal Facebook account, a violation of Facebook terms; uploading the same posts in multiple groups at ‘high frequency,’ which qualifies as spam; and repeatedly posting material that Facebook AI believes they don’t have the copyright to.”
Wright does have two Facebook accounts that he has used to found groups. “I didn’t want to attract attention to myself,” Wright said. That’s why about eight years ago he opened an account under the name San Francisco History to start local history groups. Facebook later made him change the name to Nicolas Wright, which is the personal account the company disabled in September.
“We get everybody and their brother trying to friend us,” said Bear Ridgeway, a volunteer moderator for several of Wright’s Facebook groups, including San Francisco Remembered and First American Nations. “Sometimes people make a second account just to remain within their family or friends.”
It’s part of the nuance Ridgeway said Facebook’s AI misses. “We never get to speak to a real person. It’s very challenging,” she said.
Tired of Facebook’s automated systems, Wright became determined to reach a human who works at the company to explain his situation and try to get his account back.
“The only communication I had was through the groups,” Wright said. “And that was my last resort.”
He began by closing eight of his Facebook groups. (Closing a group is similar to pausing it, where old content remains but no new content can be posted.) When that didn’t work, he reluctantly turned to the most active group of the lot, San Francisco Remembered, hoping to get the attention of a Facebook employee.
“You’d think at least a couple of people who work there would be a member of one of the largest blogs in San Francisco, right?” Wright said.
The approach worked. A Facebook employee who is also a member of San Francisco Remembered reached out to Wright and within one day, his personal account was restored. All of the History Alliance groups have since been reopened.
Wright’s approach to getting an employee’s attention at Facebook relies on a privileged network that is out of reach for the vast majority of Facebook users, Burrell noted.
“That is the mode of all of these big platforms, to not have customer service, to not have someone you can contact if you have a problem,” Burrell said. She added that no regulations or policies exist yet that require real humans to step in when AI gets it wrong.
In October, the White House unveiled a “Blueprint for an AI Bill of Rights,” which outlines five protections Americans should have in the artificial intelligence age. One of those protections is a human alternative, which states, “People should always be able to opt out of AI systems in favor of a human alternative and have access to remedies when there are problems.” At this stage, the blueprint is a set of unenforceable recommendations.
Wright is relieved to be back on track with the groups he founded. The outpouring of support he received from members while the future of the groups was uncertain reminded him of why he started the online history communities in the first place.
However, the problems he has with Facebook’s content moderation systems remain.
“It’s not only taking the fun out of it, but it’s exasperating,” Wright said.