Facebook is banning white nationalism and white supremacy from its social network following criticism that it had not done enough to eliminate hate speech on its platform.
The social media giant said in a blog post Wednesday that conversations with academics and civil rights groups convinced the company to expand its policies around hate groups.
“Today we’re announcing a ban on praise, support and representation of white nationalism and separatism on Facebook and Instagram, which we’ll start enforcing next week,” the company wrote in the post. “It’s clear that these concepts are deeply linked to organized hate groups and have no place on our services.”
Scrutiny of Facebook reached new heights in the past two weeks after a gunman in Christchurch, New Zealand, used Facebook to livestream his attacks on two mosques that killed 50 people.
Under Facebook’s change, people who search for terms associated with white supremacy will instead see a link to the page of Life After Hate, a nonprofit that helps people to leave hate groups, the company said.
The change was first reported earlier Wednesday by Vice Media’s tech publication Motherboard, which had previously found that Facebook’s policies banned white supremacy but allowed white nationalism and white separatism.
The social network, which counts more than 2 billion users, has faced growing criticism over its decisions about what kinds of speech to take down and what to leave up, and it maintains a sprawling book of rules that employees use to decide what to censor.
“Going forward, while people will still be able to demonstrate pride in their ethnic heritage, we will not tolerate praise or support for white nationalism and separatism,” Facebook said in its post.
Facebook has previously taken action in the wake of race-based violence, removing links to a white supremacist website and taking down a page used to organize the “Unite The Right” rally in 2017.
Color of Change, an advocacy group that has called on technology companies to do more to fight racial hatred, called Facebook’s decision a “critical step forward.”
“Facebook’s update should move Twitter, YouTube, and Amazon to act urgently to stem the growth of white nationalist ideologies, which find space on platforms to spread the violent ideas and rhetoric that inspired the tragic attacks witnessed in Charlottesville, Pittsburgh, and now Christchurch,” Rashad Robinson, president of Color of Change, said in a statement.
A Twitter representative Wednesday declined to say whether the company was considering adopting a similar change. Amazon and YouTube did not immediately respond to requests for comment.
Twitter does not explicitly ban white nationalism, though its rules tell users they may not affiliate with organizations that “use or promote violence against civilians.” Its rules also prohibit the use of “hateful images or symbols” in profile images, and Twitter says it enforces those policies vigorously and regardless of ideology.
Facebook has moved in recent years to expand its policies and take action against hate speech and misinformation, and the company has begun to enlist outside perspectives in its decision making.
On Tuesday, Ime Archibong, Facebook’s vice president of product partnerships, revealed some details about a new oversight board that the company is forming to provide guidance on its “most challenging and contentious content decisions” and “hold us publicly accountable if we don’t get them right.”
“The board, as currently envisioned, will consist of about 40 global experts with experience in content, privacy, free expression, human rights, journalism and safety,” Archibong wrote in a blog post. “Where we need to, we will supplement member expertise through consultation with geographic and cultural experts to help ensure decisions are fully informed.”
Some U.S. lawmakers want the federal government to do more to learn about the relation between online extremism and hate crimes. House and Senate Democrats including Sen. Bob Casey, D-Pa., have introduced legislation that would require the Justice and Commerce departments to study how people are using the internet to fuel hate crimes.
Anne Speckhard, the director of the International Center for the Study of Violent Extremism, or ICSVE, told NBC News that recommending anti-hate groups is a new move for Facebook.
Speckhard studies counter-extremism, specifically how groups like ISIS recruit using the internet, and said that Google had taken similar steps to favor anti-extremist content when at-risk users searched terms that would typically lead them to ISIS propaganda.
“We know [Facebook] has been working nonstop on it,” Speckhard said. “The truth is, far-right hate speech is a lot harder to identify than ISIS films. People use humor. They’re more witty about it than ISIS has been.”
Facebook funds ICSVE’s documentary project, titled “Breaking the ISIS Brand,” which has created more than 100 videos showing ISIS members disillusioned by their cause. The videos are meant to be shared on social media.
Domestic extremism researchers largely praised the news. Becca Lewis, who studies white supremacy on social media for nonprofit technology research organization Data & Society, called Facebook’s announcement “a huge step in the right direction and one that is cause for cautious optimism.”
“For years, Facebook has tiptoed around the issue of white supremacy on its website, which has ultimately allowed it to thrive there, mostly unchecked,” Lewis said. “These steps suggest that the platform may finally be taking the issue more seriously than it has in the past.”
Lewis also said it’s important to “keep the pressure on platforms to follow through” on the changes, as social media companies have not always delivered on their initial announcements.
“At the same time, platforms have made a habit of releasing PR statements about changes they plan to make, and then ultimately not following through in meaningful ways,” Lewis said.