Meta’s Oversight Board takes up permanent bans in landmark case

Meta’s Oversight Board is examining a case focused on the company’s ability to permanently disable user accounts. Permanent bans are drastic: they lock people out of their profiles, memories, and friend connections, and for creators and businesses, they also cut off the ability to market to and communicate with fans and customers. This marks the first time in the board’s five-year history that it has taken up permanent account bans.

The case under review involves a high-profile Instagram user who repeatedly violated Meta’s Community Standards. The user posted visual threats of violence against a female journalist, anti-gay slurs targeting politicians, content depicting a sex act, and allegations of misconduct against minorities. The account had not accumulated enough strikes to be automatically disabled, but Meta made the decision to permanently ban it.

The board’s materials did not name the account, but its recommendations could impact others who target public figures with abuse, harassment, and threats. It could also affect users who receive permanent bans without transparent explanations.

Meta referred this case, which involves five posts made in the year before the account was disabled, to the board. The company is seeking input on several key issues: how permanent bans can be processed fairly, the effectiveness of current tools to protect public figures and journalists from repeated abuse and threats, the challenges of identifying off-platform content, whether punitive measures effectively shape online behavior, and best practices for transparent reporting on account enforcement decisions.

The decision to review this case follows a year of user complaints about mass bans issued with little information about the violations. This issue has impacted Facebook Groups as well as individual account holders who believe automated moderation tools are to blame. Those who have been banned have also complained that Meta’s paid support offering, Meta Verified, has proven useless in these situations.

Whether the Oversight Board has any real sway to address issues on Meta’s platform continues to be debated. The board has a limited scope to enact change at the social networking giant. It cannot force Meta to make broader policy changes or address systemic issues. Notably, the board is not consulted when CEO Mark Zuckerberg decides to make sweeping changes to company policies, such as last year’s decision to relax hate speech restrictions.

The board can make recommendations and can overturn specific content moderation decisions, but it can often be slow to render a ruling. It also takes on relatively few cases compared to the millions of moderation decisions Meta makes.

According to a report released in December, Meta has implemented 75% of the more than 300 recommendations the board has issued and has consistently complied with the board’s content moderation rulings. The company also recently asked for the board’s opinion on its implementation of the crowdsourced fact-checking feature, Community Notes.

After the Oversight Board issues its policy recommendations to Meta, the company has 60 days to respond. The board is also soliciting public comments on this topic, but these submissions cannot be anonymous.