This month sees the launch of Facebook’s new ‘Oversight Board’ – an exciting development after years of disputes over Facebook’s content moderation decisions. The company’s attempts to self-regulate are often called into question by the lack of transparency around how its policies are interpreted – in particular, why some offensive posts are removed while others are not. The Board’s creation is, in theory, an attempt to address this.
Nick Clegg, Facebook’s VP of Global Affairs and Communications and former UK Deputy Prime Minister, has hailed the creation of the Oversight Board as a step which will “make Facebook more accountable and improve our decision-making”. On the other side, activists who have created a rival, unofficial “Real Oversight Board” have criticised the ‘official’ Oversight Board as “weak and insufficient” and a “PR vehicle”. Critics have also highlighted the fact that it will probably not be ready to issue decisions before the 2020 US presidential election.
So what exactly will Facebook’s Oversight Board do, and is it a step in the right direction for better regulation?
What is the Oversight Board?
The Oversight Board will initially be made up of 20 individuals from the spheres of politics, academia and journalism, including former Guardian editor Alan Rusbridger and former Prime Minister of Denmark Helle Thorning-Schmidt. When fully staffed, the Board will ramp up to 40 members, each serving up to three terms of three years. These are paid posts – although Facebook has not disclosed how much members will be paid, or how this is determined. Facebook has, however, stated that the Oversight Board will be independent of its executive and staff.
The Board will have the power to adjudicate on content moderation decisions made by Facebook in respect of complaints about violations of its Community Standards – the standards governing what content is allowed on the platform. These Standards enable offensive content which may not cross the line into unlawfulness – e.g. cyber-bullying and fake news – to be removed on the basis that it is harmful to society. The Oversight Board will have the power to reverse decisions made by Facebook’s moderators.
Process and powers
According to the Oversight Board’s bylaws, the purpose of the Board is to “protect free expression by making principled, independent decisions about important pieces of content and by issuing policy advisory opinions on Facebook’s content policies”.
A previous draft of the bylaws empowered the Board only where moderators had removed content, not where moderators had allowed content to remain on Facebook. This was roundly criticised – unsurprisingly, given that many of Facebook’s most high-profile controversies relate to its refusal to remove controversial content.
As a result, the Oversight Board now has the power to review all decisions “in instances where people disagree with the outcome of Facebook’s decision and have exhausted appeals”. Facebook’s moderators can also submit requests for review.
According to the Charter, the Board can choose which requests to review, and should prioritise cases that “have the greatest potential to guide future decisions and policies”. The idea is that the Board should set precedents through its decisions, which Facebook’s moderators can then use as guidance in future cases.
That said, the Board does not have the power to take cases “where the board’s decision on a case could result in criminal liability or regulatory sanctions”. Given the scope of regulations such as the GDPR, this could exclude a significant number of cases from the Board’s remit.
The timeframe for case decisions and implementation is a maximum of 90 days, starting from Facebook’s last decision on the case under review. Cases will be considered by panels of five Board members.
Does it have teeth?
The fact remains that, despite the Board’s creation, Facebook’s processes are still largely opaque. On the plus side, the creation of the Oversight Board will offer some clarity to anyone wishing to complain about content on Facebook. The fact that its decisions have precedential value will also give complainants a better sense of whether a complaint is likely to succeed or fail.
But there is still a long way to go. Facebook is often criticised for being slow to respond to complaints – and a 90-day timeframe for case decisions is little comfort to those damaged by false allegations or misinformation. For example, Facebook has recently been criticised for its delay in acting on complaints about a militia group linked to a shooting in Kenosha, Wisconsin.
Ninety days is also a long time in the context of a national election. Delays in making decisions about political posts could mean the decisions in question have very little impact. This is comparable to IPSO’s failure to reprimand The Sun for its ‘Queen Backs Brexit’ headline until after the 2016 EU referendum. It is also possible that the Oversight Board will interpret its remit narrowly, declining to rule on decisions where there is an outside chance of attracting the attention of regulators or criminal authorities.
Whilst the Oversight Board may shine a weak light on Facebook’s decision-making process, further steps are still needed to create more effective regulation.