
“Just break them up. Break them up in the name of the rule of the people. For the good of the American people and our liberty, we need to break those corporations up and cut them down to size.”
Statements like this one, made by Senator Josh Hawley at CPAC, reflect how a growing number of Republicans are seeking to undermine companies’ power to moderate content by drastically expanding the government’s authority to intervene in the internal affairs of private organizations. Facebook and Twitter’s initial suppression of a New York Post story on Hunter Biden just before the 2020 presidential election is, to these critics, evidence of bias against conservatives. But rather than risk being broken up by potentially overzealous elected officials, will tech companies develop the institutions necessary to shield themselves from regulation?
In the wake of the 2021 Capitol riot, House Democrats sent letters to cable TV providers and social media companies inquiring about their role in spreading “misinformation” about the election results. The unspoken implication is that, absent changes to the content those companies choose to carry or allow, they might be subject to legislative action, whether in the form of speech restrictions or of being broken up by legislative fiat.
Under fire from both sides over content moderation for some time now, Facebook has attempted to stave off regulation from across the political spectrum by launching the Oversight Board, an independent institution created to review content-removal decisions on the platform.
Functioning roughly analogously to the U.S. Supreme Court, the Oversight Board reviews a selection of cases, all of which must first go through Facebook’s internal appeals process, making it a sort of court of final appeal for content moderation. In another parallel to the U.S. Supreme Court, the Oversight Board’s decisions will serve as precedent for similar cases. Facebook can also request non-binding advisory opinions from the Oversight Board before it decides whether or not to remove flagged content. The Oversight Board’s 20 members, vetted to provide diversity of insight, are selected without input from Facebook and include prominent professors, human rights activists familiar with the consequences of controlled speech, a former Danish prime minister, and even a vice president of the Cato Institute, a libertarian think tank.

As brand-conscious companies, social media giants are naturally inclined to make choices that minimize harm to their image (and stock price), which often leads to ad hoc decisions to censor content, ultimately suppressing free speech on their platforms. While these decisions protect shareholder value in the short term, their cumulative effect has been to generate so much voter backlash that the very existence of these social media giants in their current form is threatened.
Giving an independent body the final say hands over at least part of that responsibility, deflecting blame and anger from the companies themselves. If anyone accuses companies like Facebook of removing too much or too little, they can point to bodies like the Oversight Board as the culprits. And if users know they can appeal the removal of their content to a sufficiently independent institution, their impetus to demand political action could be muted. In the best case, this would restore confidence in social media; in the worst case, such institutions would serve only as rubber stamps for existing (and often biased) policies.
This matters because, though social media platforms are private companies, to their billions of users they are more than just websites. They are a place for public discourse, for connecting not just with family and friends but with millions of people across the world. By delegating content appeals to a more independent organization, social media companies may well be able to preserve the internet as a valuable public forum and pre-empt harmful actions like the potential repeal of Section 230.
The Oversight Board has already decided several cases, often overturning content removals. One case, for example, centered on a quote from Nazi Propaganda Minister Joseph Goebbels about arguments appealing to emotions and instincts. Facebook had initially removed the post for violating its community standard on “Dangerous Individuals and Organizations,” which bars users from promoting or quoting those on an internal list. The user, however, insisted he had used the quote in a negative light, to draw a comparison to President Trump’s communication style.
The Oversight Board not only ruled in favor of the affected user, affirming that he did not support Nazi ideology in sharing the post, but also reprimanded Facebook for the opacity of its “Dangerous Individuals and Organizations” policy, finding that it was not “clear, precise and publicly accessible”; the body’s advisory opinion condemned the list for being unavailable to the public and the rules for being too broadly defined.
From these initial actions, the Oversight Board appears willing to overrule Facebook and stand up for freedom of expression. While Facebook funds the Oversight Board, writes the basic rules on which its rulings are based, and has a say over its composition, these first rulings suggest the new body intends to serve as an impartial, effective arbiter. If the Oversight Board continues to judge with restraint and independence, it could serve as a model for viable, private alternatives to greater government involvement in rule-making and companies’ internal affairs. And that’s a win for everyone.
Sebastian Thormann is a Young Voices Contributor, a student at the University of Passau, Germany, and a columnist at the Lone Conservative (US). He has also been published in the Washington Examiner and Townhall.com.