White House Press Secretary Jen Psaki last week made a startling revelation from the White House press podium: that the major social media platforms take direction from the government in deciding what content to suppress, amplify, or remove.
On Thursday, Psaki casually noted that the White House was working in coordination with Facebook, flagging specific “problematic” posts for COVID-19 “misinformation.” She was joined by Vivek Murthy, the U.S. surgeon general, whose office released a 22-page guidance urging platforms to “impose clear consequences for accounts that repeatedly violate platform policies.” Facebook later confirmed it is involved in “private exchanges” with the Biden administration on how to manage COVID-19 information on the platform.
What might have been defended as a well-meaning effort to work with major speech outlets to combat certain inaccuracies about the efficacy of vaccines, however, quickly progressed beyond that. By Friday, the White House was pressuring companies to work together to ban users across multiple platforms. Efforts to ban “misinformation” about the COVID-19 vaccine, meanwhile, had evolved into banning “the latest narratives dangerous to public health.”
The problem with all of this, of course, is that the definition of misinformation is constantly changing to meet the needs of the powerful—whether that is the political needs of the party in charge, or the political or financial self-interest of the platforms.
Psaki’s revelation, as startling as it was, is clarifying. It remains a contested point in the debate over Big Tech whether these companies constitute “private enterprise” or whether they’ve reached the level of indispensable services. But the Biden administration’s flippant acknowledgement that control of what is said on Facebook is central to its policy goals points toward the true status of these companies as essential corridors of speech.