
PNAS Paper: “Scientific Censorship Appears to Be Increasing”

Photo credit: Alem Omerovic, via Unsplash.

A recent open-access paper in Proceedings of the National Academy of Sciences, titled “Prosocial motives underlie scientific censorship by scientists: A perspective and research agenda,” explores the nature of censorship in the scientific community today. The 39 named authors include Glenn Loury, John McWhorter, and Steven Pinker. While the paper sometimes exhibits its own ideological biases, it nonetheless makes many valid points about the growing dangers of scientific censorship.

The paper defines censorship as “actions aimed at obstructing particular scientific ideas from reaching an audience for reasons other than low scientific quality.” It opens by refuting a myth about the most famous incident of censorship against a scientist — the belief that the infamous Galileo affair was simply a case of the Catholic Church persecuting a scientist. While the Church played its role, the paper recounts that many of Galileo’s academic peers who opposed his ideas pushed for censoring and punishing him:

Historical surveys of science often contrast a superstitious and illiberal past with enlightened modernity. Galileo’s defense of heliocentrism is rehashed in modern textbooks, albeit not entirely accurately. Although the Church ultimately sentenced Galileo, his persecution was driven primarily by Aristotelian professors who appealed to the Church’s authority to punish him. In 1591, Galileo’s contract was not renewed at University of Pisa, and after enduring hostility from peer professors, he left academia, apparently viewing it as hopelessly unscientific.

Unfortunately, if this paper is correct, then the scientific censorship of Galileo’s time has not been left entirely in the past. 

Censorship Is Growing

One of the most important and troubling findings of the paper is that censorship is not only common but growing:

Despite the challenges of detecting censorship, recent attempts to quantify the issue have concluded that censorship motivated by harm concerns is common. Hundreds of scholars have been sanctioned for expressing controversial ideas, and the rate of sanctions has increased substantially over the past 10 y. … Scientific censorship appears to be increasing.

Why is this happening? Because, the paper suggests, it’s what a significant share of academics want:

Surveys of US, UK, and Canadian academics have documented support for censorship. From 9 to 25% of academics and 43% of PhD students supported dismissal campaigns for scholars who report controversial findings, suggesting that dismissal campaigns may increase as current PhDs replace existing faculty. Many academics report willingness to discriminate against conservatives in hiring, promotions, grants, and publications, with the result that right-leaning academics self-censor more than left-leaning ones.

The paper documents various groups and demographics that are more likely to support or experience academic censorship. 

Hard vs. Soft Censorship

The intelligent design (ID) community is well aware of the problem. One section of the paper, which categorizes different types of persecution in science, sounds alarmingly familiar. The authors identify two types of censorship: “hard” and “soft.” While “hard” censorship is easy to recognize, “soft” censorship is also on the rise:

Hard vs. Soft Censorship. Hard censorship occurs when people exercise power to prevent idea dissemination. Governments and religious institutions have long censored science. However, journals, professional organizations, universities, and publishers — many governed by academics — also censor research, either by preventing dissemination or retracting post-publication. Soft censorship employs social punishments or threats of them (e.g., ostracism, public shaming, double standards in hirings, firings, publishing, retractions, and funding) to prevent dissemination of research. Department chairs, mentors, or peer scholars sometimes warn that particular research might damage careers, effectively discouraging it. Such cases might constitute “benevolent censorship,” if the goal is to protect the researcher.

This division resonates with my own experience defending ID-friendly scientists. At times we have seen “hard censorship” against pro-ID scientists whose quality research was rejected or muzzled for non-academic reasons, including the retraction of papers after they had already been accepted. And of course we should not forget the time that a U.S. federal court banned intelligent design from public schools — an action that collaterally inspired widespread persecution of ID scientists.

But we’ve also seen much soft censorship, where an atmosphere of fear and intimidation against ID-friendly scientists prevents people from even trying to talk about their views. It takes just a few cases of discrimination to create such a climate — and we have seen plenty over the years. The paper lists additional examples of censorship that are quite familiar to us:

A second class of censors includes institutions: universities, journals, and professional societies. Individuals backed by institutional power may censor unilaterally. Deans and department heads can withhold resources or denounce scholars who forward controversial claims. Tenure makes it difficult to fire professors but offers little protection from other punishments, and academics are increasingly nontenure track. Professional societies can expel, sanction, or censure members for sharing unpopular empirical claims and journal editors can reject or retract controversial articles. 

A third class exerts influence informally. Faculty members can ostracize and defame peers, pressuring them into self-censorship. Ostracism and reputational damage may seem trivial compared to historical forms of censorship, but humans value and depend on positive reputations, and people report a preference for various physical punishments over reputational damage. Even threats of denunciation are sufficient to deter scientists from pursuing unpopular conclusions they believe to be true. Facing backlash, some scholars have retracted their own papers even when they identified no errors. Institutions also fear reputational (and financial) damage, and so, individuals inside and outside academia can use whisper campaigns and social media to pressure institutions to censor, and wealthy donors can threaten withheld funding to do so. Reviewers can recommend rejection of papers or grant applications they regard as morally distasteful. Some scholars even advocate for morally motivated rejections.

Now, each of the above seems to be an intentional, conscious form of censorship. But sometimes censorship can be unintentional — the result of unconscious bias. The paper identifies peer review as a possible case in point:

Censorious reviewers may often be unaware when extrascientific concerns affect their scientific evaluations, and even when they are aware, they are unlikely to reveal these motives. Editors, reviewers, and other gatekeepers have vast, mostly unchecked freedom to render any decision provided with plausible justification. Authors have little power to object, even when decisions appear biased or incompetent. … Yet, peer review itself is susceptible to bias. Editors and grant panels, often aware of well-known scientists’ inclinations, can select reviewers who share their own preferences. Because nearly all science is imperfect, peer review can obfuscate biases by cloaking selective, arbitrary, and subjective decisions in seemingly meritocratic language.

This is spot-on: While the peer-review process is arguably necessary to ensure that published scientific claims are well supported, it is nonetheless subject to bias, unscientific motives, and related factors. I’ve written about problems with the peer-review system in the past. See “Intelligent Design Is Peer-Reviewed, but Is Peer-Review a Requirement of Good Science?”

Benevolent Censorship?

While we’re familiar with both the malicious and the unconscious varieties, the paper argues that there can be “prosocial” or “benevolent” reasons for engaging in censorship. This is concerning, but the authors point out that this is genuinely how many people view censorship:

But censorship can be prosocially motivated. Censorious scholars often worry that research may be appropriated by malevolent actors to support harmful policies and attitudes. Both scholars and laypersons report that some scholarship is too dangerous to pursue, and much contemporary scientific censorship aims to protect vulnerable groups. Perceived harmfulness of information increases censoriousness among the public, harm concerns are a central focus of content moderation on social media, and the more people overestimate harmful reactions to science, the more they support scientific censorship. People are especially censorious when they view others as susceptible to potentially harmful information.

However, calling some forms of censorship “benevolent” or “prosocial” is highly subjective. The vast majority of people engaging in censorship would probably argue that they are protecting the public or some marginalized group from dangerous ideas. The censors see themselves as “benevolent” or “prosocial.” In fact, I’d bet that most censors throughout history have seen themselves as the good guys. But ask those being censored or persecuted, and you’ll get a very different perspective. 

That means terms like “prosocial censorship” are basically meaningless — in any censorship dynamic, some will feel their actions are “benevolent” while others will view them as unfair and “malevolent.” The label we slap on a given act of censorship appears to be entirely in the eye of the beholder.

The Costs of Censorship

But even if we could justify some forms of censorship, isn’t there always something lost? The authors of this paper think so: “there is at least one obvious cost of scientific censorship: the suppression of accurate information.” 

Another cost is the public’s trust in science:

Scientific censorship may also reduce public trust in science. If censorship appears ideologically motivated or causes science to promote counterproductive interventions and policies, the public may reject scientific institutions and findings.

Indeed, in our era with its wide range of media, it’s a lot harder for the censors to succeed. The rise of the Internet means it’s virtually impossible to fully censor scientists, who will always have means of spreading their ideas. If the public finds that scientific ideas that were censored are nonetheless credible and supported by evidence, that is devastating for trust in science:

Censorship may be particularly likely to erode trust in science in contemporary society because scientists now have other means (besides academic journals) to publicize their findings and claims of censorship. If the public routinely finds quality scholarship on blogs, social media, and online magazines by scientists who claim to have been censored, a redistribution of authority from established scientific outlets to newer, popular ones seems likely.

And it’s not just the public that’s affected — scientists themselves can lose faith in science. Talented scientists who are fed up with the lack of academic freedom may leave the scientific community altogether. The paper puts it this way: “If particular groups of scholars feel censored by their discipline, they may leave altogether, creating a scientific monoculture that stifles progress.”

A Simple Solution

In the end, the paper recalls that “the pursuit of knowledge has a strong track record of improving the human condition.” This is an apt reminder that seeking and finding the truth is more important than being hailed as “right.” 

Of course, the pursuit of knowledge itself has necessary ethical limits. Nazi doctors infamously performed horrific experiments on prisoners and people deemed “inferior.” Was knowledge gained? Probably yes. But the torture inflicted upon fellow humans could never justify whatever scientific benefits were reaped. 

Yet the pursuit of knowledge must not be bridled unnecessarily. The stats cited in this paper show that censorship is supported and perpetrated by a large proportion of the scientific community against a significant minority in it. This censorship has been exposed more and more, and the public is losing trust in science — just as the paper warns. 

There’s an easy solution to this problem, but it’s one that many censors in the scientific community simply cannot consider. Here it is: Allow people to research and promote ideas that you disagree with.

The censors won’t allow this because they aren’t really seeking truth — they’re seeking to impose an agenda and maintain power structures that suppress ideas running counter to that agenda. Until that changes, the problem won’t go away — it will only get worse.