How the “Scientific Community” Undermines Its Own Trustworthiness

Leading spokespersons for science are complaining about the decline of public trust in their craft. They don’t need anyone to explain why this is happening. Their own journals contain the evidence. It’s time for a scientific revolution: an ethical revolution. Scientists need to clean up their act. Listen to their own confessions.
Peer Review: The Prop Collapses
One of the strongest props for scientism has been peer review. The practice is meant to promote objectivity and accuracy in scientific publishing and to weed out fraud and pseudoscience. The alleged lack of peer review has been used to bash design advocates (here). Inside the sausage factory of peer review, though, the atmosphere stinks, as many scientists know. Complaints against the practice have intensified over the last two decades. Denyse O’Leary reported last year that it may be beyond reform.
To add insult to injury, Alexander Goldberg and six colleagues found that “peer reviews of peer reviews” are similarly flawed. Their randomized controlled trial of methods for ranking the effectiveness of peer reviews, published in PLOS ONE, found stink all the way up:
In this work, we analyze the reliability of peer reviewing peer reviews. We find that many problems that exist in peer reviews of papers — inconsistencies, biases, miscalibration, subjectivity — also exist in peer reviews of peer reviews. In particular, while reviews of reviews may be useful in designing better incentives for high-quality reviewing and to measure effects of policy choices in peer review, considerable care must be taken when interpreting reviews of reviews as a sign of review quality. [Emphasis added.]
This implies, of course, that we cannot even trust the work of Goldberg and team in this paper. Who peer reviewed this “peer review of peer reviews” anyway? Only fallible humans, often too lazy to take “considerable care” when seeking the truth about a matter.
Procuring input from peers makes common sense, but the practice is not unique to science. Historians, teachers, and artists know the value of seeking wise advice from knowledgeable peers. To treat peer review in science as a reliable means to accuracy is naïve. The practice relies on the ethics of fallible humans, varies in its methodology from one field to another, and is subject to perverse incentives. Predatory journals, which pretend to be peer reviewed but hold low standards, have been on the rise. Besides, many of the most epochal ideas in science were never peer reviewed. Principia, anyone?
Hiding Evidence
The “file drawer problem” refers to scientists deciding not to report negative results, and it leads invariably to biased reporting. The silence creates an impression that research is making progress. The issue was brought to light in 2014 by Franco et al. in Science, before the Open Science movement gained momentum.
Philip Moniz et al. stated in PNAS on March 2, “The file drawer problem — often operationalized in terms of statistically significant results being published and statistically insignificant results not being published — is widely documented in the social sciences.” With two colleagues, Moniz sought to extend Franco’s work.
We examine projects begun after Franco et al. The updated period coincides with the contemporary open science movement. We find evidence of the problem, stemming from scholars opting to not write up insignificant results. However, that tendency is substantially smaller than it was in the prior decade. This suggests increased recognition of the importance of null results, even if the problem remains in the domain of survey experiments.
This sounds like a blindfolded person in Blind Man’s Bluff acting as his own caller: “I’m getting warmer.” How do the three know that progress is being made in solving the file drawer problem when they used flawed “survey experiments” themselves to reach that conclusion? They paid respondents to answer their questions. How objective is that? Someday they may figure out the extent of the problem.
The file drawer problem we document appears to stem from researchers’ choices to not write up or submit null results. We cannot dismiss the possibility that those decisions correlate with other parts of the studies, meaning that they would not have been accepted if written/submitted. This highlights the importance of future work looking at other study aspects such as sponsors, the contribution relative to prior work, and so on, as well as variation in journal processes and prestige…. As null results become more acceptable, these types of factors may become more influential in the publication process.
Notice what “these types of factors” have in common: human fallibility. Scientists chose not to write up or submit null results, biasing the corpus of scientific literature toward the illusion of progress. Don’t imagine that this problem is limited to the social sciences. It’s human nature to accentuate the positive and eliminate the negative.
The Gollum Grasp
There’s another human foible messing up trust in science: the “Gollum Effect.” “It’s mine! My precious data.” This foible goes back centuries, leading to historic priority battles over scientific discoveries. But isn’t science supposed to rise above petty squabbles for the good of the world? Maybe in idyllic poetry it is, but often not in practice. Researchers at German universities in Wittenberg and Leipzig state that “the ‘Gollum effect’ hinders research and careers.” They begin with the Pollyanna word “should” —
Scientific research should serve the good of mankind, which is why data is not kept under lock and key and findings are shared openly. “Unfortunately, the academic world does not always live up to this ideal. Possessiveness, exclusion, and the hoarding of data, resources and ideas are widespread issues,” explains Dr Jose Valdez, a biodiversity researcher at MLU and iDiv. The phenomenon is known as the “Gollum effect”, a term coined by the researchers themselves and inspired by the tragic character in “The Lord of the Rings” — a figure so fixated on a magic ring that his obsession pulls him into the abyss. “In science, possessive behaviour undermines scientific progress and disproportionately impacts early-career and less established researchers,” says Valdez.
In a survey of 563 scientists in 64 countries, half the respondents said they “had experienced the phenomenon themselves.” But didn’t we just learn that survey experiments suffer from lower credibility?
The consequences of the Gollum effect can be serious. Over two thirds of those surveyed reported significant career setbacks. Many were forced to abandon their research topics, change research groups and institutes, or even leave science altogether. Only a third of those who had experienced the Gollum effect said that they took any action to defend themselves. Nearly a fifth of respondents even admitted to likely having displayed Gollum-like behaviour themselves.
And Now This
Just when these problems were shattering trust in scientism, this appeared: AI fakery. Nature warns, “Fake AI images will cause headaches for journals.” Tests have shown that even experts and peer reviewers can be fooled by AI-generated images of tissues. Ralf Mrowka in Germany looked into methods of generating fake images.
The fabrications have reached a level of quality “that will give editors, reviewers and readers a hard time”, he says. “Once I have trained an AI model to generate this type of data, I can generate endless new fake images” and also make modifications to them, he adds.
Nature reporter Dalmeet Singh Chawla said, “The race is on to build tools that can detect fraudulent AI-generated images in research.” Perhaps they will succeed, but this may be an ongoing spy-vs-spy saga as engineers try to design better tools for distinguishing authenticity from fakery.
But it gets worse. Nature worries that “AI-generated literature reviews threaten scientific progress.” AI is getting so good it can generate fake scientific papers, graphics and all, that can fool reviewers. What’s more, even the reviewers can be fake! What’s coming next? Will humans turn over science to humanoid robots? Built and trained by fallible humans, would they not reflect the same fallibility of their creators?
Ethics for Me but Not for Thee
With hands over their self-righteous hearts, Christine Coughlin and Nancy M. P. King of Wake Forest University write at The Conversation that “Science requires ethical oversight” — after which they launch into a political tirade against the current U.S. administration for cutting NIH funds they deem necessary to prevent research abuse. It’s odd they had nothing to say about ethical abuses at the NIH under the prior administration (see here, here, and here).
Regardless of one’s political view, it must seem highly out of line for institutional science to be wholeheartedly supportive of one party and critical of the other, something I witness almost daily in my scouring of science headlines. This, I feel, is the major reason for the declining trust in science. Political partisanship damages the reputation of science and its ideal of objectivity.
The Solution: Integrity
None of us knows everything. We need the freedom to share ideas and debate them. That is why groupthink hinders science. As long as the nebulous “scientific community” acts politically partisan and censors skeptics of Darwinism and materialism, it will continue to lose public trust. Open debate can help bring our fallibilities to the surface, where they can be exposed and corrected. Each interlocutor, though, must value integrity above all.
Other writers here at Evolution News (Klinghoffer, Egnor, Gauger) have spoken out on the decline of integrity in science. Integrity means valuing truth over self. It means being honest with one’s own work when alone, being willing to stand against power and groupthink, and being willing to admit failure even when it costs. Integrity is a very un-Darwinian trait because it can decrease one’s fitness in a crazy world of liars and temptations.
Integrity is also un-Darwinian in that it cannot evolve. If shoved into a Hegelian triad it might theoretically evolve into its own antithesis, collapsing in a heap of confusion. If science departments in our schools and universities return to an emphasis on integrity in every course, some alert students may reason that integrity cannot be a product of natural selection.
Integrity by its nature is eternal. Knowledge can accumulate, but integrity must stand rock solid against the winds of change. Scientists have undermined their own credibility of late, but whenever integrity is nurtured and takes root in science, it will produce the fruit of trustworthiness.