Image: Professor Marcus Munafò standing at a podium at the front of the Psychology Lecture Theatre. The lecture slide shows the title “Why Science Is Not Necessarily Self-Correcting”, from the 2012 paper by John P. A. Ioannidis.
Research Ecosystems and Research Quality: RRCam Lecture by Marcus Munafò
05 March 2026
On 13 February 2026, Professor Marcus Munafò, Executive Director of the UK Reproducibility Network, delivered a thought-provoking lecture titled “Research Ecosystems and Research Quality” as part of the Reproducible Research Cambridge community events series. The session challenged us to examine not only how research is conducted, but how the structures surrounding research shape the quality of the knowledge we produce.
At the heart of the lecture was a simple question: what enables research to be self-correcting? Replication studies, openness to admitting error, and transparent reporting are often cited as pillars of scientific progress. Yet, as Professor Munafò pointed out, these ideals do not always align with human nature or with the incentives embedded in academic systems. Admitting we are wrong is difficult; funding and promotion structures often reward novelty over correction.
He highlighted how research can mistakenly elevate false positives. We are predisposed to see patterns even when none exist; he recalled the famous “face on Mars” as a reminder of our tendency to impose meaning on noise. In principle, the scientific method is designed to guard against this. In practice, however, the method researchers actually adopt is shaped by institutional incentives: publication counts, citation metrics, and career progression criteria. Null results, for example, are less likely to be cited, yet citation metrics heavily influence evaluation, hiring, and promotion. This creates a structural imbalance: researchers are incentivised to produce positive, striking findings rather than careful confirmations or refutations.
Professor Munafò also invited us to reconsider the format of scientific communication itself. The structure of journal articles has changed remarkably little in 400 years. In an age of digital infrastructure, do we still need journals in their traditional form? Emerging platforms such as Octopus offer alternative models that break research outputs into discrete components, allowing contributions to be published incrementally and transparently. Rather than focusing solely on detecting errors after publication, Professor Munafò emphasised improving systems so that errors are less likely to occur in the first place. Automation, structured data practices, early-stage review, and open science frameworks all play a role.
Ultimately, the lecture reframed research quality as an ecosystem issue rather than an individual one. Scientists operate within structures that shape behaviour. If we want more replication, greater transparency, and fewer false positives, we must align incentives with those goals. Research will only be truly self-correcting when the systems around it reward correction, rigour and openness, not just novelty and citation counts.
