Vox’s Kelsey Piper discusses the replication crisis in “Future Perfect” this week, elaborating on Alvaro de Menard’s (probably a nom-de-blog) recent post “What’s wrong with social science and how to fix it…” in the context of other Vox coverage of replication and science.
It’s a good piece. If you’ve read Menard you won’t learn a lot, but it’s got the Vox flair and reaches a wider audience. It also adds some more context, including previous Vox articles.
Two and a half corrections:
- It looks like Piper conflated Replication Markets with the larger DARPA SCORE program that funds us. We are just one part.
- Piper neglects Menard’s caveats: his post is based on the forecasts of Replication Markets, and the actual replication results are only now arriving. Past markets have been about 70% accurate on average, so there is some wiggle room.
- Even replication itself is a noisy process. With hundreds of replications now, we are pretty sure only about half of claims replicate, but the result for any single claim could be a chance mistake.
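To see why a single replication result can mislead, here is a minimal simulation. All the numbers are illustrative assumptions, not figures from SCORE: 80% power to detect a real effect, a 5% false-positive rate, and a 50% base rate of true claims (roughly matching the “about half replicate” estimate above):

```python
import random

random.seed(0)

POWER = 0.80      # assumed chance a replication detects a real effect
ALPHA = 0.05      # assumed chance a null effect "replicates" by luck
BASE_RATE = 0.50  # assumed share of claims that are actually true

def replicate_once(claim_is_true):
    """Simulate one noisy replication attempt."""
    p_success = POWER if claim_is_true else ALPHA
    return random.random() < p_success

# Simulate many claims; count how often a single replication
# gives the "wrong" answer about the claim.
trials = 100_000
misleading = 0
for _ in range(trials):
    is_true = random.random() < BASE_RATE
    if replicate_once(is_true) != is_true:
        misleading += 1

print(f"Single replications that mislead: {misleading / trials:.1%}")
```

Under these assumptions roughly 12% of single replications point the wrong way (0.5 × 0.20 + 0.5 × 0.05), which is why any one claim’s result could be a chance mistake even when the aggregate picture is solid.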
A couple of quotes:
The problem (summarizing Menard):
If scientists are pretty good at predicting whether a paper replicates, how can it be the case that they are as likely to cite a bad paper as a good one? Menard theorizes that many scientists don’t thoroughly check — or even read — papers once published, expecting that if they’re peer-reviewed, they’re fine. Bad papers are published by a peer-review process that is not adequate to catch them — and once they’re published, they are not penalized for being bad papers.
The problem (from a 2016 Vox piece):
We heard back from 270 scientists all over the world, including graduate students, senior professors, laboratory heads, and Fields Medalists. They told us that, in a variety of ways, their careers are being hijacked by perverse incentives. The result is bad science.
A few of the articles linked by Piper:
Previous Vox pieces
- Resnick’s 2017 article about p-values and the .005 suggestion.
The real problem is the culture of science.
- Belluz, Plumer, & Resnick’s 2016 piece “The 7 biggest problems facing science…”.
Some of the papers on forecasting replications:
- Camerer et al. 2018 in Nature: markets were about 86% accurate in predicting these replications (and about 70% overall including three previous studies).
- Altmejd et al. 2019 in PLOS ONE: predicting the previous social science replications using statistical models, about 70% accurate (AUC of 0.77), based on simple features like effect size and whether the result was an interaction effect.
- Hoogeveen, Sarafoglou, Wagenmakers 2020 in Advances … in Psych. Sci.: Laypeople can predict social science replications about 59% of the time.
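The papers above report both accuracy and AUC, which measure different things: accuracy counts correct yes/no calls at a 0.5 threshold, while AUC asks how often a randomly chosen replicating claim is ranked above a non-replicating one. A small sketch with made-up forecasts (none of these numbers come from the papers) shows the two can diverge:

```python
# Hypothetical market-style forecasts (probability of replication)
# paired with made-up replication outcomes (1 = replicated).
forecasts = [0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.3, 0.2]
outcomes  = [1,   1,   0,   1,   0,   1,    0,   0]

# Accuracy: call "replicates" when forecast > 0.5, count correct calls.
calls = [1 if f > 0.5 else 0 for f in forecasts]
accuracy = sum(c == o for c, o in zip(calls, outcomes)) / len(outcomes)

# AUC: fraction of (replicating, non-replicating) pairs that the
# forecasts rank correctly; ties count as half.
pos = [f for f, o in zip(forecasts, outcomes) if o == 1]
neg = [f for f, o in zip(forecasts, outcomes) if o == 0]
auc = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg) / (len(pos) * len(neg))

print(f"accuracy = {accuracy:.2f}, AUC = {auc:.2f}")
# -> accuracy = 0.75, AUC = 0.81
```

So a model can have an AUC of 0.77 while its headline accuracy sits near 70%: the two summarize the same forecasts from different angles.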