Predicting and reasoning about replicability using structured groups

Paper by Bonnie Wintle, Eden T. Smith, Martin Bush, Fallon Mody, David P. Wilkinson, Anca M. Hanea, Alex Marcoci, Hannah Fraser, Victoria Hemming, Felix Singleton Thorn, Marissa F. McBride, Elliot Gould, Andrew Head, Daniel G. Hamilton, Steven Kambouris, Libby Rumpff, Rink Hoekstra, Mark A. Burgman, Fiona Fidler
Published on 07 June 2023

Abstract

This paper explores judgements about the replicability of social and behavioural sciences research and what drives those judgements. Using a mixed methods approach, it draws on qualitative and quantitative data elicited from groups using a structured approach called the IDEA protocol ('investigate', 'discuss', 'estimate' and 'aggregate'). Five groups of five people with relevant domain expertise evaluated 25 research claims that were subject to at least one replication study.
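The abstract does not spell out how the IDEA steps translate into numbers, so the following is a minimal illustrative sketch in Python. It assumes probability judgements in [0, 1] and an unweighted mean as the aggregation rule; the paper itself may use a different elicitation scale or aggregator.

# Minimal sketch of the IDEA elicitation rounds, under the assumptions above.
# 'Investigate' and 'Discuss' happen offline; this only models 'Estimate'
# (two rounds: before and after feedback/discussion) and 'Aggregate'.
from statistics import mean

def idea_round(first_estimates, revised_estimates):
    """Aggregate one claim's probability judgements across the IDEA steps."""
    return {
        "initial_mean": mean(first_estimates),
        "aggregate": mean(revised_estimates),  # final group judgement
    }

# Five participants judge one claim, then revise after feedback and discussion.
print(idea_round([0.6, 0.4, 0.7, 0.5, 0.55], [0.6, 0.5, 0.65, 0.55, 0.6]))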

Participants assessed the probability that each of the 25 research claims would replicate (i.e. that a replication study would find a statistically significant result in the same direction as the original study) and described the reasoning behind those judgements. We quantitatively analysed possible correlates of predictive accuracy, including self-rated expertise and updating of judgements after feedback and discussion. We qualitatively analysed the reasoning data to explore the cues, heuristics and patterns of reasoning used by participants. Participants achieved 84% classification accuracy in predicting replicability. Those who engaged in a greater breadth of reasoning provided more accurate replicability judgements. Some reasons were more commonly invoked by more accurate participants, such as 'effect size' and 'reputation' (e.g. of the field of research). There was also some evidence of a relationship between statistical literacy and accuracy.
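As a rough illustration of the accuracy measure, the Python sketch below scores a set of elicited probabilities against known replication outcomes, assuming a 0.5 decision threshold. The threshold and the toy data are our assumptions for illustration, not taken from the paper.

# A claim is predicted to replicate when the elicited probability exceeds
# the threshold; accuracy is the share of claims where that prediction
# matches the observed replication outcome.
def classification_accuracy(probabilities, outcomes, threshold=0.5):
    predictions = [p > threshold for p in probabilities]
    correct = sum(pred == out for pred, out in zip(predictions, outcomes))
    return correct / len(outcomes)

# Toy data: elicited probabilities for five claims and whether each replicated.
probs = [0.8, 0.3, 0.6, 0.2, 0.7]
replicated = [True, False, True, True, True]
print(classification_accuracy(probs, replicated))  # 0.8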

Read full paper
