Faculty Articles

Crowdsourcing Hypothesis Tests: Making Transparent How Design Choices Shape Research Results

Author(s)

Justin F. Landy, Nova Southeastern University
Miaolei Liam Jia, University of Warwick
Isabel L. Ding, National University of Singapore
Domenico Viganola, George Mason University
Warren Tierney, University of Limerick
Anna Dreber, Stockholm School of Economics
Magnus Johannesson, Stockholm School of Economics
Thomas Pfeiffer, Massey University
Charles Ebersole, University of Virginia
Quentin F. Gronau, University of Amsterdam
Alexander Ly, University of Amsterdam
Don van den Bergh, University of Amsterdam
Maarten Marsman, University of Amsterdam
Koen Derks, Nyenrode Business University
Eric-Jan Wagenmakers, University of Amsterdam
Andrew Proctor, Stockholm School of Economics
Daniel M. Bartels, University of Chicago
Christopher W. Bauman, University of California, Irvine
William J. Brady, New York University
Felix Cheung, University of Hong Kong
Andrei Cimpian, New York University
Simone Dohle, University of Cologne
M Brent Donnellan, Michigan State University
Adam Hahn, University of Cologne
Michael P. Hall, University of Michigan
William Jiménez-Leal, University of the Andes
David J. Johnson, University of Maryland at College Park
Richard E. Lucas, Michigan State University
Benoît Monin, Stanford University
Andres Montealegre, University of the Andes
Elizabeth Mullen, San Jose State University
Jun Pang, Renmin University of China
Jennifer Ray, New York University
Diego A. Reinero, New York University
Jesse Reynolds, Stanford University
Walter J. Sowden, University of Michigan
Daniel Storage, University of Denver
Runkun Su, National University of Singapore
Christina M. Tworek, HarrisX
Jay Van Bavel, New York University
Daniel Walco, New York Yankees
Julian Wills, New York University
Xiaobing Xu, Hainan University
Kai Chi Yam, National University of Singapore
William A. Cunningham, University of Toronto
Martin Schweinsberg, European School of Management and Technology
Molly Urwitz, Stockholm School of Economics
Eric L. Uhlmann, INSEAD

Document Type

Article

Publication Date

5-2020

Publication Title

Psychological Bulletin

Volume

146

Issue/Number

5

ISSN

1939-1455

Abstract/Excerpt

To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from two separate large samples (total N > 15,000) were then randomly assigned to complete one version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: materials from different teams rendered statistically significant effects in opposite directions for four out of five hypotheses, with the narrowest range in estimates being d = -0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for two hypotheses, and a lack of support for three hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, while considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim.
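Note on the effect-size metric: the d values quoted above are standardized mean differences (Cohen's d). As a point of reference only, the conventional textbook definition (not necessarily the exact estimator used in the article) is

d = \frac{\bar{X}_1 - \bar{X}_2}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}

so that, for example, d = +0.26 corresponds to the two conditions' means differing by roughly a quarter of a pooled standard deviation.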

DOI

https://doi.org/10.1037/bul0000220

PubMed ID

31944796

Peer Reviewed
