Crowdsourcing Hypothesis Tests: Making transparent how design choices shape research results

dc.authorscopusid: 55946320600
dc.authorscopusid: 57222730871
dc.authorscopusid: 57862538600
dc.authorscopusid: 57203360628
dc.authorscopusid: 57190064552
dc.authorscopusid: 23984790800
dc.authorscopusid: 7103162936
dc.contributor.author: Landy, J.F.
dc.contributor.author: Jin, M.L.
dc.contributor.author: Ding, I.L.
dc.contributor.author: Viganola, D.
dc.contributor.author: Tierney, W.
dc.contributor.author: Dreber, A.
dc.contributor.author: Johannesson, M.
dc.date.accessioned: 2024-07-18T20:16:56Z
dc.date.available: 2024-07-18T20:16:56Z
dc.date.issued: 2020
dc.description.abstract: To what extent are research results influenced by subjective decisions that scientists make as they design studies? Fifteen research teams independently designed studies to answer five original research questions related to moral judgments, negotiations, and implicit cognition. Participants from 2 separate large samples (total N = 15,000) were then randomly assigned to complete 1 version of each study. Effect sizes varied dramatically across different sets of materials designed to test the same hypothesis: Materials from different teams rendered statistically significant effects in opposite directions for 4 of 5 hypotheses, with the narrowest range in estimates being d = −0.37 to +0.26. Meta-analysis and a Bayesian perspective on the results revealed overall support for 2 hypotheses and a lack of support for 3 hypotheses. Overall, practically none of the variability in effect sizes was attributable to the skill of the research team in designing materials, whereas considerable variability was attributable to the hypothesis being tested. In a forecasting survey, predictions of other scientists were significantly correlated with study results, both across and within hypotheses. Crowdsourced testing of research hypotheses helps reveal the true consistency of empirical support for a scientific claim. © 2020, American Psychological Association. [en_US]
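As context for the abstract above, the sketch below illustrates the kind of aggregation it describes: pooling one hypothesis's per-team effect sizes (Cohen's d) into an overall estimate while also quantifying between-team heterogeneity. This is a minimal illustrative sketch, not the authors' analysis code; the DerSimonian-Laird random-effects model is one common choice, assumed here for demonstration, and every effect size and sample size below is a hypothetical placeholder, not a result from the paper.

# Illustrative sketch only: DerSimonian-Laird random-effects meta-analysis
# over HYPOTHETICAL team-level effect sizes (Cohen's d). The numbers are
# made up for demonstration and are NOT results from the paper.
import numpy as np

# Hypothetical per-team estimates for one hypothesis (one d per materials set)
d = np.array([-0.30, -0.10, 0.05, 0.15, 0.25])   # effect sizes
n1 = n2 = np.full_like(d, 500.0)                 # per-group sample sizes

# Large-sample variance of Cohen's d (standard approximation)
v = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

# Fixed-effect weights, pooled estimate, and Q statistic for heterogeneity
w = 1.0 / v
d_fixed = np.sum(w * d) / np.sum(w)
q = np.sum(w * (d - d_fixed) ** 2)

# DerSimonian-Laird estimate of between-team variance tau^2
k = len(d)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (k - 1)) / c)

# Random-effects pooled estimate and its standard error
w_re = 1.0 / (v + tau2)
d_pooled = np.sum(w_re * d) / np.sum(w_re)
se_pooled = np.sqrt(1.0 / np.sum(w_re))

print(f"pooled d = {d_pooled:.3f} +/- {1.96 * se_pooled:.3f} (95% CI)")
print(f"tau^2 (between-team heterogeneity) = {tau2:.4f}")

Under this model, a tau^2 that is large relative to the within-team variances corresponds to the abstract's observation that estimates for the same hypothesis varied dramatically across the materials different teams designed.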
dc.identifier.doi: 10.1037/bul0000220
dc.identifier.endpage: 479 [en_US]
dc.identifier.issn: 0033-2909
dc.identifier.issue: 5 [en_US]
dc.identifier.pmid: 31944796 [en_US]
dc.identifier.scopus: 2-s2.0-85081412411 [en_US]
dc.identifier.scopusquality: Q1 [en_US]
dc.identifier.startpage: 451 [en_US]
dc.identifier.uri: https://doi.org/10.1037/bul0000220
dc.identifier.uri: https://hdl.handle.net/11411/6311
dc.identifier.volume: 146 [en_US]
dc.indekslendigikaynak: Scopus [en_US]
dc.indekslendigikaynak: PubMed [en_US]
dc.language.iso: en [en_US]
dc.publisher: American Psychological Association [en_US]
dc.relation.ispartof: Psychological Bulletin [en_US]
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member [en_US]
dc.rights: info:eu-repo/semantics/openAccess [en_US]
dc.subject: Conceptual Replications [en_US]
dc.subject: Crowdsourcing [en_US]
dc.subject: Forecasting [en_US]
dc.subject: Research Robustness [en_US]
dc.subject: Scientific Transparency [en_US]
dc.subject: Adult [en_US]
dc.subject: Human [en_US]
dc.subject: Methodology [en_US]
dc.subject: Procedures [en_US]
dc.subject: Psychology [en_US]
dc.subject: Randomization [en_US]
dc.subject: Humans [en_US]
dc.subject: Random Allocation [en_US]
dc.subject: Research Design [en_US]
dc.title: Crowdsourcing Hypothesis Tests: Making transparent how design choices shape research results
dc.type: Article
