© 2016 by Arthur Lupia and Brendan Nyhan.


Why preacceptance?


Science provides two main services to society: information through the data it collects and meaning through its rigorous analyses and interpretations of these data. When this process works as it should, science can help a wide range of individuals, communities, and organizations better understand the natural and social worlds and improve our quality of life.

 

Concerns about scientific practice

Today, however, important concerns are being raised about whether science is working as it should. Multiple studies document the distorting effects of questionable research practices and publication bias on the accuracy, replicability, and interpretability of findings across many scientific fields (e.g., Doucouliagos 2005; Ioannidis 2005; Masicampo and Lalande 2012; Dwan et al. 2013; Esarey and Wu N.d.). In particular, publication decisions appear to be closely related to whether manuscripts report statistically significant findings (Gerber and Malhotra 2008; Gerber et al. 2010; Franco, Malhotra, and Simonovits 2014, 2015), creating incentives for post hoc searches for statistical significance (“p-hacking”) and for submitting only those parts of a research agenda that produce significant results (the “file drawer problem”).

 

The combination of p-hacking, file drawer problems, and biases towards statistical significance can create a mismatch between what a published literature claims about a topic and what the true weight of the evidence would show if all of it were allowed to see the light of day. As a result, even a finding that seems to be the product of a scientific consensus could instead be an illusion built upon questionable research and publication practices.
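
To see how this mismatch can arise, consider a simple simulation. This sketch is ours, not drawn from any of the studies cited above; the number of studies, sample sizes, true effect, and significance threshold are all illustrative assumptions. When only statistically significant results reach publication, the published record can suggest a substantial effect even when the true effect is zero.

```python
# Minimal simulation of the file drawer problem (illustrative assumptions only):
# 1,000 studies of a true effect of zero, each a two-arm comparison with 50
# observations per arm, where only results with p < 0.05 are "published".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_per_arm, true_effect = 1_000, 50, 0.0

published_effects = []
for _ in range(n_studies):
    treatment = rng.normal(true_effect, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    _, p_value = stats.ttest_ind(treatment, control)
    if p_value < 0.05:  # the file drawer: null results never reach a journal
        published_effects.append(treatment.mean() - control.mean())

print(f"True effect: {true_effect}")
print(f"Share of studies 'published': {len(published_effects) / n_studies:.1%}")
print(f"Mean magnitude of 'published' effects: {np.mean(np.abs(published_effects)):.2f}")
```

With a true effect of zero, roughly five percent of the simulated studies clear the significance threshold, and the effects they report are far larger in magnitude than the true effect. That gap between the "published" record and the full set of results is the kind of illusion described above.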

 

The preregistration approach

              

Preregistration is one way to mitigate these problems (e.g., Gerber and Malhotra 2008; Humphreys, de la Sierra, and van der Windt 2013; Monogan 2013; Miguel et al. 2014). With preregistration, a scholar specifies hypotheses and analyses prior to obtaining data. In principle, this approach prevents authors from misrepresenting their initial analytic intent and dissuades them from engaging in post hoc specification searches. (Scholarly papers may, of course, include unplanned or exploratory analyses. Preregistration helps readers differentiate between such efforts and hypothesis testing.)
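
To make the idea concrete, the sketch below shows one way a confirmatory analysis could be written down before any data exist. Everything in it is hypothetical: the hypothesis, the model formula, the variable names, and the run_prespecified_test helper are invented for illustration and do not come from any actual registry, template, or study.

```python
# A hypothetical preregistration sketch: the confirmatory hypothesis and model are
# fixed before any data are collected. All names below are invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

PREREGISTERED_PLAN = {
    "hypothesis": "H1: campaign contact increases self-reported turnout.",
    "model": "turnout ~ contacted + age + education",  # written down before data collection
    "test": "two-sided test of the 'contacted' coefficient at alpha = 0.05",
}

def run_prespecified_test(data: pd.DataFrame):
    """Estimate only the model specified in the plan and report it as-is,
    whether or not the result is statistically significant."""
    fit = smf.ols(PREREGISTERED_PLAN["model"], data=data).fit()
    print(fit.summary().tables[1])
    return fit

# Anything not listed in PREREGISTERED_PLAN (alternative specifications, subgroups,
# additional outcomes) would be reported separately and labeled as exploratory.
```

The point of such a sketch is that the confirmatory model is fixed in advance; any additional analysis run after the data arrive is reported separately and labeled as exploratory, exactly the distinction preregistration is meant to help readers draw.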

 

However, preregistration is not sufficient to overcome publication bias. Even in fields where preregistration is increasing, biases towards statistical significance in publication processes can still distort a scientific field’s knowledge base (e.g., Dwan et al. 2013). In fields that require preregistration of clinical trials, for example, many trials are not submitted for publication if the results are unexpected or null (e.g., Turner et al. 2008), while those studies that are published often selectively report the outcomes they observe (Drysdale 2016).

 

The value of preacceptance


To further correct the types of incentives described above, we believe it is necessary to expand the use of preacceptance in scientific publishing. Under this approach, authors submit articles that motivate and specify a research design before data for the study is available. These articles are reviewed and either rejected or provisionally accepted without authors, reviewers, or editors knowing the results of the study (e.g., Nosek and Lakens 2015; Nyhan 2015).

 

When journals allow preacceptance, reviewers can evaluate manuscripts based solely on the value of the theoretical contribution and the strength of the research design, not on the presence or absence of statistical significance. After a preaccepted article’s data is collected, the article is reviewed again to ensure that the results of the preregistered analyses are reported accurately and conform to the content of the initial manuscript. For example, if a preaccepted study’s preregistered design produces a null result, the researcher is expected to present that result. The researcher may also describe other analyses that place the initial findings in a broader context, but these must be clearly labeled as exploratory and distinguished from prespecified tests. Such processes will lead to more accurate presentations of scientific research by reducing the publication pressures that push researchers to partially represent or misrepresent their findings.

 

While preacceptance has the potential to boost the professional benefits of preregistration and other integrity-increasing practices, very little of that potential is currently being realized. Preacceptance is still relatively rare in academic journals, especially in political science. The Registered Reports initiative has made progress in popularizing this approach, especially in psychology and neuroscience, but it remains unfamiliar to most scholars.

 

This competition is designed to create new incentives for scholars and journals in political science to publish high-quality articles that use preregistered research designs to study an important upcoming event (the 2016 election). The idea is not only to produce important knowledge about the event itself but also to provide a model for increasing research integrity and changing publication and review processes in the discipline and in other fields.

 

In addition, this project can offer a constructive framework for new scientific communities to consider preregistration. Many people believe that preregistration and preacceptance can only be used in experimental research, but this is not the case (see, e.g., Monogan 2013). The 2016 Election Research Preacceptance Competition provides a test case for people who want to observe how preregistration and preacceptance affect the content and credibility of observational (non-experimental) research, a potentially important step in broadening the appeal of this approach to the many scientific fields and disciplines that rely primarily on observational data.

 

References

 

Doucouliagos, Chris. 2005. “Publication bias in the economic freedom and economic growth literature.” Journal of Economic Surveys 19 (3): 367–387.

 

Drysdale, Henry. 2016. “Post-hoc ‘pre-specification’ and undeclared deferral of results: a broken record in the making.” COMPARE, January 29, 2016.

 

Dwan, Kerry, Carrol Gamble, Paula R. Williamson, and Jamie J. Kirkham. 2013. “Systematic review of the empirical evidence of study publication bias and outcome reporting bias—An updated review.” PLOS ONE 8 (7): e66844.

 

Esarey, Justin, and Ahra Wu. N.d. “The fault in our stars: Measuring and correcting significance bias in political science.” Unpublished manuscript.

 

Franco, Annie, Neil Malhotra, and Gabor Simonovits. 2014. “Publication bias in the social sciences: Unlocking the file drawer.” Science 345 (6203): 1502–1505.

 

Franco, Annie, Neil Malhotra, and Gabor Simonovits. 2015. “Underreporting in Political Science Survey Experiments: Comparing Questionnaires to Published Results.” Political Analysis 23 (2): 306–312.

 

Gerber, Alan, and Neil Malhotra. 2008. “Do Statistical Reporting Standards Affect What Is Published? Publication Bias in Two Leading Political Science Journals.” Quarterly Journal of Political Science 3: 313–326.

 

Gerber, Alan S., Neil Malhotra, Conor M. Dowling, and David Doherty. 2010. “Publication bias in two political behavior literatures.” American Politics Research 38 (4): 591–613.

 

Humphreys, Macartan, Raul Sanchez de la Sierra, and Peter van der Windt. 2013. “Fishing, commitment, and communication: A proposal for comprehensive nonbinding research registration.” Political Analysis 21 (1): 1–20.

 

Ioannidis, John P.A. 2005. “Why most published research findings are false.” PLoS Medicine 2 (8): e124.

 

Masicampo, E.J., and Daniel R. Lalande. 2012. “A peculiar prevalence of p values just below .05.” Quarterly Journal of Experimental Psychology 65 (11): 2271–2279.

 

Miguel, E., C. Camerer, K. Casey, J. Cohen, K. M. Esterling, A. Gerber, R. Glennerster, D. P. Green, M. Humphreys, G. Imbens, D. Laitin, T. Madon, L. Nelson, B. A. Nosek, M. Petersen, R. Sedlmayr, J. P. Simmons, U. Simonsohn, and M. Van der Laan. 2014. “Promoting Transparency in Social Science Research.” Science 343 (6166): 30–31.

 

Monogan, James E. 2013. “A case for registering studies of political outcomes: An application in the 2010 House elections.” Political Analysis 21 (1): 21–37.

 

Nosek, Brian A., and Daniel Lakens. 2015. “Registered reports.” Social Psychology 45: 137–141.

 

Nyhan, Brendan. 2015. “Increasing the Credibility of Political Science Research: A Proposal for Journal Reforms.” PS: Political Science & Politics 48 (S1): 78–83.

 

Turner, Erick H., Annette M. Matthews, Eftihia Linardatos, Robert A. Tell, and Robert Rosenthal. 2008. “Selective publication of antidepressant trials and its influence on apparent efficacy.” New England Journal of Medicine 358 (3): 252–260.