Replication and Reproducibility

Session Information

03 Nov 2018, 01:30 PM - 03:30 PM (America/Los_Angeles)
Venue: University (Fourth Floor Union Street Tower)

Presentations

Why Replication is Overrated

Philosophy of Science, 01:30 PM - 02:00 PM (America/Los_Angeles)
Uljana Feest (Leibniz Universität Hannover)
Current debates about the replication crisis in psychology take it for granted that direct replication is valuable and focus their attention on questionable research practices in regard to statistical analyses. This paper takes a broader look at the notion of replication as such. It is argued that all experimentation/replication involves individuation judgments and that research in experimental psychology frequently turns on probing the adequacy of such judgments. In this vein, I highlight the ubiquity of conceptual and material questions in research, and I argue that replication is not as central to psychological research as it is sometimes taken to be.

The Causes of the Reproducibility Crisis

Philosophy of Science, 02:00 PM - 02:30 PM (America/Los_Angeles)
Rafael Ventura (Duke University)
Science is a social enterprise. To make progress, scientists must assume that the results of others are at least in principle reproducible. But empirical studies show that researchers across different disciplines often fail to reproduce results from previous experiments. To explain this lack of reproducibility, two main hypotheses have been proposed: (1) publication bias is the main cause of low reproducibility; (2) a lax standard of statistical significance is the main cause of low reproducibility. Here, I present a model to adjudicate between these two hypotheses. Model results suggest that publication bias may play a more important role than current levels of statistical significance in promoting low reproducibility.
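The contrast between the two hypotheses can be made concrete with a small simulation. The sketch below is not Ventura's model: all parameter values (base rate of true effects, effect size, sample size, number of re-runs) are illustrative assumptions, and publication bias is operationalized as labs quietly re-running a study until a significant result can be published.

```python
# Toy simulation (not Ventura's model): how publication bias and the
# significance threshold each affect the replication rate of published
# findings. All parameter values are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def replication_rate(alpha, max_tries, n_labs=50_000,
                     base_rate=0.1, effect=0.3, n=30):
    """Fraction of published significant findings whose exact replication
    is again significant (one-sided z-test, normal approximation)."""
    crit = stats.norm.ppf(1 - alpha)               # one-sided z cutoff
    real = rng.random(n_labs) < base_rate          # which hypotheses are true
    shift = np.where(real, effect * np.sqrt(n), 0.0)

    # Publication bias: a lab re-runs the study up to `max_tries` times
    # and publishes as soon as any run is significant (nulls are shelved).
    z = rng.normal(shift[:, None], 1.0, size=(n_labs, max_tries))
    published = (z > crit).any(axis=1)

    # One exact replication of each published finding, reported as-is.
    replicated = rng.normal(shift, 1.0) > crit
    return replicated[published].mean()

for label, alpha, tries in [("no bias,     alpha=0.05 ", 0.05, 1),
                            ("strong bias, alpha=0.05 ", 0.05, 5),
                            ("strong bias, alpha=0.005", 0.005, 5)]:
    print(f"{label}: replication rate = {replication_rate(alpha, tries):.2f}")
```

Under these assumptions, the replication rate is driven by the share of false positives entering the literature, which both the bias mechanism and the threshold modulate; comparing the printed rates across scenarios is the toy analogue of the adjudication the abstract describes.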

Self-Correction in Science: Meta-Analysis, Bias and Social Structure

Philosophy of Science, 02:30 PM - 03:00 PM (America/Los_Angeles)
Justin Bruner (Australian National University), Bennett Holman (Underwood International College, Yonsei University)
Concern over the reproducibility of experimental work in the social sciences has motivated some to re-examine the extent to which science can be said to be self-correcting. We consider a recent argument put forth by Romero (2016) that science is unlikely to self-correct because of its social structure and the norms that govern publication practices. We contend this understanding of scientific self-correction is misguided and argue that self-correction is possible but requires both a norm of truth seeking and a commitment to the development of new inferential techniques and data aggregation procedures.
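As a concrete example of the kind of data-aggregation procedure at issue, the sketch below pools study-level estimates with a standard fixed-effect (inverse-variance-weighted) meta-analysis. The method is generic, not Bruner and Holman's own proposal, and the per-study numbers are made up for illustration.

```python
# Generic fixed-effect meta-analysis via inverse-variance weighting.
# The per-study estimates and standard errors are made-up illustrations.
import numpy as np

estimates = np.array([0.42, 0.15, 0.30, 0.55, 0.10])  # study effect sizes
std_errs  = np.array([0.20, 0.10, 0.15, 0.25, 0.08])  # study standard errors

weights   = 1.0 / std_errs**2                 # precision = 1 / variance
pooled    = np.sum(weights * estimates) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect = {pooled:.3f}, 95% CI = "
      f"[{pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f}]")
```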

The Replication Crisis in Psychology and Its Constructive Role in Philosophy of Statistics

Philosophy of Science, 03:00 PM - 03:30 PM (America/Los_Angeles)
Deborah Mayo (Virginia Tech)
This paper discusses the 2011-2015 Reproducibility Project, an attempt to replicate published statistically significant results in psychology. We set out key elements of significance tests, which are often misunderstood. While intended to bound the probabilities of erroneous interpretations of data, this error control is nullified by cherry-picking, multiple testing, and other biasing selection effects. However, the reason to question the resulting inference is not a matter of poor long-run error rates, but rather that it has not been well-tested by these data. This provides a rationale, never made clear by significance testers, as to the inferential relevance of error probabilities.
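The point about biasing selection effects nullifying nominal error control can be illustrated with a short simulation: a researcher who runs k independent tests on pure noise and reports any significant one faces a familywise false-positive rate of 1 - (1 - alpha)^k, far above the nominal alpha. The parameter values below are illustrative, not drawn from the paper.

```python
# How cherry-picking across multiple tests inflates the false-positive
# rate beyond the nominal alpha. Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
alpha, k, trials = 0.05, 10, 100_000

# Under the null, p-values are uniform on [0, 1]; draw k per "study".
p = rng.random((trials, k))
familywise = (p < alpha).any(axis=1).mean()   # report any significant test

print(f"nominal alpha:             {alpha}")
print(f"simulated familywise rate: {familywise:.3f}")
print(f"analytic 1-(1-a)^k:        {1 - (1 - alpha)**k:.3f}")
```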