
If at First You Do Succeed, Try, Try Again: The Importance of Reproducibility in Behavior Science
Sunday, May 26, 2019
8:00 AM–8:50 AM
Swissôtel, Lucerne Ballroom Level, Lucerne 3
Area: PCH/TBA; Domain: Theory
Chair: Kelly M. Banna (Millersville University of Pennsylvania)
Discussant: M. Christopher Newland (Auburn University)
Abstract: In 2015, the Open Science Collaboration (OSC) published a paper in Science describing a massive undertaking to replicate 100 previously published studies in the areas of social and cognitive psychology. The OSC reported a success rate under 50%, prompting concern over what is now known as the “replication crisis” in psychology. Subsequent research has yielded similar results in economics (Camerer et al., 2016) and the social sciences more broadly (Camerer et al., 2018), and recent methodological trends in applied behavior-analytic research may be leading the field into a reproducibility crisis of its own. This is problematic, as reproducibility is the gold standard by which treatment effects and relations among variables are judged to be “real.” One silver lining is a renewed appreciation within the scientific community of the importance of replication, which has been undervalued for decades. This symposium will address the following topics: a) identifying and leveraging contingencies that support replication-based research, b) the strengths (and limitations) of behavior-analytic methods in evaluating phenomena that have been difficult to reproduce using group designs, and c) correcting the methodological “drift” in applied research that may be compromising the reproducibility of our results.
Instruction Level: Intermediate
Keyword(s): Applied Research, Replication Crisis, Reproducibility, Research Methods

Replicators, Mount Up!: A Behavior Analytic Approach to Addressing the Reproducibility Crisis
KELLY M. BANNA (Millersville University of Pennsylvania)
Abstract: Recent research has led to concerns that the social sciences are in the midst of a replication crisis (Open Science Collaboration, 2015; Camerer et al., 2016; Camerer et al., 2018). This presentation will discuss some of the contingencies that have led to this point and identify two ways in which behavior analysts can address the problem. First, those of us who hold faculty positions at smaller, primarily undergraduate institutions are ideally situated to design and carry out replication studies, largely because of differences in the contingencies associated with career advancement (e.g., funding, scholarship). Several examples of how this can be accomplished in the context of undergraduate honors projects and laboratory courses will be provided. Second, where appropriate, behavior analysts can a) apply our unique skill set (e.g., single-subject research designs) to the study of phenomena whose reliability has been called into question and b) increase efforts to expand the use of these methods in other disciplines.

Do We Know Less Than We Think We Do?: Reproducibility in the Applied Behavior Analysis Literature
JAMES M. JOHNSTON (Auburn University (Retired))
Abstract: When Applied Behavior Analysis (ABA) emerged from its parent science, it carried forward a comprehensive and demonstrably effective set of methodological practices, summarized in Sidman’s 1960 book, Tactics of Scientific Research. As applied research interests developed over the ensuing decades, however, ABA research incorporated a number of methodological practices that have increasingly raised concerns about the reproducibility of findings in this literature. These practices are in one sense understandable in light of the applied agenda and the complications of applied research circumstances. Nevertheless, they conflict with the long-held best practices of the field’s research methods and unavoidably lead to concerns about the reproducibility of research outcomes. This presentation will review two of these problem areas: 1) using discontinuous measurement procedures and 2) accepting limited pictures of responding in compared conditions. These and other methodological challenges suggest that research findings in the ABA literature may be less informative than we believe. One risk of this tendency is that practitioners may increasingly come to view ABA research more as a source of procedural ideas or treatment options than as a body of findings that can be trusted to yield dependable outcomes.