Association for Behavior Analysis International

The Association for Behavior Analysis International® (ABAI) is a nonprofit membership organization with the mission to contribute to the well-being of society by developing, enhancing, and supporting the growth and vitality of the science of behavior analysis through research, education, and practice.

44th Annual Convention; San Diego, CA; 2018


Symposium #334
CE Offered: BACB
Recent Advances on the Use, Analysis, and Validity of Single-Case Designs in Practice and Research
Sunday, May 27, 2018
5:00 PM–5:50 PM
Marriott Marquis, Grand Ballroom 10-13
Area: PCH/PRA; Domain: Translational
Chair: Marc J. Lanovaz (Université de Montréal)
CE Instructor: Marc J. Lanovaz, Ph.D.
Abstract: Despite the widespread adoption of single-case designs by behavior analysts, there is still considerable debate about how to use, analyze, and report the data. Research on the topic is important for developing empirically derived guidelines and criteria that can support practitioners and researchers in their decisions. The purpose of this symposium is to address some of these issues by examining recent advances in the use, analysis, and validity of single-case designs in practice and research. The first presentation will examine guidelines for reporting the results of multiple baseline designs and the extent to which these guidelines have been adopted by applied researchers. The second presentation will describe the implicit criteria that single-case researchers use to determine the effects of independent variables within AB and multiple baseline designs. Finally, the third presentation will review previously published data to examine whether using AB designs in practical settings may be appropriate. Altogether, the presentations will provide an overview of recent research on the use and analysis of single-case designs.

Instruction Level: Intermediate
Keyword(s): AB design, Data analysis, Multiple baseline, Single-case designs
Target Audience: Practicing behavior analysts and researchers

 
Application of Multiple Baseline Designs in Behavior Analytic Research: Evidence for the Influence of New Guidelines
(Applied Research)
JODI COON (Auburn University), John T. Rapp (Auburn University)
Abstract: The multiple baseline (MBL) design is a single-case experimental design (SCED) with both research and applied utility. Although the concurrent and nonconcurrent MBL variants are both valid designs, each rules out different threats to internal validity. To help clarify these differences, Carr (2005) provided guidelines for graphically depicting and distinguishing between concurrent MBLs (CMBLs) and nonconcurrent MBLs. This study assessed the extent to which Carr’s guidelines have been adopted by examining SCED studies published in three behavior-analytic journals from 2000 to 2015; a total of 1,636 articles were reviewed. Results show increases in researchers’ adherence to the guidelines provided by Carr (2005). For example, from 2000 to 2005, no graphed CMBL was described as a CMBL in the respective manuscript. After 2006, there was a substantial increase in CMBL specification in most years; however, the percentage of CMBL graphs specified as such in the manuscript remained consistently below 50%. These findings suggest that SCED researchers adhere more closely to Carr’s guidelines for graphically depicting CMBLs than for specifying the use of a CMBL in the manuscript; however, adherence to either guideline is not yet optimal. As a whole, the results suggest that the stipulations set forth by Carr influenced research practice: after 2005, researchers not only specified the MBL variant more often but also aligned their data in a way that was congruent with the specified variant to a greater extent.
 
Criteria for Determining Behavior Change in AB and Multiple Baseline Designs
(Applied Research)
MARISSA A. NOVOTNY (University of South Florida), Andrew L. Samaha (University of South Florida), Diego Valbuena (University of South Florida)
Abstract: This study attempts to describe the implicit criteria researchers use to identify effects in AB and multiple baseline (MBL) designs. We extracted raw data from 100 articles published across 36 journals between 2012 and 2015 and calculated the effect size, percentage of overlapping data points, and standard deviation for each tier of 177 MBL graphs. Data were then separated according to whether the authors reported that the intervention was effective, and the mean effect size, percentage of overlap, and standard deviation were calculated for each group. Results showed no observable difference in standard deviation between graphs for which the authors reported an effect and those for which they did not. However, the effect size was greater, and the percentage of overlap between baseline and treatment data smaller, when authors reported an effect than when they did not. These results indicate that, when identifying effects, authors may take into consideration data features that roughly correspond to the number of overlapping data points and the overall increase or decrease between baseline and treatment phases.
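
As a rough illustration of the kinds of metrics described above, the Python sketch below computes a standardized mean difference, the percentage of treatment points that overlap the baseline range, and the phase standard deviations for a single tier of hypothetical data. The function name, data layout, and exact metric definitions are assumptions for illustration, not the authors' actual procedure.

import statistics

def tier_metrics(baseline, treatment):
    # Standardized mean difference: change in level divided by the
    # baseline standard deviation (one of several possible estimators;
    # assumes a nonconstant baseline so the divisor is not zero).
    base_sd = statistics.pstdev(baseline)
    effect_size = (statistics.mean(treatment) - statistics.mean(baseline)) / base_sd

    # Percentage of treatment points that do not exceed the highest
    # baseline point, assuming the behavior is expected to increase.
    ceiling = max(baseline)
    overlap = 100 * sum(y <= ceiling for y in treatment) / len(treatment)

    return {
        "effect_size": effect_size,
        "percent_overlap": overlap,
        "baseline_sd": base_sd,
        "treatment_sd": statistics.pstdev(treatment),
    }

# Hypothetical tier with a clear level change: large effect, zero overlap.
print(tier_metrics(baseline=[2, 3, 2, 4], treatment=[8, 9, 7, 10, 9]))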
 

Using Single-Case Designs in Practical Settings: Is Replication Always Necessary?
(Service Delivery)
MARC J. LANOVAZ (Université de Montréal), Stephanie Turgeon (Université de Montréal), Patrick Cardinal (École de technologie supérieure), Tara L. Sankey (Halton Catholic District School Board)
Abstract: Behavior analysts have widely adopted single-case experimental designs to demonstrate and replicate the effects of treatments on behavior. However, the withdrawal of treatment, which is central to most of these designs, may not be desirable, feasible, or even ethical in practical settings. To address this issue, we extracted 501 ABAB graphs from theses and dissertations to examine to what extent we would have reached correct or incorrect conclusions had we based our analysis on the initial AB component alone. In our first experiment, we examined the proportion of datasets for which the results of the first AB component matched the results of the subsequent phase reversals. In our second experiment, we calculated three effect size estimates for the same datasets to examine whether these measures could predict the relevance of conducting a replication. Our results indicated that the initial effects were successfully replicated at least once in approximately 85% of cases and that effect size may predict the probability of replication. Overall, our study suggests that practitioners may not need to conduct replications when the implementation of an empirically supported treatment produces (a) clear change with a large effect size or (b) no clear change with a small effect size.
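
To make the replication check concrete, here is a minimal Python sketch that compares the conclusion drawn from the initial AB component of a hypothetical ABAB dataset with the conclusion drawn after the reversal. The simple mean-difference criterion and its threshold are assumptions for illustration, not the decision rule used in the study; visual analysis in practice weighs level, trend, and variability together.

import statistics

def shows_effect(a_phase, b_phase, threshold=1.0):
    # Illustrative criterion: the phase means differ by more than an
    # arbitrary threshold (far cruder than real visual analysis).
    return abs(statistics.mean(b_phase) - statistics.mean(a_phase)) > threshold

def ab_conclusion_replicates(a1, b1, a2, b2):
    # The initial AB conclusion replicates if the second AB component
    # (after the withdrawal of treatment) leads to the same conclusion.
    return shows_effect(a1, b1) == shows_effect(a2, b2)

# Hypothetical ABAB dataset with a clear, replicated treatment effect.
a1, b1 = [2, 3, 2], [7, 8, 9]
a2, b2 = [3, 2, 3], [8, 9, 8]
print(ab_conclusion_replicates(a1, b1, a2, b2))  # True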

 
