Innovations in the Use of Single-Case Methodology: Artificial Intelligence, Aids to Clinical Decision-Making, and Hybrid Designs
Monday, May 25, 2020
8:00 AM–9:50 AM
Virtual
Area: PCH; Domain: Translational
Chair: Marc J. Lanovaz (Université de Montréal)
CE Instructor: Marc J. Lanovaz, Ph.D.
Abstract: Single-case designs have been central to the development of a science of behavior analysis. However, researchers in other health and social sciences have not adopted them as widely as behavior analysts have. Potential explanations for this limited adoption include the complexity of analyzing single-case data objectively as well as the limited consideration of group data. The purpose of our symposium is to present recent research that addresses these limitations. The first presentation will describe a script designed to automatically analyze functional analysis data based on previously published rules. The second presentation will examine whether artificial intelligence can accurately make decisions using AB graphs. The third presentation will discuss the validity of using nonoverlap effect size measures to aid clinical decision-making. The final presentation will introduce hybrid designs, which combine single-case and group methodologies. As a whole, the presentations will provide an overview of innovations in the use of single-case methodology for both practitioners and researchers.
Target Audience: BCBAs, BCBA-Ds
 
Artificial Intelligence to Analyze Single-Case Data
MARC J. LANOVAZ (Université de Montréal), Antonia R. Giannakakos (Manhattanville College), Océane Destras (Polytechnique Montréal)
Abstract: Visual analysis is the most commonly used method for interpreting data from single-case designs, but levels of interrater agreement remain a concern. Although structured aids to visual analysis such as the dual-criteria (DC) method may increase interrater agreement, the accuracy of the analyses may still be improved. Thus, the purpose of our study was to (a) examine correspondence between visual analysis and models derived from different machine learning algorithms, and (b) compare the accuracy, Type I error rate, and power of each of our models with those produced by the DC method. We trained our models on a previously published dataset and then conducted analyses on both nonsimulated and simulated graphs. All of our machine learning models matched the interpretations of the visual analysts more frequently than the DC method did. Furthermore, the machine learning algorithms outperformed the DC method on accuracy, Type I error rate, and power. Our results support the somewhat unorthodox proposition that behavior analysts may use machine learning algorithms to supplement their visual analysis of single-case data, but more research is needed to examine the potential benefits and drawbacks of such an approach.
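To make the comparison concrete, the sketch below trains a classifier on simulated AB graphs and contrasts its decisions with a simplified dual-criteria rule. The simulation parameters, feature set, random-forest model, and binomial approximation of the DC critical count are assumptions made for this illustration; they are not the algorithms, dataset, or DC implementation used in the study.

```python
# A minimal sketch, assuming simulated AB graphs, a random-forest classifier,
# and a simplified dual-criteria (DC) rule; it is not the models, dataset, or
# exact DC implementation from the study.
import numpy as np
from scipy.stats import binom
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def simulate_ab(effect, n_a=5, n_b=10, noise=1.0):
    """Simulate one AB graph: a baseline (A) phase and a treatment (B) phase
    whose level is shifted by `effect`."""
    return rng.normal(0.0, noise, n_a), rng.normal(effect, noise, n_b)

def dc_decision(a, b, alpha=0.05):
    """Simplified dual-criteria rule: count treatment points falling above both
    the baseline mean line and the baseline trend line, then compare the count
    to a binomial quantile (an approximation of the published critical values)."""
    slope, intercept = np.polyfit(np.arange(len(a)), a, 1)
    trend_line = intercept + slope * np.arange(len(a), len(a) + len(b))
    above_both = np.sum((b > trend_line) & (b > a.mean()))
    crit = binom.ppf(1 - alpha, len(b), 0.5)
    return int(above_both >= crit)

def features(a, b):
    """Simple summary features for the classifier (an illustrative choice)."""
    return [b.mean() - a.mean(), a.std(), b.std(), len(a), len(b)]

# Build a labeled training set: 1 = true level change, 0 = no change.
X, y = [], []
for _ in range(2000):
    label = int(rng.integers(0, 2))
    a, b = simulate_ab(effect=2.0 if label else 0.0)
    X.append(features(a, b))
    y.append(label)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Compare the two approaches on newly simulated graphs.
n_test, correct_ml, correct_dc = 1000, 0, 0
for _ in range(n_test):
    label = int(rng.integers(0, 2))
    a, b = simulate_ab(effect=2.0 if label else 0.0)
    correct_ml += int(clf.predict([features(a, b)])[0] == label)
    correct_dc += int(dc_decision(a, b) == label)
print(f"ML accuracy: {correct_ml / n_test:.2f}, DC accuracy: {correct_dc / n_test:.2f}")
```

Under these assumptions, the script reports the proportion of simulated graphs that each approach classifies correctly, paralleling the accuracy comparison described above.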
 
Using AB Designs With Nonoverlap Effect Size Measures to Support Clinical Decision Making: A Monte Carlo Validation
ANTONIA R. GIANNAKAKOS (Manhattanville College), Marc J. Lanovaz (Université de Montréal)
Abstract: Single-case experimental designs often require extended baselines or the withdrawal of treatment, which may not be feasible or ethical in some practical settings. The quasi-experimental AB design is a potential alternative, but more research is needed on its validity. The purpose of our study was to examine the validity of using nonoverlap measures of effect size to detect changes in AB designs using simulated data. In our analyses, we determined thresholds for three effect size measures beyond which the Type I error rate would remain below .05, and then examined whether using these thresholds would provide sufficient power. Overall, our analyses show that some effect size measures may provide adequate control over the Type I error rate and sufficient power when analyzing data from AB designs. In sum, our results suggest that practitioners may use quasi-experimental AB designs in combination with effect size measures to rigorously assess progress in practice.
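As one concrete illustration of the approach, the sketch below computes a common nonoverlap measure, the nonoverlap of all pairs (NAP), and uses Monte Carlo simulation to estimate the Type I error rate and power associated with a decision threshold. The phase lengths, data-generating model, and .92 cutoff are assumptions for this example rather than the thresholds or effect size measures examined in the study.

```python
# A minimal sketch, assuming the nonoverlap-of-all-pairs (NAP) measure, normally
# distributed data, and a .92 cutoff; these are not the specific measures or
# thresholds validated in the study.
import numpy as np

rng = np.random.default_rng(1)

def nap(a, b):
    """Nonoverlap of All Pairs: the proportion of (baseline, treatment) pairs in
    which the treatment point exceeds the baseline point (ties count as half)."""
    wins = sum(1.0 if bj > ai else 0.5 if bj == ai else 0.0
               for ai in a for bj in b)
    return wins / (len(a) * len(b))

def simulate_ab(effect, n_a=5, n_b=10):
    """One AB series: baseline drawn around 0, treatment shifted by `effect`."""
    return rng.normal(0.0, 1.0, n_a), rng.normal(effect, 1.0, n_b)

def rejection_rate(effect, threshold, n_sims=5000):
    """Proportion of simulated AB graphs whose NAP exceeds the threshold."""
    hits = sum(nap(*simulate_ab(effect)) > threshold for _ in range(n_sims))
    return hits / n_sims

threshold = 0.92  # assumed cutoff for illustration only
print("Type I error rate (no true effect):", rejection_rate(0.0, threshold))
print("Power (large true effect):         ", rejection_rate(2.0, threshold))
```

Lowering the assumed threshold would increase power at the cost of a higher Type I error rate, which is the trade-off that threshold selection must balance.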
 
Automating Functional Analysis Interpretation
JONATHAN E. FRIEDEL (National Institute for Occupational Safety and Health), Alison Cox (Brock University)
Abstract: Functional analysis (FA) has been an important tool in behavior analysis. The goal of an FA is to determine the function of problem behavior (e.g., access to attention) so that treatments can be designed to target causal mechanisms (e.g., teaching a socially appropriate response for attention). Behavior analysts traditionally rely on visual inspection to interpret an FA. However, the existing literature suggests that interpretations can vary across clinicians (Danov & Symons, 2008). To increase objectivity and improve interrater agreement across FA outcomes, Hagopian et al. (1997) created visual-inspection criteria for FAs. Hagopian and colleagues reported improved agreement, but limitations of the criteria were noted. Therefore, Roane, Fisher, Kelley, Mevers, and Bouxsein (2013) addressed these limitations by creating a modified version. Here, we describe a computer script designed to automatically interpret FAs based on the above-mentioned criteria. A computerized script may be beneficial because it requires objective criteria (e.g., 10% higher vs. ‘substantially’ higher) to make decisions and is fully replicable (i.e., it does not rely on interobserver agreement). We outline several areas where the published criteria required refinement for the script. We also identify some conditions in which the script’s interpretations disagree with those of expert clinicians.
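The sketch below illustrates the general logic of criterion-based FA interpretation: derive a criterion line from the control (play) condition and count how many points in each test condition exceed it. The one-standard-deviation band, the simple-majority rule, and the example data are assumptions for illustration; the Hagopian et al. (1997) and Roane et al. (2013) criteria include additional rules that such a script must implement and that are not reproduced here.

```python
# A minimal sketch of criterion-based FA interpretation, assuming an upper
# criterion line one standard deviation above the control-condition mean and a
# simple-majority rule; the published criteria include additional rules (e.g.,
# lower criterion lines, automatic reinforcement) not reproduced here.
import statistics

def interpret_fa(control, test_conditions, prop_required=0.5):
    """Flag a test condition as a possible function when the proportion of its
    data points above the upper criterion line meets prop_required."""
    upper = statistics.mean(control) + statistics.stdev(control)
    return {name: sum(x > upper for x in data) / len(data) >= prop_required
            for name, data in test_conditions.items()}

# Hypothetical FA data (responses per minute per session), for illustration only.
control = [0.2, 0.5, 0.1, 0.4, 0.3]
test_conditions = {
    "attention": [1.8, 2.2, 1.5, 2.0, 1.9],
    "escape":    [0.3, 0.2, 0.6, 0.4, 0.1],
    "tangible":  [0.5, 0.4, 0.3, 0.6, 0.2],
}
print(interpret_fa(control, test_conditions))
# {'attention': True, 'escape': False, 'tangible': False}
```

With the hypothetical data above, only the attention condition exceeds the criterion on a majority of sessions, which is the kind of objective, replicable decision the script is intended to produce.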