Methodological Considerations in Applied Behavior Analysis Practice and Research
Saturday, May 29, 2021
5:00 PM–6:50 PM
Online
Area: TBA/AUT; Domain: Applied Research
Chair: SungWoo Kahng (Rutgers University)
Discussant: Marc J. Lanovaz (Université de Montréal)
CE Instructor: SungWoo Kahng, Ph.D.
Abstract: One of the seven dimensions of applied behavior analysis (Baer, Wolf, & Risley, 1968) is “analytic,” which requires a believable demonstration that the independent variable is responsible for the change in the dependent variable. To meet this goal, behavior analysts take great care to demonstrate functional control using single-case experimental designs. Data are analyzed using visual inspection, and reliability is measured to determine the consistency of the collected data. This symposium will highlight recent research on visual inspection, threats to single-case designs, and interobserver agreement. The first paper examines the use of A-B designs in practice. The second paper reviews the use of visual analyses to measure outcomes during functional communication training. The third paper discusses threats to the internal validity of multiple baseline designs. The final paper examines how much interobserver agreement is sufficient to provide confidence in the consistency of the data.
Instruction Level: Advanced
Keyword(s): FCT, interobserver agreement, multiple baseline, visual inspection
Target Audience: Behavior analysts with experience in visual analyses of behavioral data, single-case experimental design, and interobserver agreement.
Learning Objectives: At the conclusion of the presentation, participants will be able to:
1. Evaluate the utility of A-B design;
2. Determine the state of visual analyses of FCT data;
3. Identify threats to internal validity of multiple baseline; and
4. Determine how much interobserver agreement should be collected in practice.
Comparison of Visual Analysis Outcomes and Simulation Modeling Analysis Outcomes in A-B Designs
NICOLE KANAMAN (The University of Kansas), Bertilde U Kamana (The May Institute), Claudia L. Dozier (The University of Kansas), Derek D. Reed (University of Kansas)
Abstract: We used behavioral skills training and on-the-job feedback (Parsons, Rollyson, & Reid, 2012) to increase staff use of four “healthy behavioral practices” (e.g., provide positive interactions, provide effective instruction) in 18 homes and programs serving adults with disabilities. Due to various logistical constraints, we used an A-B design (baseline and intervention conditions) across the 18 homes and programs and the four practices to determine the effects of our intervention. Visual analysis outcomes suggested increases in correct staff behavior from baseline to the intervention phase across homes and programs, as well as across practices in many instances. As an additional evaluation of our effects, we conducted statistical analyses of these data using simulation modeling analysis (SMA; Borckardt et al., 2008), which allows clinical researchers to determine the statistical significance of single-subject data. We compared the outcomes of SMA to visual analysis of the A-B design data for data sets in which visual analysis suggested a clear outcome. This allowed us to determine the degree to which visual analysis and the outcome of the SMA matched (i.e., showed a true positive or true negative outcome). Overall, most results suggested true positive or true negative outcomes across the two analyses.
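The core idea behind SMA — testing an A-B phase effect against simulated null data streams that share the observed series' autocorrelation — can be sketched roughly as follows. This is an illustrative bootstrap in the spirit of Borckardt et al. (2008), not the authors' implementation; the function name and defaults are hypothetical.

```python
import numpy as np

def sma_phase_test(baseline, intervention, n_sims=5000, seed=0):
    """Rough SMA-style bootstrap test of an A-B phase effect.

    Returns the observed phase/data correlation and an empirical
    two-tailed p-value from AR(1) null simulations. Sketch only.
    """
    rng = np.random.default_rng(seed)
    baseline = np.asarray(baseline, dtype=float)
    intervention = np.asarray(intervention, dtype=float)
    data = np.concatenate([baseline, intervention])
    phase = np.concatenate([np.zeros(len(baseline)), np.ones(len(intervention))])

    # Observed effect: correlation between the phase vector and the data.
    r_obs = np.corrcoef(phase, data)[0, 1]

    # Estimate lag-1 autocorrelation from the phase-centered residuals.
    resid = data - np.where(phase == 0, baseline.mean(), intervention.mean())
    if resid.std() == 0:
        ar1 = 0.0
    else:
        ar1 = float(np.corrcoef(resid[:-1], resid[1:])[0, 1])

    # Simulate null AR(1) streams of the same length and autocorrelation,
    # counting how often they yield a phase correlation as extreme.
    count = 0
    for _ in range(n_sims):
        noise = rng.standard_normal(len(data))
        sim = np.empty(len(data))
        sim[0] = noise[0]
        for t in range(1, len(data)):
            sim[t] = ar1 * sim[t - 1] + noise[t]
        if abs(np.corrcoef(phase, sim)[0, 1]) >= abs(r_obs):
            count += 1
    return r_obs, count / n_sims
```

With a large baseline-to-intervention shift relative to within-phase variability, the observed correlation is high and few simulated null streams match it, giving a small empirical p-value.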
A Review of Visual Analysis Reporting Procedures in the Functional Communication Training Literature
AARON CHECK (University of South Carolina), Katie Wolfe (University of South Carolina), Meka McCammon (University of South Carolina)
Abstract: Most studies in applied behavior analysis use single-case research (SCR) methodology to examine causal relations between variables. In SCR, visual analysis is the primary method by which data are evaluated to determine the presence or absence of causal relations. However, a growing body of research suggests that visual analysis may be unreliable under many circumstances (e.g., Wolfe, Seaman, & Drasgow, 2016). One reason for this lack of reliability may be the absence of clear procedures for conducting visual analysis (Barton, Meadan, & Fettig, 2019), which may contribute to inconsistent interpretation of data across analysts. The purpose of this study is to review the literature on Functional Communication Training (FCT) to provide a descriptive analysis of 1) the types of SCR designs used, 2) the rigor of this literature base relative to the What Works Clearinghouse (WWC) Design Standards, 3) visual analysis procedures reported by authors, 4) statistical analysis procedures reported by authors, and 5) conclusions drawn by authors as a result of their visual analysis. Preliminary results indicate that 68 studies on basic FCT have been published since 1985, and that the majority of these use a multiple baseline or ABAB design. Approximately 60% of studies met WWC Design Standards with or without reservations. Full results, which may inform future research and reporting standards for visual analysis, will be discussed.
An Analysis of Threats to Internal Validity in Multiple-Baseline Design Variations
TIMOTHY A. SLOCUM (Utah State University), Sarah E. Pinkelman (Utah State University), P. Raymond Joslyn (Utah State University), Beverly Nichols (Utah State University)
Abstract: Multiple baseline designs are the predominant experimental design in applied behavior analytic research and are increasingly employed in other disciplines. The consensus in current textbooks and recent methodological papers is that nonconcurrent designs are less rigorous than concurrent designs because of their limited ability to address the threat of coincidental events. In this paper, we argue that this consensus is incorrect. First, we describe the features of both types of multiple baseline designs. Second, we analyze how the features of each design contribute to, or detract from, strong internal validity. Finally, we conclude that concurrent and nonconcurrent multiple baseline designs are essentially equivalent in rigor with respect to internal validity. We believe that this discussion may foster a better understanding of both design variations and shift the conversation from global statements of overall rigor to specific statements about which threats each design controls more or less strongly, and the situations in which each offers more or less control.
Interobserver Agreement: A Preliminary Investigation into How Much is Enough?
NICOLE HAUSMAN (Full Spectrum ABA), Noor Javed (Kennedy Krieger Institute), Molly K Bednar (Kennedy Krieger Institute), Madeleine Guell (The Johns Hopkins University), Erin Schaller (Little Leaves Behavioral Services), Rose Nevill (University of Virginia), SungWoo Kahng (Rutgers University)
Abstract: The collection of data that are reliable and valid is critical to applied behavior analysis (e.g., Kazdin, 1977; Kennedy, 2005). Although there are guidelines for selecting the most appropriate measure of interobserver agreement (IOA), there is little empirical support to guide how much IOA is needed overall. Current guidelines suggest that IOA be calculated for 20%–33% of sessions (e.g., Kennedy, 2005; Poling et al., 1995); however, practical limitations may influence the actual percentage of sessions for which a second observer is available. The purpose of the current study was to provide preliminary guidelines for determining the optimal amount of IOA to report by simulating various percentages of overall IOA. Data from multielement functional analyses (FAs) of inpatients (N = 100) were used, and the total number of sessions with IOA for each participant was subsequently manipulated such that 30%, 25%, 15%, and 10% IOA could be calculated and compared using statistical analyses. Results suggested no significant differences in IOA at the total IOA cutoffs simulated; however, the IOA scores were sensitive to response rate and varied depending on the type of IOA evaluated.
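The manipulation described above — scoring agreement for only a fraction of sessions — can be illustrated with total-count IOA (smaller count divided by larger count, times 100), one standard agreement metric. This is a minimal sketch, not the study's code; the function names and the use of total-count IOA (rather than interval- or exact-agreement methods) are assumptions for illustration.

```python
import random

def total_count_ioa(obs1, obs2):
    """Total-count IOA for one session: smaller count / larger count x 100."""
    if obs1 == obs2:
        return 100.0  # includes the case where both observers recorded 0
    return min(obs1, obs2) / max(obs1, obs2) * 100

def sampled_ioa(sessions, proportion, seed=0):
    """Mean IOA when a second observer covers only a random `proportion`
    of sessions -- mirroring simulated cutoffs such as 10%, 15%, 25%, 30%.

    `sessions` is a list of (observer1_count, observer2_count) pairs.
    """
    rng = random.Random(seed)
    k = max(1, round(len(sessions) * proportion))  # at least one session
    sample = rng.sample(sessions, k)
    scores = [total_count_ioa(a, b) for a, b in sample]
    return sum(scores) / len(scores)
```

Comparing `sampled_ioa(sessions, 0.30)` against `sampled_ioa(sessions, 0.10)` across many participants is the kind of contrast the study's statistical analyses draw, though the actual IOA formulas and sampling procedure may differ.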