Methodological Research in Applied Behavior Analysis
Saturday, May 23, 2020
5:00 PM–5:50 PM
Walter E. Washington Convention Center, Level 1, Salon G
Area: DDA/AUT; Domain: Applied Research
Chair: SungWoo Kahng (Rutgers University)
CE Instructor: SungWoo Kahng, Ph.D.
Abstract: One of the strengths of applied behavior analysis is its reliance on strong methodology. We develop systematic methods of measuring behavior and summarize the resulting data graphically. We then rely on visual analysis of these data to determine treatment efficacy and to guide our decision making. Finally, we have additional observers concurrently collect data so that we can calculate interobserver agreement and confirm the consistency of our data collection. This process yields objective, precise measurement and evaluation of our data, which allows us to have confidence in our assessment and treatment outcomes. This symposium will (a) review visual analysis practices in research, (b) examine a novel method of using simulation modeling analyses to determine the statistical significance of single-case data, and (c) evaluate how often it may be necessary to calculate interobserver agreement. These presentations will highlight state-of-the-art research on methodological issues related to practice and research in applied behavior analysis.
Instruction Level: Advanced
Keyword(s): interobserver agreement, methodology, visual analysis
Target Audience: Advanced behavior analysts
|
A Review of Visual Analysis Reporting Procedures in the Journal of Applied Behavior Analysis
KATIE WOLFE (University of South Carolina), Meka McCammon (University of South Carolina)
Abstract: Most studies in applied behavior analysis use single-case research (SCR) methodology to examine causal relations between variables. In SCR, visual analysis is the primary method by which data are evaluated to determine the presence or absence of causal relations. However, a growing body of research suggests that visual analysis may be unreliable under many circumstances (e.g., Wolfe, Seaman, & Drasgow, 2016). One reason for this lack of reliability may be the absence of clear procedures for conducting visual analysis (Barton, Meadan, & Fettig, 2019), which may contribute to inconsistent interpretation of data across analysts. The purpose of this study is to review recent SCR designs published in the Journal of Applied Behavior Analysis (2014–2018) to provide a descriptive analysis of (a) the prevalence of SCR, (b) the types of SCR designs used, (c) the visual analysis procedures reported by authors, and (d) the conclusions authors drew from their visual analyses. Preliminary results indicate that SCR designs make up the vast majority of articles published in JABA, with multiple baseline and multiple treatment designs being the most common. Full results, which may inform future research and reporting standards for visual analysis, will be discussed.
|
Comparison of Visual Analysis Outcomes and Simulation Modeling Analysis Outcomes in A-B Designs
SCOTT SPARROW (University of Kansas), Bertilde U Kamana (University of Kansas), Claudia L. Dozier (The University of Kansas), Derek D. Reed (University of Kansas), Nicole Kanaman (University of Kansas)
Abstract: We used behavioral skills training and on-the-job feedback (Parsons, Rollyson, & Reid, 2012) to increase staff use of four “healthy behavioral practices” (e.g., provide positive interactions, provide effective instruction) in 18 homes and programs serving adults with disabilities. Due to logistical constraints, we used an A-B design (baseline and intervention conditions) across the 18 homes and programs and the four practices to determine the effects of our intervention. Visual analysis outcomes suggested increases in correct staff behavior from baseline to the intervention phase across homes and programs, as well as across practices in many instances. As an additional evaluation of our effects, we conducted statistical analyses of these data using simulation modeling analysis (SMA; Borckardt et al., 2008), which allows clinical researchers to determine the statistical significance of single-subject data. We compared the outcomes of SMA to visual analysis of the A-B design data for data sets in which visual analysis suggested a clear outcome. This allowed us to determine the degree to which visual analysis and the outcome of the SMA matched (i.e., showed a true positive or true negative outcome). Overall, most results suggested true positive or true negative outcomes across the two analyses.
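The general logic of SMA can be illustrated with a short sketch: estimate the lag-1 autocorrelation of the observed data stream, simulate many null streams with the same length and autocorrelation, and ask how often a null stream correlates with the phase (A vs. B) vector at least as strongly as the real data do. This is a minimal illustration of the approach described by Borckardt et al. (2008); the function and variable names are ours, not the authors' software, and the abstract's actual analyses are not reproduced here.

```python
import numpy as np

def sma_p_value(data, phase, n_sims=5000, seed=0):
    """Approximate p-value for an A-B phase effect via AR(1) simulation.

    data  : session values, baseline sessions followed by intervention sessions
    phase : 0/1 indicator marking the B (intervention) sessions
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    phase = np.asarray(phase, dtype=float)

    # Observed effect: point-biserial correlation between phase and data
    r_obs = np.corrcoef(phase, data)[0, 1]

    # Estimate lag-1 autocorrelation of the observed stream
    centered = data - data.mean()
    ar1 = np.dot(centered[:-1], centered[1:]) / np.dot(centered, centered)

    # Simulate null AR(1) streams with matching length and autocorrelation,
    # counting how often the null correlation is as extreme as the observed one
    n = len(data)
    count = 0
    for _ in range(n_sims):
        noise = rng.standard_normal(n)
        sim = np.empty(n)
        sim[0] = noise[0]
        for t in range(1, n):
            sim[t] = ar1 * sim[t - 1] + noise[t]
        if abs(np.corrcoef(phase, sim)[0, 1]) >= abs(r_obs):
            count += 1
    return count / n_sims
```

With a clear level shift between phases (e.g., baseline values near 1–2 and intervention values near 8–9), the returned p-value is small, mirroring the "true positive" agreement between visual analysis and SMA reported above.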
|
Interobserver Agreement: How Much is Enough?
Nicole Lynn Hausman (Kennedy Krieger Institute), Noor Javed (Kennedy Krieger Institute), MOLLY K BEDNAR (Kennedy Krieger Institute), Madeleine Guell (Johns Hopkins University), Erin Schaller (Little Leaves Behavioral Services), Rose Nevill (University of Virginia), SungWoo Kahng (Rutgers University)
Abstract: The collection of data that are reliable and valid is critical to applied behavior analysis (e.g., Kazdin, 1977; Kennedy, 2005). Although there are guidelines for selecting the most appropriate measure of interobserver agreement (IOA), there is little empirical support to guide how much IOA is needed overall. Current guidelines suggest that IOA be calculated for 20%–33% of sessions (e.g., Kennedy, 2005; Poling et al., 1995); however, practical limitations may influence the actual percentage of sessions for which a second observer is available. The purpose of the current study was to provide preliminary guidelines for determining the optimal amount of IOA to report by simulating various percentages of overall IOA. Data from multielement functional analyses of inpatients (N = 100) were used, and the total number of sessions with IOA for each participant was manipulated such that 30%, 25%, 15%, and 10% IOA could be calculated and compared using statistical analyses. Results suggested no significant differences in IOA at the total IOA cutoffs simulated; however, IOA scores were sensitive to response rate and varied depending on the type of IOA evaluated.
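For readers less familiar with the metric being simulated, interval-by-interval IOA, one common agreement index, is simply the percentage of observation intervals in which two observers' records match. This sketch shows that calculation only; the abstract compared several IOA types and response rates, which are not reproduced here.

```python
def interval_ioa(observer1, observer2):
    """Percentage of intervals in which two observers' records agree.

    Each argument is a list of 0/1 interval scores (1 = behavior occurred).
    """
    if len(observer1) != len(observer2):
        raise ValueError("Observers must score the same number of intervals")
    agreements = sum(a == b for a, b in zip(observer1, observer2))
    return 100 * agreements / len(observer1)

# Example: two observers agree on 8 of 10 intervals -> 80.0% IOA
obs1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
obs2 = [1, 0, 1, 0, 0, 0, 1, 1, 1, 1]
print(interval_ioa(obs1, obs2))  # 80.0
```

The study's question is how the distribution of such scores changes when IOA is collected for only 10%–30% of sessions rather than for every session.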
|
|