|Issues in the Visual Analysis of Single-Case Research Data|
|Monday, May 29, 2017|
|8:00 AM–8:50 AM|
|Hyatt Regency, Centennial Ballroom F/G|
|Area: PCH; Domain: Translational|
|Chair: Katie Wolfe (University of South Carolina)|
|CE Instructor: Katie Wolfe, Ph.D.|
Visual analysis is a cornerstone of single-case research, which is the primary methodology used in applied behavior analysis. The three data-based papers in this symposium will explore various issues related to the visual analysis of single-case data. The first paper will examine how authors have described visual analysis procedures and how visual analysis compares to selected non-overlap indices, using the literature on parent-implemented function-based interventions. The second paper will evaluate interrater agreement among experts, and between experts and the conservative dual-criterion method (CDC; Fisher, Kelley, & Lomas, 2003), on published multiple baseline designs. The third paper will describe the development of a systematic protocol for visual analysis, along with a group design study evaluating the effects of the protocol on interrater agreement in visual analysis.
|Instruction Level: Intermediate|
|Keyword(s): interrater reliability, single-case research, single-subject research, visual analysis|
|Evaluating Visual Analysis and Non-Overlap Indices Using the Literature on Parent-Implemented Interventions|
|ERIN E. BARTON (Vanderbilt University), Hedda Meadan (University of Illinois at Urbana-Champaign), Angel Fettig (University of Massachusetts Boston)|
Single-case research (SCR) has a long history of being used to evaluate behavioral interventions and to identify evidence-based practices. Visual analysis is the gold standard for the evaluation of single-case data. However, visual analysis might limit researchers' ability to quantitatively aggregate and compare the magnitude of findings across studies when evaluating evidence-based practices. Further, although multiple protocols for visual analysis exist, the procedures are not standardized, which might lead to differences in conclusions about functional relations. Several computational methods have been developed and are increasingly being applied to SCR to provide a quantitative summary of effects. Criticisms of these methods point to their inability to account for replication or magnitude, their likely disagreement with visual analysis, and their failure to correct for typical data patterns (e.g., trend) or serial dependency. The purpose of the current presentation is to summarize the literature and to evaluate the visual analysis procedures used across the literature on parent-implemented functional assessment (FA)-based interventions. Results indicated that visual analysis terms were used inconsistently across studies. Further, visual analysis procedures were described inconsistently and with few details. The non-overlap indices were unlikely to agree with the authors' independent visual analysis of the results.
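For readers unfamiliar with non-overlap indices, the sketch below illustrates one common index, the percentage of non-overlapping data (PND). This is a generic illustration in Python, not the computation used in the study above; the function name and parameters are hypothetical.

```python
def pnd(baseline, treatment, increase_expected=True):
    """Percentage of non-overlapping data (PND): the share of
    treatment-phase points that fall beyond the most extreme
    baseline value, in the direction of expected change.
    Illustrative sketch only, not the authors' implementation."""
    if increase_expected:
        ref = max(baseline)
        count = sum(1 for y in treatment if y > ref)
    else:
        ref = min(baseline)
        count = sum(1 for y in treatment if y < ref)
    return 100.0 * count / len(treatment)
```

A PND near 100 suggests little overlap between phases; a central criticism noted above is that such indices ignore trend, replication, and the magnitude of change.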
|An Evaluation of the Agreement Among Expert Visual Analysts and the Conservative Dual Criterion Method|
|KATIE WOLFE (University of South Carolina), Michael Seaman (University of South Carolina), Erik Drasgow (University of South Carolina), Phillip Sherlock (University of South Carolina)|
|Abstract: Visual analysis remains the predominant method of analysis in single-case research (SCR). However, research on the reliability of visual analysis has produced mixed results, with most studies finding poor agreement between visual analysts. This has led to the development of structured criteria for the analysis of SCR data, such as the conservative dual criterion method (CDC; Fisher, Kelley, & Lomas, 2003). In this study, we evaluated agreement (a) among 52 expert visual analysts and (b) between the visual analysts and the CDC method on 31 published multiple baseline designs, at the level of the individual tier (or baseline) and of the functional relation. All participants were editorial board members of SCR journals and self-reported that they had published at least five SCR articles. Results suggest that interrater agreement among experts was minimally adequate for both types of decisions (tier, mean kappa = .61; functional relation, mean kappa = .58), and when the CDC was treated as a rater, its mean agreement was similar (mean kappa = .61). On graphs for which there was expert consensus (>80% agreement), the CDC method agreed 97% of the time. Additional secondary findings will be discussed along with implications for training and future research on visual analysis.|
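As background on the structured criteria discussed above, the CDC method is commonly described as fitting a mean line and a least-squares trend line to the baseline, shifting both by 0.25 baseline standard deviations in the direction of expected change, and flagging a systematic change when the number of treatment points beyond both lines meets a binomial criterion. The sketch below follows that common description; it is an illustration, not the authors' implementation, and the function name and parameters are assumptions.

```python
import math
from statistics import mean, stdev

def cdc(baseline, treatment, increase_expected=True, alpha=0.05):
    """Sketch of the conservative dual-criterion (CDC) method
    (after Fisher, Kelley, & Lomas, 2003). Illustrative only."""
    shift = 0.25 * stdev(baseline)
    sign = 1 if increase_expected else -1

    # Criterion line 1: baseline mean, shifted conservatively.
    mean_line = mean(baseline) + sign * shift

    # Criterion line 2: baseline least-squares trend, extrapolated
    # into the treatment phase and shifted by the same amount.
    xs = list(range(len(baseline)))
    xm, ym = mean(xs), mean(baseline)
    sxx = sum((x - xm) ** 2 for x in xs)
    slope = sum((x - xm) * (y - ym) for x, y in zip(xs, baseline)) / sxx
    intercept = ym - slope * xm

    n = len(treatment)
    beyond = 0
    for i, y in enumerate(treatment):
        trend_line = slope * (len(baseline) + i) + intercept + sign * shift
        if increase_expected:
            beyond += y > mean_line and y > trend_line
        else:
            beyond += y < mean_line and y < trend_line

    # Binomial criterion: smallest k with P(X >= k) < alpha for
    # X ~ Binomial(n, 0.5), i.e., each point exceeds chance at p = .5.
    k_crit = next(k for k in range(n + 2)
                  if sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n < alpha)
    return beyond >= k_crit
```

Treating the CDC as an additional "rater" in this way is what allows its agreement with expert judgments to be summarized with the same kappa statistic.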
|Evaluating a Systematic Visual Analysis Protocol for the Analysis of Single-Case Research|
|KATIE WOLFE (University of South Carolina), Erin E. Barton (Vanderbilt University), Hedda Meadan (University of Illinois at Urbana-Champaign)|
Several studies have reported poor agreement among visual analysts. One way to improve reliability may be to standardize the process of visual analysis. To that end, we developed a systematic protocol that consists of a series of questions and calculates a score from 0 (no functional relation) to 5 (strong functional relation) based on the analyst's responses. The purpose of this study was to evaluate whether the protocol improves reliability compared with a rating scale. To date, 16 students and faculty who have taken a course on single-case research have participated (data collection is ongoing). We randomly assigned participants to the control group (n = 9) or the protocol group (n = 7). All participants rated 8 single-case graphs using the rating scale (pretest), and then rated the same graphs again using the rating scale or the protocol, according to group assignment (posttest). We calculated the intraclass correlation coefficient for each group at each time point. At pretest, agreement was much higher in the control group than in the protocol group. Both groups' reliability improved at posttest, but the change for the protocol group was much larger, indicating that the protocol may improve reliability. Full results will be discussed along with implications for training and future research.
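For readers unfamiliar with the intraclass correlation coefficient (ICC) used to quantify agreement above, the sketch below computes a one-way random-effects ICC, often labeled ICC(1,1). This is a generic illustration; the abstract does not specify which ICC model the study used, and the function name is an assumption.

```python
def icc_oneway(ratings):
    """One-way random-effects ICC, often labeled ICC(1,1).
    `ratings` is a list of rows, one per rated graph, each row
    holding that graph's scores from k raters. Illustrative
    sketch only; the study's exact ICC model is not specified."""
    n = len(ratings)       # number of graphs rated
    k = len(ratings[0])    # number of raters per graph
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]

    # Between-graph and within-graph sums of squares.
    ssb = k * sum((m - grand) ** 2 for m in row_means)
    ssw = sum((y - m) ** 2
              for row, m in zip(ratings, row_means) for y in row)
    msb = ssb / (n - 1)
    msw = ssw / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

An ICC near 1 indicates that raters order and score the graphs consistently; values near 0 (or negative) indicate that rater disagreement swamps the true differences between graphs.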