An Overview of Common Effect Size Measures Used in Single-Case Research Design: Log Response Ratios, Hedges' g, and Multiple Regression-Based Effect Sizes
Sunday, May 26, 2019
5:00 PM–5:50 PM
Swissôtel, Concourse Level, Zurich E-G
Area: EAB/AUT; Domain: Translational
Chair: Donald A. Hantula (Temple University)
CE Instructor: Art Dowdy, Ph.D.
Abstract: Single-case designs are one of the main tools used for evaluating applied behavioral interventions. Although single-case studies can be highly informative about the efficacy of an intervention for the individual participants, single studies provide a limited basis for generalization. The tools of research synthesis, meta-analysis, and effect sizes provide a stronger basis for establishing evidence-based practices and drawing broader, more defensible generalizations than is possible from single studies considered separately. For single-case studies that use systematic direct observation of behavior to measure behavioral outcomes, response ratios, Hedges' g, and multiple regression-based effect sizes are often used. We provide a general overview of each measure, its benefits and drawbacks for use with single-case studies, and intuitive ways to calculate each effect size.
Instruction Level: Intermediate
Keyword(s): Effect sizes, Evidence-Based, Meta-Analysis, SCRD
Target Audience: Researchers

Challenge and Convention: Effect Sizes in Multiple Regression |
(Theory) |
ELIZABETH KYONKA (University of New England) |
Abstract: Psychologists who operationalize constructs must report standardized effect size statistics because the observations themselves are abstractions. A score of 16 on an impulsivity questionnaire that is an aggregation of responses to several Likert-scale items does not indicate that the respondent “has” 16 points of impulsivity. In behavior analysis, dependent variables tend to be more concrete. Changes in the number of times a key was pecked, problem behavior occurred, or the correct mand was provided are meaningful without standardization. However, even in those cases when the dependent variable is a behavior, standardized effect sizes are valuable because they make comparisons across subjects and across studies possible. Behavior analysts who conduct single-subject research with continuous predictors must deal with all of the issues surrounding continuous predictors as well as those relating to single-subject designs. There are many options and few standards for reporting standardized effect sizes for continuous predictors. Possible intercorrelations between sequential observations, and violations of sphericity, must be addressed carefully in all single-subject research. In combination, these challenges make reporting unbiased and interpretable measures of effect size (standardized or not) difficult, but the end results are worth the effort.
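As a rough illustration of one of the "many options" for continuous predictors (not a method this talk endorses), the simplest standardized effect size for a single continuous predictor is the standardized regression slope, which for one predictor equals the Pearson correlation. The function name and the example data below are hypothetical, and the calculation assumes independent observations, which, as the abstract notes, is rarely safe for sequential single-subject data.

```python
from statistics import mean, stdev

def standardized_beta(x, y):
    """Standardized slope from a simple linear regression of y on x.

    With a single predictor, beta = cov(x, y) / (sd_x * sd_y), i.e. the
    Pearson correlation. This sketch ignores autocorrelation between
    sequential observations, which single-subject data typically show.
    """
    n = len(x)
    mx, my = mean(x), mean(y)
    # Sample covariance (n - 1 denominator, matching statistics.stdev)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical example: a continuous predictor (e.g., session number)
# and a response measure that rises linearly with it
x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]
beta = standardized_beta(x, y)
```

Because the outcome here is a perfect linear function of the predictor, the standardized slope comes out at 1; real single-subject data would fall well short of that, and a serious analysis would also model the serial dependence.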
|
Response Ratio Effect Sizes: Methods for Single-Case Designs With or Without Treatment-Phase Time Trends |
(Theory) |
JAMES ERIC PUSTEJOVSKY (University of Texas at Austin) |
Abstract: Single-case designs are one of the main tools used for evaluating applied behavioral interventions. Although single-case studies can be highly informative about the efficacy of an intervention for the individual participants, single studies provide a limited basis for generalization. The tools of research synthesis, meta-analysis, and effect sizes provide a stronger basis for establishing evidence-based practices and drawing broader, more defensible generalizations than is possible from single studies considered separately. For single-case studies that use systematic direct observation of behavior to measure behavioral outcomes, response ratios are a simple and intuitive way to quantify effect sizes in terms of proportionate change from baseline. This presentation will review recently developed methods and tools for estimating response ratio effect sizes. Methods will be described for the simple scenario where the level of the outcome is constant within each phase and for the more challenging scenario where treatment has gradual effects, which build up and dissipate over time. The presentation will highlight interactive web-based tools for calculating response ratios under both scenarios.
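A minimal sketch of the basic idea, for the simple scenario where the outcome level is constant within each phase: the log response ratio is the natural log of the ratio of phase means, and exponentiating it recovers the proportionate change from baseline. The function names and data below are hypothetical, and this omits the small-sample bias correction and the trend adjustments that the presentation's methods cover.

```python
import math
from statistics import mean, stdev

def log_response_ratio(baseline, treatment):
    """Basic log response ratio: ln(mean of treatment / mean of baseline).

    Positive values mean the outcome increased relative to baseline;
    negative values mean it decreased. Phase means must be positive.
    """
    m_a, m_b = mean(baseline), mean(treatment)
    if m_a <= 0 or m_b <= 0:
        raise ValueError("Phase means must be positive for a log ratio.")
    return math.log(m_b / m_a)

def lrr_variance(baseline, treatment):
    """Approximate sampling variance of the LRR via the delta method,
    assuming independent observations within each phase."""
    m_a, m_b = mean(baseline), mean(treatment)
    s_a, s_b = stdev(baseline), stdev(treatment)
    return s_a**2 / (len(baseline) * m_a**2) + s_b**2 / (len(treatment) * m_b**2)

# Hypothetical data: problem behaviors per minute across sessions
A = [12, 10, 11, 13, 12]   # baseline phase
B = [5, 4, 6, 5, 4]        # treatment phase
lrr = log_response_ratio(A, B)
# exp(LRR) - 1 expresses the effect as proportionate change from baseline
pct_change = math.exp(lrr) - 1
```

With these made-up numbers the LRR is negative, reflecting roughly a 59% reduction from the baseline rate; the proportionate-change interpretation is what makes the response ratio intuitive for behavioral outcomes.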
|
Determining Effect Sizes Using Hedges' g in Single-Case Research Design-Based Meta-Analyses
(Theory) |
ART DOWDY (Temple University) |
Abstract: In single-case research design (SCRD), experimental control is demonstrated when the researcher’s application of an intervention, known as the independent variable, reliably produces a change in behavior, known as the dependent variable, and the change is not otherwise explained by confounding or extraneous variables. Recently, researchers and policy organizations have identified evidence-based practices (EBPs) for children with autism spectrum disorder (ASD) based on systematic reviews and meta-analyses of SCRD studies (e.g., Odom, Collet-Klingenberg, Rogers, & Hatton, 2010). Effect sizes (ESs) determined from SCRD meta-analyses provide a sound basis for determining EBPs. A popular ES in SCRD meta-analyses is Hedges' g. This presentation will review Hedges' g, its benefits and limitations, and an intuitive way to calculate the ES once SCRD data have been extracted.
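For orientation, the classic form of Hedges' g is a standardized mean difference (Cohen's d) multiplied by a small-sample correction factor. The sketch below shows that classic two-phase version with hypothetical data; it is not the SCRD-specific estimator the presentation covers, which additionally handles autocorrelation and standardization across cases.

```python
import math
from statistics import mean, stdev

def hedges_g(baseline, treatment):
    """Classic Hedges' g for a two-group (here, two-phase) comparison.

    g = J * (mean_B - mean_A) / s_pooled, where J = 1 - 3 / (4*df - 1)
    corrects the small-sample upward bias of Cohen's d. This simplified
    within-case version treats observations as independent, which
    SCRD-specific versions of g do not assume.
    """
    n_a, n_b = len(baseline), len(treatment)
    m_a, m_b = mean(baseline), mean(treatment)
    s_a, s_b = stdev(baseline), stdev(treatment)
    df = n_a + n_b - 2
    s_pooled = math.sqrt(((n_a - 1) * s_a**2 + (n_b - 1) * s_b**2) / df)
    d = (m_b - m_a) / s_pooled
    j = 1 - 3 / (4 * df - 1)   # small-sample bias correction factor
    return j * d

# Hypothetical data: problem behaviors per minute across sessions
A = [12, 10, 11, 13, 12]   # baseline phase
B = [5, 4, 6, 5, 4]        # treatment phase
g = hedges_g(A, B)
```

The sign convention here makes g negative when the treatment phase shows lower levels than baseline; reversing the phase order simply flips the sign. Within-case g values from single-case data tend to be much larger in magnitude than between-group values, which is one reason SCRD meta-analyses use specially derived versions of the statistic.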
|
|