There's More Than One Tool in the Toolkit: Statistics for Behavior Analysts
Saturday, May 26, 2018
10:00 AM–10:50 AM
Manchester Grand Hyatt, Harbor Ballroom D-F
Area: TBA/PCH; Domain: Translational
Chair: Zachary H. Morford (Zuce Technologies, LLC)
CE Instructor: Zachary H. Morford, Ph.D.
Abstract: The field of behavior analysis has traditionally eschewed statistical tests in the analysis of single-case experimental design (SCED) data. In particular, behavior analysts have argued against parametric statistics (e.g., t-tests and ANOVAs) for multiple reasons. Rather than use statistical tests, behavior analysts have relied upon the inter-ocular trauma test, where the visual analysis of SCED results hits you right between the eyes. The field has, generally speaking, overlooked the fact that parametric tests are only a few hammers in a much larger toolkit of statistical procedures. It is possible and beneficial for behavior analysts to use both methods, visual analysis and statistical tests, in conjunction with one another to analyze their data. In this symposium, the presenters will review three alternative statistical approaches that can be used in basic and applied behavior-analytic research: randomization tests, general estimating equations (GEE), and multilevel modeling. Each has its own merits and uses within SCEDs, and each can augment other methods of analysis or replace more commonly used statistical tests.
Instruction Level: Basic
Keyword(s): Research methods, Single-case designs, Statistics, Visual analysis
Target Audience: Master's level and doctoral level BCBAs; Graduate Students in Behavior Analysis; Basic Researchers; Applied Researchers; Scientist-Practitioners
Learning Objectives: At the conclusion of the presentation, participants will be able to: (1) select statistical tests that supplement visual analysis; (2) design single-case experiments for the purposes of applying statistical tests to the data acquired; (3) increase the internal validity of single-case designs by randomly assigning treatments to observation occasions.

Randomize, Test, Re-Randomize, and Infer: A Statistical Test for Single-Case Designs
(Basic Research)
KENNETH W. JACOBS (University of Nevada, Reno), Linda J. Parrott Hayes (University of Nevada, Reno)
Abstract: The frequent and repeated measurement of behavior often precludes behavior analysts from making statistical inferences about data obtained from single-case experimental designs (SCEDs). Parametric tests assume a random sample, independent observations, and a normal distribution. SCEDs violate one or more of these assumptions and, even worse, are considered quasi-experimental because subjects are not randomly sampled from a defined population or randomly assigned to treatments. Recent advances in computing, however, have brought an old and readily applicable test of significance to the fore: randomization tests (Fisher, 1935; Pitman, 1937). Unlike conventional Null Hypothesis Significance Tests (NHSTs), randomization tests are non-parametric, distribution-free tests of statistical significance. They are particularly applicable to SCEDs, so long as treatments are randomly assigned to observations. The requirement that SCEDs include random assignment increases their methodological rigor by controlling for unknown variables and addresses the charge that SCEDs are quasi-experimental. While randomization tests cannot supplant the experimental control already built into SCEDs, they can certainly supplement the conclusions behavior analysts might make about treatment effects. The purpose of this presentation, then, is to elucidate the origins of randomization tests, explicate their applicability to SCEDs, and warn against the pitfalls of NHSTs when making inferences.
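
The randomize-test-re-randomize logic can be sketched in a few lines of code. Below is a minimal, hypothetical Python example, assuming a single-case design in which a treatment condition was randomly assigned to half of the observation occasions in advance; the session values, labels, and mean-difference test statistic are illustrative assumptions, not data or code from the talk.

```python
# A minimal sketch of a randomization test for a single-case design,
# assuming treatments were randomly assigned to observation occasions.
# All values and labels here are invented for illustration.
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical session data: responses per minute across 12 occasions,
# with 'B' (treatment) randomly assigned to 6 of them in advance.
scores = np.array([4, 5, 3, 9, 4, 10, 11, 5, 9, 4, 10, 8], dtype=float)
labels = np.array(['A', 'A', 'A', 'B', 'A', 'B', 'B', 'A', 'B', 'A', 'B', 'B'])

def mean_difference(scores, labels):
    """Test statistic: mean of treatment occasions minus mean of baseline."""
    return scores[labels == 'B'].mean() - scores[labels == 'A'].mean()

observed = mean_difference(scores, labels)

# Re-randomize: shuffle the treatment labels many times to build the
# reference distribution implied by the random assignment itself.
n_permutations = 10_000
null_stats = np.empty(n_permutations)
for i in range(n_permutations):
    null_stats[i] = mean_difference(scores, rng.permutation(labels))

# Two-sided p-value: proportion of re-randomizations at least as extreme
# as the observed difference (counting the observed arrangement itself).
p_value = (np.sum(np.abs(null_stats) >= abs(observed)) + 1) / (n_permutations + 1)
print(f"observed difference = {observed:.2f}, p = {p_value:.4f}")
```

Because the reference distribution is generated by re-running the actual assignment scheme, the test requires no random sampling from a population and no normality assumption, which is what makes it distribution-free.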
|
Using Multilevel Modeling to Quantify Individual Variability in Single-Subject Designs
(Basic Research)
WILLIAM DEHART (Virginia Tech Carilion Research Institute), Jonathan E. Friedel (National Institute for Occupational Safety and Health), Charles Casey Joel Frye (Utah State University)
Abstract: The field of behavior analysis has historically been conflicted over the use of inferential statistical methods in the analysis of data from single-subject designs. Valid concerns with the use of inferential statistics include the suppression of important behavioral variability at the individual level and the over-reliance on and misinterpretation of the p-value. This conflict has commonly resulted in two strategies: first, reliance on visual analysis and the outright rejection of any inferential statistics; or second, the application of more "basic" inferential tests that may or may not be appropriate for single-subject design data. Multilevel modeling (e.g., mixed-effects or hierarchical regression) is a more advanced statistical analysis that addresses many of the concerns the field has with inferential statistics, including quantifying the contribution of individual behavioral variability to the results and the compression of many data points into a single comparison. The benefits of multilevel modeling will be demonstrated using several single-subject design datasets. A guide to how researchers can implement multilevel modeling, including a priori recommendations before beginning data collection, will be offered.
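
As a rough sketch of how such a model might be specified, the hypothetical Python example below uses the statsmodels mixed-effects (MixedLM) interface on simulated multi-subject session data; the subject count, effect sizes, and phase structure are invented for illustration and are not the datasets described in the talk.

```python
# A minimal sketch of a multilevel (mixed-effects) model for pooled
# single-subject data, assuming statsmodels is available. All simulated
# quantities below are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=2)

# Simulate 6 subjects x 20 sessions: a shared treatment effect plus
# subject-specific baselines and subject-specific treatment effects.
rows = []
for subject in range(6):
    baseline = rng.normal(10, 2)     # each subject's own baseline level
    effect = rng.normal(5, 1.5)      # each subject's own treatment effect
    for session in range(20):
        phase = int(session >= 10)   # sessions 10+ are the treatment phase
        rate = baseline + effect * phase + rng.normal(0, 1)
        rows.append({"subject": subject, "phase": phase, "rate": rate})
data = pd.DataFrame(rows)

# Random intercept and random treatment slope per subject, so individual
# variability is quantified in the variance components rather than
# averaged away into a single group comparison.
model = smf.mixedlm("rate ~ phase", data, groups=data["subject"],
                    re_formula="~phase")
result = model.fit()
print(result.summary())   # fixed effect of phase + variance components
```

The variance components reported for the random intercept and slope are the model's estimate of how much subjects differ in baseline level and in treatment response, which is the individual variability the abstract argues should not be suppressed.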
|
Comparing General Estimating Equations to Standard Analytic Techniques for Delay Discounting Data
(Basic Research)
JONATHAN E. FRIEDEL (National Institute for Occupational Safety and Health), William DeHart (Virginia Tech Carilion Research Institute), Yusuke Hayashi (Penn State Hazleton), Anne Foreman (National Institute for Occupational Safety and Health), Oliver Wirth (National Institute for Occupational Safety and Health, Centers for Disease Control and Prevention), Amy Odum (Utah State University)
Abstract: Delay discounting continues to be a rapidly growing area both within behavior analysis and in other fields, in part because differences in the degree of discounting are routinely found across populations of interest. There are often acknowledged and unacknowledged challenges in analyzing delay discounting data because the data frequently violate the assumptions of the statistical tests or there are no appropriate equivalent non-parametric tests. General estimating equations (GEE) are regression techniques that can handle many of the difficulties associated with delay discounting data. In an iterative Monte Carlo procedure with simulated choice data sets, results obtained with GEEs were compared to results obtained with traditional analyses (e.g., t-tests, ANOVAs, Mann-Whitney U tests) to assess the similarities and differences between the techniques. The GEEs and traditional techniques produced similar patterns of results; however, GEEs obviate the need for conducting multiple tests, tolerate violations of normality, and account for within-subject correlations, making GEEs a viable and more flexible approach for analyzing choice data.
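
To make the approach concrete, here is a minimal, hypothetical Python sketch fitting a GEE to simulated indifference-point data with the statsmodels package; the hyperbolic generating model, two-group structure, and exchangeable working correlation are assumptions chosen for illustration, not the Monte Carlo procedure reported in the talk.

```python
# A minimal sketch of a GEE analysis for simulated delay-discounting
# data, assuming statsmodels. All simulated quantities are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=3)

delays = np.array([1, 7, 30, 90, 180, 365])   # delays in days
rows = []
for subject in range(40):
    group = subject % 2                        # two hypothetical populations
    k = rng.lognormal(mean=-3.0 + 0.8 * group, sigma=0.5)  # discount rate
    for delay in delays:
        # Mazur's hyperbolic model, V = 1 / (1 + k * D), plus noise, gives
        # each subject's indifference point (as a proportion) at each delay.
        value = 1.0 / (1.0 + k * delay) + rng.normal(0, 0.05)
        rows.append({"subject": subject, "group": group,
                     "delay": delay, "value": float(np.clip(value, 0, 1))})
data = pd.DataFrame(rows)

# An exchangeable working correlation treats each subject's repeated
# indifference points as correlated, so a single model replaces separate
# tests at every delay and does not require normally distributed data.
model = smf.gee("value ~ delay * group", groups="subject", data=data,
                cov_struct=sm.cov_struct.Exchangeable(),
                family=sm.families.Gaussian())
result = model.fit()
print(result.summary())   # delay, group, and delay-by-group estimates
```

The delay-by-group term in this sketch is the single comparison of discounting between populations that would otherwise require a series of per-delay tests.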