Association for Behavior Analysis International


49th Annual Convention; Denver, CO; 2023

Event Details



Symposium #381
CE Offered: BACB
Recent Advances in Measurement and Data Analysis in Adapted Alternating Treatments Design and Latency-Based Functional Analysis
Monday, May 29, 2023
3:00 PM–3:50 PM
Hyatt Regency, Centennial Ballroom H
Area: PCH; Domain: Applied Research
Chair: Maya Fallon (University of Nebraska Medical Center, Munroe-Meyer Institute)
CE Instructor: Maya Fallon, Ph.D.
Abstract:

Measuring and analyzing assessment and treatment outcomes are essential components of applied behavior analysis. Therefore, research must continue to explore current and alternative measurement and data-analysis methods. This symposium highlights recent advances in interpreting results from an adapted alternating treatments design (AATD) and in measurement within a latency-based functional analysis. Two presentations focus on identifying the conditions under which conclusions can be drawn from an AATD. The first summarizes research published in the Journal of Applied Behavior Analysis to better understand how researchers interpret data obtained from AATDs. The second presents new evidence on the natural variability in acquisition when teaching procedures are identical and discusses results that may influence how conclusions are drawn about AATD outcomes. The third presentation examines the use of percent of goal obtained as a measure of response strength, as well as existing trends in latency, during a latency-based functional analysis.

Instruction Level: Intermediate
Keyword(s): AATD, experimental design, functional analysis, latency
Target Audience:

The target audience for this session includes behavior analysis students and practicing behavior analysts who are familiar with the adapted alternating treatments design and latency-based functional analysis.

Learning Objectives: After the presentations, participants will be able to: (1) discuss similarities and differences in how researchers draw conclusions from AATD data published in the Journal of Applied Behavior Analysis; (2) identify the key components of an adapted alternating treatments design and recognize how the choice of mastery criterion affects conclusions about the efficiency of an intervention; and (3) describe effect-size calculation for single-subject research designs using percent of goal obtained and explain its relevance to interpreting latency-based functional analysis data.
 

A Critical Evaluation of Adapted Alternating Treatments Designs

PAIGE O'NEILL (University of Nebraska Medical Center, Munroe-Meyer Institute), Emily Ferris (University of Nebraska Medical Center, Munroe-Meyer Institute), Nicole Pantano (Assumption University), Madison Judkins (University of Nebraska Medical Center, Munroe-Meyer Institute), Nicole M. Rodriguez (University of Nebraska Medical Center, Munroe-Meyer Institute), Catalina Rey (University of Nebraska Medical Center, Munroe-Meyer Institute)
Abstract:

Adapted alternating treatments designs (AATDs) are commonly used to compare the efficiency of instructional procedures. Each teaching method is applied to a different set of instructional items, and a difference between methods is demonstrated when one set is acquired more rapidly than another and the effect is consistent across sets or participants. Although AATDs are widespread, there is currently no minimum standard for either the size of the difference that should be observed or the number of replications that should be achieved before concluding that one procedure is superior to the other(s). In this study, we reviewed articles published in the Journal of Applied Behavior Analysis that used AATDs and coded them for research design variation, mastery criteria, within- and across-subject replication (whether replication was sought and subsequently achieved), and author conclusions about the obtained results. The results highlight variation in the conclusions researchers draw from data obtained from AATDs.
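
As an illustration of what a coding scheme of this kind might look like in practice, the Python sketch below defines one hypothetical record structure; the field names and example values are assumptions for illustration, not the authors' actual coding instrument.

    # Hypothetical structure for coding one reviewed AATD article.
    # Fields mirror the dimensions described in the abstract; values are invented.
    from dataclasses import dataclass

    @dataclass
    class AATDArticleCode:
        design_variation: str            # e.g., "AATD embedded in multiple probe"
        mastery_criterion: str           # e.g., "100% correct across 3 sessions"
        replication_sought: bool         # did the authors attempt replication?
        replication_achieved: bool       # was the replication successful?
        concluded_difference: bool       # did the authors conclude one procedure was superior?

    record = AATDArticleCode("AATD embedded in multiple probe",
                             "100% correct across 3 sessions",
                             True, False, True)
    print(record)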

 
Toward a Better Understanding of Meaningful Differences in the Adapted Alternating Treatments Design
EMILY FERRIS (University of Nebraska Medical Center, Munroe-Meyer Institute), Paige O'Neill (University of Nebraska Medical Center, Munroe-Meyer Institute), Nicole Pantano (Assumption University), Madison Judkins (University of Nebraska Medical Center, Munroe-Meyer Institute), Nicole M. Rodriguez (University of Nebraska Medical Center, Munroe-Meyer Institute), Catalina Rey (University of Nebraska Medical Center, Munroe-Meyer Institute)
Abstract: The adapted alternating treatments design (AATD) is a frequently implemented small-N research design used to evaluate the efficiency and efficacy of teaching procedures. An AATD rapidly alternates two or more interventions, with a unique set of instructional targets assigned to each intervention. Despite a growing body of research using the AATD, one basic tenet of the design remains untested: What is the natural variability in the rate of acquisition of matched instructional sets when there is no difference in teaching procedures? In the current study, eight children diagnosed with autism were taught to read sight words or to receptively or expressively identify pictures of common items, using identical teaching procedures across all targets. Targets were equated for difficulty using logical analysis procedures, and the AATD was embedded in a multiple-probe-across-sets design. Preliminary results show notable variability in the number of sessions to mastery despite identical teaching procedures. These findings suggest that a considerable, consistent difference in sessions to mastery should be required before concluding that one treatment is more efficient than another, and they underscore the importance of within-subject replication. These findings could help inform what is considered a meaningful difference between conditions in future research using the AATD.
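
To make the study's logic concrete, the short Python sketch below shows how natural variability in sessions to mastery across identically taught sets can serve as a benchmark for what counts as a meaningful between-treatment difference. The numbers are entirely hypothetical; this is an illustration of the reasoning, not the authors' analysis.

    # Hypothetical sessions-to-mastery for three target sets taught with
    # IDENTICAL procedures; any spread here reflects natural variability,
    # not a treatment effect. Numbers are illustrative only.
    import statistics

    sessions_to_mastery = {"set_A": 9, "set_B": 14, "set_C": 11}

    values = list(sessions_to_mastery.values())
    natural_spread = max(values) - min(values)
    print(f"mean = {statistics.mean(values):.1f} sessions, "
          f"natural spread = {natural_spread} sessions")

    # By this logic, a between-treatment difference smaller than the natural
    # spread (here, 5 sessions) is weak evidence that one teaching procedure
    # is more efficient than another; consistent within-subject replication
    # of a larger difference is stronger evidence.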
 
Preliminary Examination for Using Percent of Goal Obtained as Indication of Response Strength Across Functional Analysis Sessions
JUSTIN BOYAN HAN (University of South Florida), Sarah E. Bloom (University of South Florida), John Ferron (University of South Florida)
Abstract: Current methodology for calculating effect sizes in single-subject research designs favors nonparametric measures such as Tau-U, nonoverlap of all pairs (NAP), and percentage of all nonoverlapping data (PAND). These methods are used primarily in literature reviews, where estimates of the effectiveness of an independent variable are accumulated and discussed. However, these nonparametric measures cannot account for the magnitude of change, which can be crucial when comparing treatment effectiveness. To address this issue, Ferron et al. (2020) proposed percent of goal obtained (PoGO) as a method of indexing effect size that accounts for the level of the target response. The current project examines the feasibility of using PoGO as a measure of response strength across sessions in a latency-based functional analysis. Additionally, we examined existing trends in latency across exposures to test conditions within latency-based functional analyses. Our data suggest that PoGO may be a reliable way to calculate effect size in a latency-based FA and could be useful for assessing trend across FA sessions. Implications of our findings are discussed.
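
For readers unfamiliar with the metric, the Python sketch below shows one way PoGO can be computed from condition means, following the general formula in Ferron et al. (2020): the change from the baseline level toward a goal level, expressed as a percentage of the total change needed to reach the goal. The latency values and goal are hypothetical, and this is an illustration rather than the authors' analysis code.

    # Minimal sketch of percent of goal obtained (PoGO) for latency data,
    # after the general formula in Ferron et al. (2020):
    #   PoGO = 100 * (treatment level - baseline level) / (goal - baseline level)
    # Condition levels are taken as simple means; data and goal are hypothetical.

    def pogo(baseline, treatment, goal):
        baseline_level = sum(baseline) / len(baseline)
        treatment_level = sum(treatment) / len(treatment)
        return 100 * (treatment_level - baseline_level) / (goal - baseline_level)

    # Hypothetical latencies (in seconds) to the target response:
    control_latencies = [12.0, 15.0, 10.0]  # control sessions
    test_latencies = [4.0, 3.5, 5.0]        # test-condition sessions
    goal_latency = 2.0                      # latency indicating full response strength

    print(f"PoGO = {pogo(control_latencies, test_latencies, goal_latency):.1f}%")
    # Because shorter latencies indicate a stronger response, both the numerator
    # and the denominator are negative here, so PoGO still comes out positive (~79%).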
 
