Association for Behavior Analysis International

The Association for Behavior Analysis International® (ABAI) is a nonprofit membership organization with the mission to contribute to the well-being of society by developing, enhancing, and supporting the growth and vitality of the science of behavior analysis through research, education, and practice.

51st Annual Convention; Washington DC; 2025

Event Details


Symposium #454
CE Offered: BACB/IBAO
Parametric and Comparative Analyses of Procedural Fidelity Errors
Monday, May 26, 2025
4:00 PM–4:50 PM
Convention Center, Street Level, 150 AB
Area: EAB/OBM; Domain: Translational
Chair: Paige O'Neill (University of Nebraska Medical Center - Munroe-Meyer Institute)
CE Instructor: Paige O'Neill, M.A.
Abstract: Errors in the implementation of behavioral interventions can negatively impact treatment efficacy, and better understanding the extent of these effects is crucial to promoting best outcomes. This symposium will present three studies examining the impact of procedural fidelity errors on behavior reduction and skill acquisition interventions. The first study will present a parametric evaluation of procedural fidelity level (i.e., 100%, 80%, 60%, 40%, and 20%) during differential reinforcement of other behavior (DRO) with combined commission and omission errors. Results demonstrate that DRO can withstand some degree of fidelity error without losing efficacy. The second will compare the efficacy of differential reinforcement of alternative behavior (DRA) implemented during extended periods of high- or low-fidelity (i.e., 100%/50%) sessions relative to rapidly alternating sessions of high and low fidelity. Findings suggest that DRA may remain effective when errors are interspersed, highlighting the role of experimental design in mitigating fidelity issues. The final study will present a parametric evaluation of varied and interspersed levels of procedural fidelity (i.e., 100%, 67%, 33%) during acquisition of match-to-sample skills, with an examination of skill acquisition when procedural fidelity errors are corrected. Results show that procedural errors can hinder progress, but correcting these errors can still lead to task mastery.
Instruction Level: Intermediate
Keyword(s): human operant, procedural fidelity, translational research
Target Audience: graduate students, researchers, BCBAs
Learning Objectives: 1. Describe the impact of varying levels of procedural fidelity on the efficacy of behavioral interventions
2. Describe the influence of patterns of procedural fidelity on the efficacy of behavioral interventions
3. Discuss how research findings apply to behavioral interventions in applied settings
 

Parametric Evaluation of Treatment Integrity Level During Differential Reinforcement of Other Behavior (DRO)

(Applied Research)
PAIGE O'NEILL (University of Nebraska Medical Center - Munroe-Meyer Institute), Catalina Rey (University of Nebraska Medical Center's Munroe-Meyer Institute), Thomas Eilers (University of Nebraska - Omaha)
Abstract:

Differential reinforcement of other behavior (DRO) is a strategy commonly used to address challenging behavior but can be difficult to implement with high treatment integrity. While it is ideal that behavioral treatments be implemented with perfect treatment integrity, previous research has demonstrated that treatment effects can be observed even with occasional treatment integrity errors. However, the impact of various degrees of treatment integrity on outcomes of DRO is still unclear. The purpose of the current study was to conduct a parametric analysis of treatment integrity level (100%, 80%, 60%, 40%, 20%) in DRO in a human operant arrangement using a multiple treatments reversal design. Participants engaged with a computer program in which mouse-clicks on a moving button on the screen served as the target response and a proxy for challenging behavior. Rate of target responding was compared across baseline, DRO with 100% treatment integrity, and reduced-integrity DRO (i.e., 80%, 60%, 40%, and 20% treatment integrity). Results indicate that DRO may be robust despite some level of treatment integrity error. These findings will inform future evaluations of treatment integrity errors in DRO and may inform mediator training practices.

 

Impacts of Experimental Design on Outcomes of Differential Reinforcement of Alternative Behavior With Reduced Procedural Fidelity

(Basic Research)
OLIVIA HARVEY (West Virginia University), Claire C. St. Peter (West Virginia University)
Abstract:

Differential reinforcement of alternative behavior (DRA) effectively reduces challenging behavior by reinforcing an alternative behavior. However, its efficacy can be compromised if it is implemented with errors. With other procedures (e.g., response interruption and redirection [RIRD], DRO), interspersing high- and low-fidelity sessions has reduced the negative effects of fidelity errors, but this has not yet been demonstrated with DRA. Therefore, the purpose of the present study was to identify how experimental design (phase-wide errors or interspersed high- and low-fidelity sessions) impacted responding when fidelity was reduced. To address this question, we used a within-subject translational model. Undergraduate students engaged in arbitrary responses on a computer-based task that were analogous to challenging and appropriate behavior. Participants experienced three conditions: baseline, DRA 100%, and DRA 50%. During the phase-wide errors phase, participants experienced an extended period of DRA 100% followed by an extended period of DRA 50%. During the interspersed high- and low-fidelity phase, participants rapidly alternated between the DRA 100% and DRA 50% conditions. Fidelity errors had differential effects based on the experimental design. In some cases, DRA was still effective when implemented with interspersed errors.

 

A Parametric Analysis of Varying Levels of Consequence-Based Procedural Fidelity Errors When Learning New Tasks

(Basic Research)
SYDNEY BURLISON (California State University, Sacramento), Denys Brand (California State University, Sacramento), Joshua Bensemann (The University of Auckland, New Zealand)
Abstract:

A limitation of the parametric literature regarding procedural errors is that individual stimulus sets or tasks are typically implemented with one level of fidelity throughout a study. Currently, no studies have investigated how implementing procedures with varying levels of fidelity affects learning. In addition, a small body of research indicates that a history of errors may impact future learning (e.g., Bergmann et al., 2021; Hirst & DiGennaro Reed, 2015). In Experiment 1, we varied the level of procedural fidelity across one of three stimulus groups during a matching-to-sample task. Subsequently, we implemented a 100% fidelity phase for the varying group of stimuli to evaluate how initially exposing learners to varying levels of fidelity affects ongoing performance once errors are corrected. In Experiment 2, we systematically replicated Experiment 1 and included an incentive component to further investigate how a cumulative point system and the opportunity to obtain additional reinforcers would affect learning. Overall, the results showed that introducing procedural errors at varying levels can be detrimental to learning. However, when errors are corrected, individuals can still master the task.

 
