Association for Behavior Analysis International

The Association for Behavior Analysis International® (ABAI) is a nonprofit membership organization with the mission to contribute to the well-being of society by developing, enhancing, and supporting the growth and vitality of the science of behavior analysis through research, education, and practice.


48th Annual Convention; Boston, MA; 2022

Event Details



Symposium #190
CE Offered: BACB
Procedural Integrity: Current Practices and Areas for Improvement
Sunday, May 29, 2022
8:00 AM–9:50 AM
Meeting Level 2; Room 204A/B
Area: TBA; Domain: Applied Research
Chair: Stephanie Hope Jones (Salve Regina University)
Discussant: Timothy R. Vollmer (University of Florida)
CE Instructor: Stephanie Hope Jones, Ph.D.

Procedural integrity (i.e., the extent to which procedures are implemented as designed) is an important area of research and practice in behavior analysis. This symposium includes four data-based presentations centered on procedural integrity. The presentations will address how to select a measurement system for procedural integrity, how integrity errors affect common interventions, how frequently researchers report integrity measures in behavior-analytic articles, and the barriers and facilitators to reporting integrity described by behavior-analytic researchers.

Instruction Level: Intermediate
Keyword(s): procedural fidelity, procedural integrity, treatment integrity
Target Audience:

Supervising BCBAs; behavior-analytic researchers

Learning Objectives:
1. Describe considerations for selecting specific integrity measurement systems.
2. Describe impacts of integrity errors on common reinforcement-based interventions.
3. Describe current trends in how integrity is discussed in the behavior-analytic literature.
4. Describe considerations regarding reporting integrity in behavior-analytic research.
Through the Looking Glass and What We Found: Evaluating Multiple Treatment Integrity Measures
HAVEN SIERRA NILAND (University of North Texas), Valeria Laddaga Gavidia (University of North Texas; Kristin Farmer Autism Center), Samantha Bergmann (University of North Texas), Marcus Daniel Strum (University of North Texas), Marla Baltazar (University of North Texas), Bonnie Yuen (University of North Texas), Mike Harman (Briar Cliff University)
Abstract: Treatment integrity is the extent to which interventions are implemented as prescribed (Gresham et al., 2000). Behavior analysts should use treatment integrity data to inform programming decisions, evaluate the quality of intervention implementation, and guide training of behavior-change agents. However, there is no standard measure for collecting treatment integrity data; multiple measurement options exist, and they vary in utility and efficiency. The present study compared two measures of treatment integrity (Likert rating scales and occurrence/non-occurrence scores) using videos of discrete-trial instruction with a child with autism spectrum disorder. We analyzed integrity scores at the overall-session, trial, and component levels using each measure. Comparative analyses suggest that treatment integrity measures differ in the specificity of the information gathered, the degree to which intervention components were reported as implemented correctly, reliability between raters, and time to completion. Implications of these results for treatment integrity data collection by researchers and practitioners will be discussed.
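The occurrence/non-occurrence approach described above amounts to simple arithmetic: each component of each trial is scored 1 (implemented as prescribed) or 0, and integrity is the percentage of correct implementations. A minimal sketch of session-level and component-level scoring follows; the component names and data are illustrative, not taken from the study.

```python
# Hypothetical sketch of occurrence/non-occurrence integrity scoring for
# discrete-trial instruction. Component names and trial data are illustrative.

def integrity_by_component(trials):
    """Percent of trials on which each component was implemented correctly."""
    components = trials[0].keys()
    return {
        c: 100.0 * sum(t[c] for t in trials) / len(trials)
        for c in components
    }

def overall_integrity(trials):
    """Percent of all scored components (across trials) implemented correctly."""
    correct = sum(sum(t.values()) for t in trials)
    total = sum(len(t) for t in trials)
    return 100.0 * correct / total

# Each trial is scored 1 (occurred as prescribed) or 0 per component.
trials = [
    {"instruction": 1, "prompt": 1, "consequence": 1},
    {"instruction": 1, "prompt": 0, "consequence": 1},
    {"instruction": 1, "prompt": 1, "consequence": 0},
    {"instruction": 1, "prompt": 1, "consequence": 1},
]

by_component = integrity_by_component(trials)
overall = overall_integrity(trials)
print(by_component)      # instruction 100%, prompt 75%, consequence 75%
print(round(overall, 1)) # 83.3
```

Component-level scores pinpoint which step of the procedure drifts, while the single overall percentage is the form most often reported; the comparison in the talk concerns what each summary gains or loses in specificity.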
Interactive Effects of Treatment Type, Schedule Value, and Treatment Integrity on Treatment Outcomes
OLIVIA HARVEY (West Virginia University), Claire C. St. Peter (West Virginia University), Stephanie Hope Jones (Salve Regina University), Christa Lilly (West Virginia University), Kristian Kemp (West Virginia University)
Abstract: Effects of treatment-integrity failures have typically been studied with dense, fixed-ratio differential reinforcement of alternative behavior (DRA), but these schedules may not be typical in clinical practice. We evaluated the impact of 80% integrity on three common interventions (DRA, differential reinforcement of other behavior [DRO], and noncontingent reinforcement [NCR]) implemented at four schedule values (1, 5, 10, and 20). Fifteen undergraduates participated and were randomly assigned to one of three groups that varied by intervention type. Regardless of group assignment, each participant experienced all four schedule values and both full and reduced integrity in a reversal design. During the experiment, participants clicked on moving circles on a computer screen and earned points as reinforcers. Preliminary results suggest that the effects of 80% integrity differed across intervention types and schedule values. Integrity failures were more detrimental to DRO than to DRA or NCR, with loss of treatment effects even at 80% integrity. These results suggest that practitioners should be cautious when using DRO and NCR schedules and that implementation at 80% integrity may be insufficient to promote successful treatment outcomes.
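One common way to program "80% integrity" in arrangements like the one above is to withhold a fraction of earned reinforcers (omission errors): the participant completes the ratio requirement, but the programmed consequence is delivered on only 80% of completions. The sketch below illustrates that logic under stated assumptions; the function and parameter names are hypothetical, not from the study, and the study's actual error type and programming may differ.

```python
import random

# Hypothetical sketch of a fixed-ratio DRA arrangement at reduced integrity:
# 20% of earned reinforcers are withheld at random (omission errors).
# All names and values are illustrative, not drawn from the study.

def run_session(responses, fr_value, integrity=1.0, rng=None):
    """Return the number of reinforcers actually delivered in a session."""
    rng = rng or random.Random(0)       # fixed seed for a reproducible sketch
    earned = responses // fr_value      # schedule completions (ratio met)
    delivered = sum(1 for _ in range(earned) if rng.random() < integrity)
    return delivered

full = run_session(responses=100, fr_value=5, integrity=1.0)     # 20 earned, 20 delivered
reduced = run_session(responses=100, fr_value=5, integrity=0.8)  # 20 earned, ~16 delivered
print(full, reduced)
```

The same omission logic applies across schedule values (FR 1 through FR 20), which is what lets a study hold the integrity level constant while varying schedule density.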

Procedural Integrity Reporting in the Journal of Applied Behavior Analysis, 2006-2020

BRIAN LONG (West Virginia University), Denys Brand (California State University, Sacramento), Samantha Bergmann (University of North Texas), Claire C. St. Peter (West Virginia University), Marcus Daniel Strum (University of North Texas), Cody Lane McPhail (University of North Texas)

Abstract: Procedural integrity describes the extent to which a procedure is implemented as designed. Although scholars have called for consistent inclusion of integrity data since the 1980s, integrity measures remained infrequent through the early 2000s, and the details of integrity reporting had not been evaluated. Therefore, the purpose of this study was to characterize procedural integrity reporting in the Journal of Applied Behavior Analysis (JABA) between 2006 and 2020. We identified 649 experiments published in JABA that mentioned the terms integrity or fidelity. For each experiment, we first determined how the authors described integrity. Then, for experiments that collected integrity data, we coded the extent to which authors reported how frequently integrity data were collected, how integrity values were calculated, and the obtained integrity coefficients. Most coded studies measured integrity of the independent variable, with a slight upward trend from 2006 to 2020. We also noted increasing trends in descriptions of the frequency of integrity-data collection and of how integrity values were calculated. Obtained integrity values were almost always reported as percentages and were above 90%. These findings suggest promising trends but indicate the need for continued growth in the completeness of integrity reporting in JABA.

Perceived Barriers and Facilitators to Reporting Procedural Integrity Data in Behavior-Analytic Research
STEPHANIE HOPE JONES (Salve Regina University), Denys Brand (California State University, Sacramento), Lodi Lipien (University of South Florida), Claire C. St. Peter (West Virginia University), Jennifer Wolgemuth (University of South Florida)
Abstract: Researchers have called for increased reporting of procedural fidelity data in published studies. Although studies have reported moderate increases in reporting, recently collected data suggest that reporting of procedural fidelity data is not uniform in behavior-analytic research. To understand barriers and facilitators to reporting procedural fidelity data, we conducted six focus groups and one individual interview with behavior-analytic researchers who publish in the Journal of Applied Behavior Analysis. We analyzed the qualitative data and identified common patterns and themes among the facilitators of, and barriers to, reporting procedural fidelity data in behavior-analytic research.


