Association for Behavior Analysis International

The Association for Behavior Analysis International® (ABAI) is a nonprofit membership organization with the mission to contribute to the well-being of society by developing, enhancing, and supporting the growth and vitality of the science of behavior analysis through research, education, and practice.

49th Annual Convention; Denver, CO; 2023


Symposium #142
CE Offered: BACB
Staff Training Approaches to Improve Procedural Fidelity
Sunday, May 28, 2023
8:00 AM–9:50 AM
Convention Center 406/407
Area: TBA; Domain: Applied Research
Chair: Haven Sierra Niland (University of North Texas)
Discussant: Tiffany Kodak (Marquette University)
CE Instructor: Tiffany Kodak, Ph.D.
Abstract:

Procedural fidelity is the degree to which interventions are implemented as prescribed. Behavior analysts may use a variety of tools to train behavior change agents to implement procedures with high fidelity. This symposium includes four studies that evaluated staff training approaches to improve procedural fidelity. Aguilar et al. will present research evaluating the effects of training novice staff to collect procedural-fidelity data on their subsequent implementation of differential reinforcement of other behavior (DRO) procedures. Lionello-Denolf et al. will present the effects of teaching staff to collect fidelity data with Train to Code software on their fidelity during discrete trial instruction (DTI). Lai et al. will present the effects of in-vivo self-monitoring on procedural fidelity for staff implementing DTI. Finally, Bartle et al. will present a comparison of video modeling with exemplars only versus exemplars and nonexemplars to train staff to implement DTI and preference assessments. Presentations will be followed by a discussion of implications for research and practice.

Instruction Level: Intermediate
Keyword(s): data collection, procedural fidelity, staff training, treatment integrity
Target Audience:

supervising BCBAs and behavior-analytic researchers

Learning Objectives: (1) Define procedural fidelity; (2) describe in-vivo self-monitoring; (3) describe the effects of training staff to collect procedural-fidelity data on implementation.
 
Impacts of Collecting Fidelity Data on Subsequent Implementation
MARISELA ALICIA AGUILAR (West Virginia University), Abbie Cooper (West Virginia University), Claire C. St. Peter (West Virginia University)
Abstract: Training staff to implement behavior interventions with high fidelity is critical to client success. Performance feedback is effective at improving fidelity; however, feedback requires a trained professional's time, which may be impractical in some situations. Therefore, the purpose of this study was to evaluate antecedent approaches that may improve the procedural fidelity of novice implementers. We conducted two experiments using a group design and a community sample to determine the impacts of fidelity data collection on subsequent fidelity of implementation. Across both experiments, participants in both groups watched videos of a resetting differential reinforcement of other behavior (DRO) procedure implemented with varying levels of fidelity. The independent variable was whether participants collected fidelity data on the therapist's implementation while watching the videos, and the primary dependent variable was each participant's fidelity when role-playing implementation of the DRO procedure. In Experiment 1, participants received only instructions about how to collect data, and neither group consistently achieved mastery, suggesting that other training components may be needed. In Experiment 2, we provided more intensive training on data collection. Results from this study inform ways to train novice implementers to implement behavior-change procedures with high fidelity and to increase the accuracy of their fidelity data collection.
 

Teaching Discrete-Trial Implementation With Train-to-Code

KAREN M. LIONELLO-DENOLF (Assumption University), David A. Eckerman (University of North Carolina, Chapel Hill), Rebecca Hise (Central Massachusetts Collaborative), Elizabeth Pinzino (Central Massachusetts Collaborative), Roger D. Ray ((AI)2, Inc.; Rollins College)
Abstract:

Entry-level personnel (e.g., behavior therapists) frequently deliver discrete trial (DT) programs to students with autism. Because outcomes are related to intervention quality, staff must deliver these programs correctly. The Train to Code (TTC) software teaches observation skills using behavior-analytic adaptive instruction methods. To accomplish this, TTC presents a series of video clips depicting a teacher delivering trials of receptive labeling, social questions, and motor imitation programs. In many clips, the teacher makes an error. By entering a code, the trainee indicates whether the teacher correctly followed the steps or, if not, which error was made. TTC increases or decreases the degree of prompting given to the trainee based on coding accuracy. The efficacy of using TTC to train staff to deliver DT programs was tested in a pre-/posttest and multiple-baseline design. Pre- and posttests were role plays in which participants (undergraduates studying applied behavior analysis or behavior therapists) acted as teachers delivering DT programs to a scripted research assistant. Participants completed one to three pretests, TTC training, and one to two posttests. Interobserver agreement and procedural integrity on role plays were above 90%. Results indicated substantial improvement in DT delivery at posttest. TTC may be an effective method for training delivery of DT programs in applied settings.

 

Can In-Vivo Self-Monitoring Improve Discrete Trial Instruction (DTI) Implementation?

RAY LAI (University of North Texas and UNT Kristin Farmer Autism Center), Samantha Bergmann (University of North Texas), Walberto Resendez (University of North Texas), Julia Wang (Texas Academy of Mathematics and Science at the University of North Texas), Katherine Drummond (University of North Texas)
Abstract:

Beneficial consumer outcomes are most likely when behavior-analytic interventions are implemented with high procedural fidelity (i.e., the degree to which a procedure is implemented as intended). Video self-monitoring, in which staff members monitor their own behavior while watching recordings of themselves, can improve and maintain high procedural fidelity, but it requires additional staff time and resources. In-vivo self-monitoring, in which staff monitor their behavior during or immediately after implementing a behavior-analytic intervention, could be a cost-effective alternative but requires additional research. The current study assessed the effects of in-vivo self-monitoring with checklists on the procedural fidelity of three behavior technicians implementing discrete trial instruction with children with autism. We used a nonconcurrent multiple-baseline-across-participants design, and our data suggested that in-vivo self-monitoring was effective for two of the three participants. Procedural fidelity for the third participant did not increase with in-vivo or video self-monitoring. Results of this evaluation were used to inform supervisors about the efficacy of self-monitoring for each staff member, and implications of using self-monitoring in practice will be discussed.

 
The Effects of Training Containing Different Exemplar Types on Procedural Integrity
GRACE ELIZABETH BARTLE (University of Kansas), Sandra Alex Ruby (University of Kansas), Florence D. DiGennaro Reed (University of Kansas)
Abstract: This study evaluated the efficacy of training containing different exemplar types on the procedural integrity of discrete trial teaching (DTT) and a multiple-stimulus-without-replacement (MSWO) preference assessment. A multielement comparison design was used to evaluate the effects of training that used video modeling with voiceover instruction containing exemplars only (E) or both exemplars and nonexemplars (E+NE). Both types of training increased procedural integrity for all participants, and performance maintained at follow-ups conducted two and five weeks later. Mean posttraining integrity was 87.9% for the E condition and 96.8% for the E+NE condition; follow-up performance was 90.0% for E and 96.7% for E+NE. These data provide preliminary evidence of benefits to incorporating nonexemplars into training, which required roughly 2 min of additional training time. Nevertheless, participants may have found the MSWO more difficult to implement than DTT, which could have influenced the results. Finally, all participants reported satisfaction with the trainings.
 
