|Assessing Treatment Integrity and Staff Training When Delivering Behavioral Services to Individuals With Disabilities|
|Monday, May 30, 2016|
|10:00 AM–11:50 AM |
|Columbus Hall GH, Hyatt Regency, Gold East|
|Area: AUT/TBA; Domain: Applied Research|
|Chair: Denys Brand (University of Kansas)|
|Discussant: Florence D. DiGennaro Reed (University of Kansas)|
|CE Instructor: Layla Abby, Ph.D.|
Treatment integrity is the extent to which interventionists implement treatment procedures as prescribed. Treatment integrity errors can adversely affect treatment effectiveness and impede learning for individuals with disabilities such as autism spectrum disorder. It is important to develop and refine methods of identifying treatment integrity errors and teaching staff to implement procedures consistent with their prescribed protocols. This symposium focuses on describing procedures for assessing treatment integrity and for teaching staff to implement procedures as prescribed. Brand will present the results of a study in which a conditional probability analysis identified and quantified between-trials treatment integrity errors that occurred during discrete-trial teaching error-correction procedures. Abby will detail a study on the use of enhanced data sheets when assessing treatment integrity in conditional discrimination training. Luck will describe a study comparing the effectiveness of, and preference for, three different feedback strategies (written, vocal, and video feedback) when training special education teachers. Colon-Kwedor will present the results of two studies containing a comprehensive analysis of treatment integrity with response interruption and redirection in both naturalistic and experimental settings. These methods were effective in identifying treatment errors and teaching procedures to staff.
Assessing the Treatment Integrity of Discrete-Trial Teaching Error-Correction Procedures Using Conditional Probabilities
|DENYS BRAND (University of Kansas), Douglas Elliffe (University of Auckland), Florence D. DiGennaro Reed (University of Kansas)|
Treatment integrity is the extent to which interventionists implement treatment procedures consistent with their prescribed protocol. Research shows that discrete-trial teaching (DTT) is most effective when administered with high levels of treatment integrity. The majority of treatment integrity research involving DTT focuses on treatment integrity on a within-trial basis. However, treatment integrity errors can also occur on a between-trials basis. The aim of this study was to use conditional probability matrices to identify and quantify between-trials treatment integrity errors occurring during error-correction procedures. We video-recorded therapy sessions for three therapist-learner dyads at the time and place where DTT sessions usually took place. The learners were children with autism spectrum disorder who received DTT as part of their regular teaching programs. The conditional probability matrices were effective in identifying and quantifying error-correction treatment integrity errors. We also found that high levels of within-trial treatment integrity did not correspond to high levels of treatment integrity for the error-correction procedures.
Effects of Standard and Enhanced Data Sheets on Implementation of Conditional Discrimination Training
|LAYLA ABBY (Trumpet Behavioral Health), Linda A. LeBlanc (Trumpet Behavioral Health), Justin B. Leaf (Autism Partnership Foundation), Joseph H. Cihon (University of North Texas)|
Green (2001) and Grow and LeBlanc (2012) described practice recommendations for conducting conditional discrimination training. Grow and LeBlanc provided an example of a data sheet with a preset target stimulus for each trial, along with a counterbalanced three-item array of comparison stimuli, to enhance the accuracy of implementation of the recommended practices. The current study evaluated whether this enhanced data sheet would produce higher procedural integrity on the practice recommendations than a standard data sheet (i.e., one in which targets and arrays are not preset). Behavior therapists from two provider agencies were randomly assigned to either the standard data sheet condition or the enhanced data sheet condition (i.e., the Grow & LeBlanc example). Participants watched a short video on the practice recommendations for a matching task and received an orientation to the data sheet for their assigned condition. They then used the assigned data sheet while implementing the matching task with a confederate serving in the role of a child with autism. Currently, 22 participants have completed the study (11 per condition), and the final sample will include 40 participants. To date, the enhanced data sheet has produced higher procedural implementation on each of the four targeted practice recommendations.
A Comparison of Written, Vocal, and Video Feedback When Training Teachers
|KALLY LUCK (University of Houston-Clear Lake), Dorothea C. Lerman (University of Houston-Clear Lake), Danielle Dupuis (University of Houston-Clear Lake), Wai-Ling Wu (University of Houston-Clear Lake), Louisa Hussein (University of Kansas)|
This study compared the effectiveness of and preference for three different feedback strategies when training six special education teachers during a 5-day summer training program. In Study 1, teachers received written or vocal feedback while learning to implement two different types of preference assessments (paired-stimulus and multiple-stimulus-without-replacement). Written feedback was more effective than vocal feedback for three teachers, and vocal feedback was more effective than written feedback for two teachers. In Study 2, we compared the most effective feedback strategy from Study 1 to video-assisted feedback while training the teachers to implement two forms of discrete-trial training, one involving least-to-most prompting and the other involving most-to-least prompting. Video-assisted feedback was the most effective method for three teachers, and vocal feedback was the most effective for one teacher. However, vocal feedback was the most preferred method for all of the teachers. Results have important implications for the use of feedback with teachers.
An Analysis of Treatment Integrity of Response Interruption and Redirection
|CANDICE COLON-KWEDOR (Western New England University & The May Institute), William H. Ahearn (New England Center for Children)|
Response interruption and redirection (RIRD) has been shown to effectively decrease stereotypy, but its application outside an experimental setting has not been well studied. In Experiment 1, decreases in automatically maintained vocal stereotypy were obtained following RIRD treatment in a controlled setting for three participants diagnosed with an autism spectrum disorder (ASD). Descriptive data on the consistency and accuracy of treatment implementation were then collected in the classroom setting. Results showed that treatment implementation varied across participants (Participant 1, M=60.0%; Participant 2, M=89.7%; Participant 3, M=41.1%) and across staff members (range, 0-100%). Failure to implement the treatment was the most common error; however, when RIRD was implemented, its components were carried out as prescribed with high integrity. In Experiment 2, three participants were exposed to a parametric analysis in a controlled setting. The results indicated that RIRD was effective at 50% treatment implementation or higher. Furthermore, when 25% implementation was interspersed with booster sessions at 100% implementation, treatment effects were also maintained. An evaluation of the RIRD procedure in the clinical setting is discussed.