|Procedural Refinements in Skill Acquisition Research|
|Sunday, May 27, 2018|
|4:00 PM–5:50 PM|
|Manchester Grand Hyatt, Grand Hall D|
|Area: AUT/DDA; Domain: Applied Research|
|Chair: Regina A. Carroll (Munroe-Meyer Institute, University of Nebraska Medical Center)|
|Discussant: Dorothea C. Lerman (University of Houston-Clear Lake)|
|CE Instructor: Regina A. Carroll, Ph.D.|
The collection of studies in this symposium will explore how variations in prompting procedures, reinforcement procedures, and mastery criteria can influence the acquisition and maintenance of skills. First, Lauren Schnell will present a study evaluating the use of an individualized assessment to identify the most efficient prompt and prompt-fading procedures for individuals with autism. Second, Jennifer Owsiany will present a study examining the use of an abbreviated assessment to compare the effectiveness of consequences for correct responses on skill acquisition during discrete-trial instruction. Third, Sarah Richling will present a study assessing the relation between a mastery criterion rule and skill maintenance for individuals with intellectual disabilities. Fourth, Brittany Juban will present a study evaluating a component- or trial-based mastery criterion when teaching a least-to-most prompting strategy to undergraduate students. Finally, Dorothea Lerman will discuss notable components of each study and describe future areas of research on skill acquisition.
|Keyword(s): Discrete-Trial Instruction, Mastery criterion, Skill Acquisition|
|Target Audience: |
The target audience is practitioners and researchers in applied behavior analysis.
Using Assessment to Identify Learner-Specific Prompt Type and Prompt-Fading Procedures
|Lauren Schnell (Caldwell University), Jason C. Vladescu (Caldwell University), April N. Kisamore (Caldwell University), Ruth M. DeBar (Caldwell University), SungWoo Kahng (University of Missouri), Kathleen Emily Marano (Caldwell University), Casey Nottingham (Caldwell University)|
Assessment plays a vital role in the programming and education of students with autism spectrum disorder (ASD). To date, only a handful of studies have evaluated the use of assessment to identify the most efficient instructional practices for individuals with ASD. This is problematic because these individuals often have difficulty acquiring skills, and procedures that are efficient for one individual may not be for others. We conducted individualized instructional assessments to identify the most efficient prompt type (model, partial physical, full physical) and prompt-fading procedure (progressive delay, most-to-least, least-to-most) for teaching auditory-visual conditional discriminations (AVCDs) to individuals with ASD. We determined efficiency by measuring the total number of trials and training sessions required to reach mastery, as well as the total training time and mean training time per mastered target for each of the conditions. Each assessment was conducted at least twice to establish generality. To validate our assessment results, we combined the most efficient and least efficient instructional components into treatment packages and applied them to teach a novel set of AVCDs with each participant.
Using an Abbreviated Assessment to Identify Effective Consequences for Correct Responses for Individual Learners During Discrete-Trial Instruction
|JENNIFER M OWSIANY (West Virginia University), Regina A. Carroll (Munroe-Meyer Institute, University of Nebraska Medical Center), Natalie Ruth Shuler (West Virginia University)|
Previous research suggests that the type of consequence provided for correct responses during discrete-trial instruction can influence skill acquisition for children with intellectual disabilities. The most effective consequence tends to vary across learners, suggesting the need to conduct individualized assessments. In the current study, we used an abbreviated assessment to compare the effectiveness of consequences for correct responses on skill acquisition for three children with intellectual disabilities. During the abbreviated assessment, we sampled participants' responding with each procedure for up to 60 trials and completed the assessment before participants reached our mastery criterion. Then, we used the results of the abbreviated assessment to predict the most effective and efficient procedure for each participant. Next, we conducted validation assessments, comparing the number of sessions and time required for participants to master targets with each procedure. Finally, we assessed participants' preferences for the different consequences using a concurrent-chains assessment. The results suggest that an abbreviated assessment may be a useful tool for identifying consequences for correct responses that will lead to the quickest skill acquisition for individual learners.
The Effects of Different Acquisition Mastery Criteria on the Skill Maintenance of Children With Developmental Disabilities
|SARAH M. RICHLING (Auburn University), W. Larry Williams (University of Nevada, Reno), James E. Carr (Behavior Analyst Certification Board)|
The demonstration of behavioral acquisition and the maintenance of performance following treatment are fundamental within the fields of behavior analysis and education. The acquisition of skills for individuals with intellectual disabilities and autism has historically focused on the attainment of a certain mastery criterion. The current study involved a survey of the clinical practices of Board Certified Behavior Analysts (BCBAs) and Doctoral Board Certified Behavior Analysts (BCBA-Ds), which indicated that the most commonly used mastery criterion is 80% accuracy for three consecutive sessions. Based on these results, a series of experiments was conducted to evaluate the extent to which the adoption of this mastery criterion rule is correlated with skill maintenance for individuals diagnosed with developmental disabilities and autism spectrum disorders. Results demonstrate that this mastery criterion rule alone may be insufficient to produce maintained skill performance; maintenance failures emerged as early as one week post-acquisition in approximately half of the data sets. Implications and future research are discussed.
A Comparison of Component and Trial-Based Mastery Criteria on Outcomes of Video Modeling Training
|BRITTANY ANN JUBAN (May Institute), Tiffany Kodak (University of Wisconsin–Milwaukee), Jennifer Martin (University of Wisconsin–Milwaukee), Victoria Fletcher (University of Wisconsin–Milwaukee)|
Mastery criteria are used as an objective marker to evaluate when an individual has mastered a skill. It is common to use a criterion of at least 90% correct responding when teaching adults or staff more complex skills (e.g., Deliperi, Vladescu, Reeve, & DeBar, 2015; Gianoumis, Seiverling, & Sturmey, 2012). However, mastery criteria may be misleading when identifying the extent to which an individual has learned a new skill that requires many training or component steps. The purpose of the current study was to evaluate a component- or trial-based mastery criterion when teaching the least-to-most prompting strategy to three undergraduate students. More specifically, we evaluated the duration of training and the accuracy of each of the twelve component steps in an error analysis, comparing a mastery criterion of 90% correctly completed steps to one of 90% correctly completed trials. Results showed that the less stringent criterion of correctly completed steps overestimated mastery of the skill, and error analyses revealed that several critical steps of training were missed. These findings suggest that a more stringent trial-based mastery criterion may be necessary when teaching more complex skills. Clinical implications for choosing a mastery criterion for staff training and directions for future research will be discussed.