Association for Behavior Analysis International

The Association for Behavior Analysis International® (ABAI) is a nonprofit membership organization with the mission to contribute to the well-being of society by developing, enhancing, and supporting the growth and vitality of the science of behavior analysis through research, education, and practice.

48th Annual Convention; Boston, MA; 2022

Event Details


Symposium #33
CE Offered: BACB
Factors Influencing Preference and Reinforcer Assessment Outcomes
Saturday, May 28, 2022
10:00 AM–11:50 AM
Meeting Level 2; Room 251
Area: DDA; Domain: Applied Research
Chair: Pamela L. Neidert (The University of Kansas)
Discussant: Iser Guillermo DeLeon (University of Florida)
CE Instructor: Pamela L. Neidert, Ph.D.
Abstract: A substantial body of literature exists demonstrating the use of reinforcement to increase a wide range of socially important behaviors in numerous populations across a range of settings. Systematic preference assessments are effective and efficient methods for identifying stimuli that serve as reinforcers, and numerous studies have demonstrated the predictive validity of a variety of assessment methods. As a result, systematic preference and reinforcer assessments have become a standard feature of both research and clinical practice. However, it has also been shown that numerous factors can influence preference assessment outcomes (presentation method, response requirements, assessment duration, consequence arrangements, etc.) and reinforcement effects (reinforcement parameters and type of schedule arranged during the assessment). The purpose of this symposium is to present the results of four studies examining the influence of a number of these factors. Findings will be discussed in terms of implications for both researchers and practitioners.
Instruction Level: Intermediate
Keyword(s): preference assessment, reinforcer assessment
Target Audience: (1) experience conducting preference and/or reinforcer assessments; (2) intermediate conceptual knowledge of basic concepts and principles of behavior analysis
Learning Objectives: At the conclusion of the presentation, participants will be able to: (1) explain the approach to evaluating the reliability and predictive validity of alternate preference assessment modalities; (2) comment on the extent to which session-end criteria can influence break points obtained for individuals responding on progressive-ratio schedules; (3) describe why practitioners may gain the same information from conducting half the number of trials of a paired-stimulus preference assessment as from conducting all trials; and (4) tact that increasing response requirements during paired-stimulus and multiple-stimulus preference assessments may not result in systematic and reliable shifts in preference hierarchies.
 
Does Adding Effort to Preference Assessment Alter the Conclusions?
Tracy Argueta (University of Florida), Nathalie Fernandez (Kennedy Krieger), Brooke Sprague (University of South Florida), Iser Guillermo DeLeon (University of Florida), PAIGE TALHELM (University of Florida)
Abstract: Several authors have suggested that preference assessments conducted under more stringent conditions that approximate the target clinical context may make better predictions about the relative effectiveness of reinforcers than those conducted under low-effort conditions. However, preference assessments conventionally involve providing access to stimuli contingent on low-effort selection responses such as reaching or pointing. As a first step toward addressing this question, we sought to determine whether preference assessment outcomes differed under low- and high-effort conditions with four individuals with autism, ages 3 to 19. Specifically, we compared the outcomes of paired-stimulus and multiple-stimulus-without-replacement preference assessments under low-effort and higher-effort conditions. In the lower-effort condition, we conducted “standard” assessments requiring only a selection response. In the higher-effort condition, participants made selections only after completing tasks on a fixed-ratio (FR) schedule similar to that used in their typical clinical programming. Our analysis of changes in stimulus ranks indicated that increasing response requirements did not generally result in systematic and reliable shifts in preference hierarchies.
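The rank-shift analysis described above can be illustrated with a minimal Python sketch; the stimuli and selection percentages below are hypothetical and are not data from the study.

# Hypothetical illustration of comparing stimulus ranks across low- and
# high-effort preference assessment conditions.

def rank_stimuli(selection_pct):
    """Return a dict mapping stimulus -> rank (1 = most selected)."""
    ordered = sorted(selection_pct, key=selection_pct.get, reverse=True)
    return {stim: i + 1 for i, stim in enumerate(ordered)}

# Invented selection percentages for four example stimuli.
low_effort = {"tablet": 90, "puzzle": 70, "blocks": 40, "book": 20}
high_effort = {"tablet": 85, "puzzle": 55, "blocks": 50, "book": 25}

low_ranks = rank_stimuli(low_effort)
high_ranks = rank_stimuli(high_effort)

for stim in low_ranks:
    shift = high_ranks[stim] - low_ranks[stim]
    print(f"{stim}: low-effort rank {low_ranks[stim]}, "
          f"high-effort rank {high_ranks[stim]}, shift {shift:+d}")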
 

Effects of Session-End Criteria on Break Points and Problem Behavior During Progressive Ratio Assessments
Franchesca Izquierdo (University of Miami), YANERYS LEON (University of Miami), Kamila Garcia Garcia Marchante (University of Miami)
Abstract: Basic research has shown that session-end criteria can influence break points obtained for pigeons responding on progressive-ratio schedules. Although applied researchers have used progressive-ratio schedules to assess the reinforcing efficacy of stimuli in clinical populations, there remains a dearth of evidence on optimal parameters (i.e., step size, session-end criteria) of progressive-ratio schedules in this context. The purpose of this study was to evaluate the extent to which session-end criteria affect break points and problem behavior of 5 children with IDD responding on progressive-ratio schedules. We retroactively examined data obtained in Leon et al. (2020) and applied the following session-end criteria to second-by-second data streams: 1 min, 2 min, and 3 min of no target responding. Break points were nearly identical in the 2- and 3-min criterion sessions for all 5 participants, whereas break points were lower for 3 of 5 participants in the 1-min criterion condition. Additionally, we observed a parametric effect on the occurrence of problem behavior as the session-end criterion increased (i.e., more problem behavior in the 3-min condition relative to the 2-min condition, and more in the 2-min condition relative to the 1-min condition).
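For readers less familiar with progressive-ratio analyses, the simplified Python sketch below shows how a break point can be recomputed from a second-by-second response stream under different session-end criteria; the response times, step size, and criterion values are hypothetical, and this is not the authors' analysis code.

# Hypothetical sketch: derive a break point (last ratio completed) from a
# stream of response times, ending the session after a pause of a given length.

def break_point(response_times, step_size, criterion_s):
    """Return the last PR requirement completed before a pause >= criterion_s.

    response_times: sorted times (seconds) of each target response.
    step_size: amount the ratio requirement grows after each completion.
    criterion_s: session ends after this many seconds with no response.
    """
    requirement = step_size
    completed = 0
    responses_toward_requirement = 0
    last_response = 0.0
    for t in response_times:
        if t - last_response >= criterion_s:
            break  # session-end criterion met before this response
        last_response = t
        responses_toward_requirement += 1
        if responses_toward_requirement >= requirement:
            completed = requirement
            requirement += step_size
            responses_toward_requirement = 0
    return completed

responses = [2, 5, 9, 14, 20, 27, 35, 44, 120, 125, 131, 136, 140, 144]  # invented
for criterion in (60, 120, 180):  # 1-, 2-, and 3-min session-end criteria
    print(f"{criterion}-s criterion -> break point: {break_point(responses, 2, criterion)}")

With these invented data, the 1-min criterion yields a lower break point (4) than the 2- and 3-min criteria (6), mirroring the pattern described in the abstract.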

 
An Evaluation of an Electronic Picture-Based Multiple-Stimulus-Without-Replacement Preference Assessment
KATHRYN A GORYCKI (The University of Kansas), Breanna R Roberts (University of Kansas), Ashley Romero (University of Kansas), Pamela L. Neidert (The University of Kansas)
Abstract: Reinforcers are critical for skill acquisition and behavior reduction for children with and without intellectual and developmental disabilities (IDD), and identification of potential reinforcers via direct stimulus preference assessment (SPA) is a routine part of early childhood education and intervention. Alternate stimulus modalities (e.g., pictorial, verbal, video) have been evaluated in an attempt to decrease administration time and to allow assessment of protracted events and events that are difficult to present during the assessment (Heinicke, 2019). Some studies have shown correspondence between the outcomes of alternative-modality SPAs and those of tangible-stimulus SPAs. However, many of these studies provided access to the actual stimuli contingent upon selection, which limits the potential advantage of decreased assessment time. Further, few studies have examined electronic pictorial stimuli as the presentation stimuli. The purpose of this study is to evaluate the reliability and predictive validity of using electronic-picture stimuli during multiple-stimulus-without-replacement (MSWO) preference assessments (without contingent access for selection). Specifically, we conducted numerous daily session blocks for each participant, each consisting of three pairs (i.e., three rounds) of electronic-picture versus actual-item MSWOs followed by a reinforcer assessment of the highest-preferred stimuli identified by both preference assessments. The study will be conducted with at least 6 children. To date, three young children with no known diagnoses have participated. Preliminary results show relatively high reliability of the electronic-picture MSWO for only 1 of 3 children; however, predictive validity was relatively low for all children.
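As a rough illustration of how an MSWO hierarchy can be summarized across repeated administrations, the short Python sketch below averages each stimulus's selection order; the stimuli and selection orders are invented for the example and do not come from the study.

# Hypothetical MSWO scoring: each administration lists stimuli in the order
# selected, and stimuli are ranked by their mean selection position.

from statistics import mean

epic_rounds = [
    ["bubbles", "music", "ball", "book"],
    ["music", "bubbles", "ball", "book"],
    ["bubbles", "music", "book", "ball"],
]

def mswo_hierarchy(rounds):
    """Return stimuli ordered from most to least preferred (lowest mean selection order)."""
    positions = {}
    for administration in rounds:
        for pos, stim in enumerate(administration, start=1):
            positions.setdefault(stim, []).append(pos)
    return sorted(positions, key=lambda s: mean(positions[s]))

print(mswo_hierarchy(epic_rounds))  # ['bubbles', 'music', 'ball', 'book']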
 
Evaluating a Briefer Version of Paired-Stimulus-Preference Assessments
MARY KATHERINE CAREY (Glenwood, Inc), Renea Rose (Glenwood, Inc)
Abstract: The utility of paired-stimulus preference assessments for identifying an array of potentially reinforcing stimuli is well documented in the applied-behavior-analytic literature. However, practitioners have not yet been given guidance as to how many trials of the assessment are necessary to obtain a reliable rank order of stimuli. Thus, the current study determined whether conducting 50% of the trials of a paired-stimulus preference assessment yielded the same results as conducting all trials in terms of the rank order and percentage selection of stimuli. Additionally, Spearman's correlation coefficients were calculated to quantify the mean correlation between the rank order of stimuli from the partial assessment and that from the full assessment. A post-hoc analysis of 30 archival paired-stimulus data sets gathered from a center serving individuals with autism was conducted. Results thus far show that the mean correlation coefficients exceeded a critical r value of 0.60 for every data set analyzed. Therefore, practitioners may gain the same information from conducting half the number of trials of a paired-stimulus preference assessment as from conducting all trials.
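The comparison described above can be sketched in a few lines of Python using Spearman's rank correlation; the ranks below are hypothetical, and the 0.60 critical value is taken from the abstract.

# Hypothetical comparison of the rank order from the first 50% of
# paired-stimulus trials with the rank order from the full assessment.

from scipy.stats import spearmanr

# Ranks for the same six stimuli (1 = most preferred); values are invented.
full_assessment_ranks = [1, 2, 3, 4, 5, 6]
half_assessment_ranks = [1, 3, 2, 4, 6, 5]

rho, p_value = spearmanr(full_assessment_ranks, half_assessment_ranks)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
# A rho above the study's critical value of 0.60 would suggest the abbreviated
# assessment preserves the rank order obtained from the full assessment.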
 
