The Complexities of Ethical Decision Making
Sunday, May 28, 2023
5:00 PM–6:50 PM
Convention Center Mile High Ballroom 1C/D
Area: DDA/AUT; Domain: Translational
Chair: Stephanie M. Peterson (Western Michigan University)
Discussant: R. Wayne Fuqua (Western Michigan University)
CE Instructor: R. Wayne Fuqua, Ph.D.
Abstract: Decision making in clinical applications of behavior analysis is complex. This symposium will consist of three papers describing applied research and one conceptual paper. The first presentation will describe a study in which the component skills involved in ethical decision making were taught to students of behavior analysis. The actions students said they would take in response to scenarios were compared to one another and to the responses of experts. The results will be discussed in terms of instructional considerations for those teaching students to engage in ethical decision making. The second presentation is another applied study, in which the decisions of novice and expert behavior analysts were compared. Experts and novices were asked to rate the risk of conducting a functional analysis with and without a structured decision-making tool. The results showed that both experts and novices benefited from use of the tool. The third presentation will describe an applied study of the underlying behavioral processes involved in clinical decision making. Researchers manipulated the probability that decisions would be televisible and the probability that they would cause short-term harm to the client. A loss-discounting framework was used to analyze the variability in participants' responding. The final presentation will challenge the audience to consider ethical decision making in a new context: artificial intelligence. This paper will address the question, what are the ethical considerations when designing artificial intelligence technologies? Finally, the discussant will summarize the common themes of complex ethical decision making and their implications for the field.
Instruction Level: Intermediate
Keyword(s): decision making, ethics
Target Audience: Prerequisites include certification as a behavior analyst, detailed knowledge of the BACB's Code of Ethics, a detailed understanding of a variety of assessments and treatments commonly used in practice, and familiarity with common ethical challenges related to implementing these assessments and treatments.
Learning Objectives: At the conclusion of the presentation, participants will be able to: 1. Use a rubric/decision tree to improve the quality of recommended actions for ethical scenarios, 2. State factors that influence ethical decision-making and the difficulty of measuring risk and decision-making processes, 3. Describe how the probability of their clinical choices leading to harm or being observed by others influences the decisions they make, and 4. State open-ended questions the field should answer as AI begins to be used more frequently in ABA.

Using a Decision Tree to Evaluate Contextual Factors in Ethical Scenarios
(Applied Research)
VIDESHA MARYA (Endicott College; Village Autism Center), Mary Jane Weiss (Endicott College)
Abstract: The successful navigation of ethical dilemmas is an important skill set for practitioners of behavior analysis. Component skills include detecting the dilemma through ethical radar, considering core principles, identifying relevant codes, and weighing relevant contextual factors. Implementation and follow-up skills include analyzing effectiveness, determining whether additional action is needed, and building in preventative strategies for the future. Instruction in this skill set requires systematic ways to teach these components. Specifically, students need to learn to analyze contextual factors and to methodically navigate a wide variety of potential circumstances. In this study, students of behavior analysis were taught ethical navigation skills using the Behavior Analyst Certification Board ethical decision-making model, either on its own or supplemented with a worksheet. Their responses regarding the actions they would take were compared to one another and to those of experts in ethical conduct. Implications for instruction of this skill set are reviewed, as well as issues in the generalization and social validity of the instructional procedures and outcomes.

Expert and Novice Use of the Functional Analysis Risk Assessment Decision Tool
(Applied Research)
ALI SCHROEDER (Western Michigan University), Stephanie M. Peterson (Western Michigan University)
Abstract: Risk assessment and evaluation before behavioral assessment and intervention are required by the Behavior Analyst Certification Board® (BACB®) Ethics Code for Behavior Analysts (BACB, 2020), yet methods for doing so and the factors to consider are not readily available. Deochand et al. (2020) developed the Functional Analysis Risk Assessment Decision Tool (FARADT) to aid behavior analysts in ethical decision-making regarding whether to conduct a functional analysis. However, an empirical evaluation of whether use of the FARADT affects novice users' ratings of risk had not previously been conducted. The research discussed in this presentation evaluated expert and novice behavior analysts' ratings of risk, with and without access to the FARADT, when given scenarios in which a functional analysis was being considered. Results indicated that the FARADT decreased the variability of novices' risk ratings and produced risk ratings that more closely matched the intended risk level of each vignette for both experts and novices. These results provide preliminary evidence that decision-making tools may be helpful to both novice and expert behavior analysts and offer insight into the complex variables considered during risk assessment and decision-making.

Influence of Televisibility and Harm Probability on Clinical-Ethical Decision-Making
(Applied Research)
ALAN KINSELLA (Endicott College), David J. Cox (RethinkFirst; Endicott College), Asim Javed (Endicott College)
Abstract: Researchers have recently begun to use a behavioral economics framework to study the clinical-ethical decisions made by practicing behavior analysts. Much of this work, however, has examined broad patterns as opposed to isolating the underlying behavioral processes. In this study, we sought to extend past research by studying how clinical-ethical decisions would be influenced by a parametric manipulation of the probability that each available option would be televisible or cause short-term harm to the client. Behavior analysts (n = 15) were largely influenced only by the probability of short-term harm. In contrast, the control group (n = 30) was influenced by the probability each choice was televisible and the probability of short-term harm. Further, across all choices, control group participants showed a higher tendency than behavior analysts to not allow the individual to engage in the harmful behavior. Quantitative models built using machine learning algorithms were able to predict ~75% of choices made by participants using only the independent variables manipulated in this study. At the individual level, a probability loss discounting framework seemed to account for the data; however, deviations from traditional probability loss discounting methods provide many areas for future research. In total, the present experiment highlights the potential behavioral processes involved in clinical-ethical choices, similarities between individual and group-level responding, and areas where practicing behavior analysts may have preferences that differ from their clients or their clients’ caregivers.
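For readers unfamiliar with the probability loss discounting framework referenced above, discounting models in this literature are most commonly written as a hyperbolic function of the odds against an outcome. The form below is offered only as an illustrative sketch of that traditional framework, not as the specific equation fit in this study, and the symbols (A, p, h) are generic placeholders rather than terms taken from the presentation:

\[
V = \frac{A}{1 + h\,\Theta}, \qquad \Theta = \frac{1 - p}{p}
\]

Here A is the magnitude of the potential loss (e.g., short-term harm to the client), p is the stated probability of that loss, \(\Theta\) is the odds against it occurring, V is the discounted (subjective) value of the probabilistic loss, and h is a free parameter indexing how steeply a given participant discounts the loss, with larger h values reflecting steeper discounting.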
The Ethical Use of Artificial Intelligence in Applied Behavior Analysis: Some Data & Conversation Starters
(Service Delivery)
ADRIENNE JENNINGS (Daemen University), David J. Cox (RethinkFirst; Endicott College)
Abstract: Artificial intelligence (AI) is increasingly a part of our everyday lives. Though much of the AI work in healthcare has occurred outside of applied behavior analysis (ABA), researchers within ABA have begun to demonstrate many different ways that AI might improve the delivery of ABA services. However, although AI offers many exciting advances, conversation about the ethical considerations involved in developing, building, and deploying AI technologies is absent from the literature. Further, though AI is already making its way into ABA, the extent to which behavior-analytic practitioners are familiar (and comfortable) with the use of AI in ABA is unknown. The purpose of this presentation is threefold: first, to describe how AI fits with existing ethical publications (e.g., the BACB Code of Ethics) and where our ethical literature is silent; second, to discuss considerations that can inform ethical guidelines and decision aids for developing and using AI in ABA service delivery; and last, to present data on current perceptions of, and comfort with, the use of AI in ABA. In total, we hope this presentation sparks proactive dialog about guidelines for the ethical use of AI in ABA before the field is required to have a reactionary conversation.