
Association for Behavior Analysis International

The Association for Behavior Analysis International® (ABAI) is a nonprofit membership organization with the mission to contribute to the well-being of society by developing, enhancing, and supporting the growth and vitality of the science of behavior analysis through research, education, and practice.


Ninth International Conference; Paris, France; 2017

Event Details



Symposium #124
CANCELED: Improving Behavioral Analysis Practices With Measures
Wednesday, November 15, 2017
5:30 PM–6:20 PM
Scene C, Niveau 0
Chair: Anne-Josee Piazza (Université du Québec à Montréal)
Abstract: Building a science of behavior implies the development of quantitative measures to analyze behavior. Measures lead to a better account of the relation between behavior and the environment, show learning over time, and support visual inspection with statistical inferences. Measurement also lends itself to evaluation: How good is it? Does it represent the phenomenon? Is it biased? Measures are a key component of science. The purpose of the current symposium is to present different kinds of measures to improve behavior analysis in experimental and clinical practice. The first presentation uses the Electronic Modified Standard Daily Chart (a precision teaching tool) to evaluate the efficiency of a program for a quantitative analysis course. The second presentation introduces a new measure of interobserver agreement intended to improve on existing tests, such as Cohen's kappa and percentage agreement, and to solve their respective issues. Finally, the third presentation illustrates single-case statistical analyses that quantify the relationship between behaviors and their contingencies, as well as changes in behavior over time.
Instruction Level: Intermediate
Keyword(s): interobserver agreement, measure, precision teaching, statistical analysis
CANCELED: Measuring Efficiency of a New Instructional Program for a Quantitative Analysis Course With Precision Teaching
(Applied Research)
ALEXANDRE GELLEN-KAMEL (Université du Québec à Montréal), Jacques Forget (Université du Québec à Montréal), Pier-Olivier Caron (Université du Québec à Montréal), Normand Giroux (Clinique Autisme & Asperger de Montréal), Gisela Regli (QcABA Canada; Cocon Développement), Richard Frenette (Cocon Développement)
Abstract: Precision teaching (PT; West & Young, 1992) is a daily, systematic, behavior-oriented method of evaluating tactics and curricula that can be combined with any instructional method (Lindsley, 1991). It provides reliable data that practitioners, teachers, and researchers can use to guide interventions and make evidence-based decisions (Cocon Développement, 2010). It also measures whether an instructional method is efficient (West & Young, 1992). The purpose of this communication is to present a new pedagogical program based on direct instruction (DI) and to measure its efficiency with PT. The program consists of 33 series of questions designed for undergraduate students in a quantitative analysis course in psychology. On average, each series is composed of fifty-five quantitative analysis exercises. The exercises are based on notions and vocabulary from the mandatory statistical analysis manual by Christensen (1986). The program is scheduled three times per week for eleven weeks. Individual performance is measured in responses per minute and analyzed with the Electronic Modified Standard Daily Chart (EMSDC; Cocon Développement, 2016). The EMSDC automatically plots results on a PT chart and reports statistics (performance, accuracy, celeration, global improvement index) measuring the efficiency of the instructional program (Giroux, n.d.; Legault, 2012).
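The celeration statistic named above follows the standard celeration chart convention: response frequencies are plotted on a logarithmic scale, and the slope of a straight line through those points gives the multiplicative change in frequency per week. As a sketch of that convention only (not the EMSDC's actual computation, which is not described here), the function name and least-squares approach below are assumptions:

```python
import math

def celeration(days, freqs):
    """Estimate weekly celeration: the factor by which response frequency
    multiplies per week, from a least-squares line fit to log10(frequency)
    against calendar days (standard celeration chart convention)."""
    logs = [math.log10(f) for f in freqs]
    n = len(days)
    mx = sum(days) / n
    my = sum(logs) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(days, logs)) / \
            sum((x - mx) ** 2 for x in days)
    return 10 ** (7 * slope)  # convert per-day log slope to a per-week multiplier

# A learner doubling weekly (10, 20, 40 responses/min) has a celeration of x2.0:
celeration([0, 7, 14], [10, 20, 40])
```

A celeration above 1.0 indicates accelerating performance; below 1.0, decelerating.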
CANCELED: Toward an Interobserver Agreement Measure Taking into Account Percentage Agreement, Randomness, and Disagreement
(Applied Research)
PIER-OLIVIER CARON (Université du Québec à Montréal)
Abstract: Interobserver agreement is a measure assessing the degree to which judges, raters, or observers classify objects in the same way. It is useful in behavior analysis as a measure of confidence: if observers do not agree on which behavior has occurred, then no science of behavior can be built. The current presentation proposes a new method to compute interobserver agreement. First, pitfalls of current measures are discussed: percentage agreement does not account for chance, Cohen's kappa can be significant even with very low agreement, and both neglect disagreements. Then the new method, which solves these issues, is introduced. Briefly, it weights the expected frequencies of the chi-squared test by a differential factor of the expected (target) agreement, aiming to detect when judges disagree more than would be expected by chance. Monte Carlo simulations are carried out by varying agreement and systematic disagreement. The simulations show that the new test detects 74% of cases containing systematic disagreements (where observers do not agree enough), whereas Cohen's kappa would have approved 84% of those cases as good enough. The implications of the new test for observer training, its current limits, and comparisons to other measures of interobserver agreement are discussed.
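The pitfalls discussed above can be reproduced directly. The sketch below implements the two standard measures the abstract criticizes (the proposed weighted chi-squared test is not specified in enough detail here to implement); the example data are hypothetical:

```python
from collections import Counter

def percent_agreement(a, b):
    """Proportion of observation intervals on which two observers
    record the same code."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance,
    (po - pe) / (1 - pe), where pe is the agreement expected from
    each observer's marginal coding rates."""
    n = len(a)
    po = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (po - pe) / (1 - pe)

# With a skewed base rate, 90% raw agreement can coexist with kappa near
# (or below) zero -- the chance-correction pitfall noted in the abstract:
obs_a = [1] * 18 + [0, 1]
obs_b = [1] * 18 + [1, 0]
percent_agreement(obs_a, obs_b)  # 0.9
cohens_kappa(obs_a, obs_b)       # negative
```

Note that neither measure distinguishes random from systematic disagreement, which is the gap the proposed test targets.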
CANCELED: Two Single-Case Statistical Procedures to Quantify the Relationship Between Behaviors and Contingencies and Change in Behavior Over Time in Human Subjects
(Applied Research)
PHILIPPE VALOIS (Université du Québec à Montréal), Pier-Olivier Caron (Université du Québec à Montréal), Jacques Forget (Université du Québec à Montréal)
Abstract: Observing and quantifying human behaviors and their relationship with the environment is a common task for behavior analysts, who typically use observational grids to measure the frequency of specific behaviors and visual inspection to assess behavioral change over time. This presentation pleads in favor of using statistical procedures to analyze behaviors. It aims to illustrate two statistical procedures that can be used with direct observation and functional analysis in natural contexts; both enhance the evaluation of the contingency of reinforcement and of behavioral progress over time. First, the chi-squared test and correspondence analysis quantify the relation between a subject's behaviors and their consequences or antecedent stimuli, and can be used in conjunction with an observational grid. This procedure reveals functional relationships within the contingency of reinforcement without experimentally manipulating the situation. Second, the Jacobson and Truax (1991) method for assessing clinically significant change can be applied to observations of the effect of a treatment over time. This procedure enhances analyses based on visual inspection by estimating the probability of significant change. Both procedures are illustrated using two case studies of children receiving interventions for low rates of on-task attention. Potential pitfalls and benefits of both procedures are discussed.
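Both procedures rest on well-known formulas, sketched below: the Pearson chi-squared statistic applied to a behavior-by-consequence contingency table, and the Jacobson & Truax reliable change index. The function names and example values are illustrative, not taken from the case studies:

```python
import math

def chi_squared(table):
    """Pearson chi-squared statistic for a contingency table (rows =
    behaviors, columns = consequences), given as a list of rows of
    observed frequencies. Expected counts come from the margins."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n
            stat += (obs - exp) ** 2 / exp
    return stat

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson & Truax (1991) reliable change index: the pre-to-post
    difference scaled by the standard error of the difference score.
    |RCI| > 1.96 suggests change beyond measurement error (alpha = .05)."""
    se = sd_pre * math.sqrt(1 - reliability)  # standard error of measurement
    s_diff = math.sqrt(2) * se                # SE of a difference score
    return (post - pre) / s_diff

# Hypothetical grid: rows = on-task / off-task, cols = attention / no attention.
# chi2 of about 8.33 (df = 1, critical value 3.84) suggests an association:
chi_squared([[20, 5], [10, 15]])

# Hypothetical on-task rates before (10) and after (30) intervention,
# with baseline SD 8 and measure reliability .80 -- RCI exceeds 1.96:
reliable_change_index(10, 30, 8, 0.8)
```

As the abstract notes, these statistics complement rather than replace visual inspection of the single-case record.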


Modified by Eddie Soh