Designing a Progress Monitoring System to Improve Decision Making With Morningside’s Generative Instruction Model
Saturday, May 29, 2021
5:00 PM–6:50 PM
Online
Area: EDC/OBM; Domain: Translational
Chair: Carl V. Binder (The Performance Thinking Network, LLC)
Discussant: Carl V. Binder (The Performance Thinking Network, LLC)
Abstract: Many schools, learning centers, and agencies find themselves awash in learner data, much of which is never analyzed or used to make immediate decisions about a learner’s program. The Morningside Model of Generative Instruction (MMGI) encourages efficient data collection by emphasizing that the primary role of data should be to encourage timely and effective decision making. This symposium will describe how MMGI designers carefully select the data to be collected, and how those data are organized to optimize decision making by learners, teachers, and school administrators. First, Austin Seabert, a consultant at The Performance Thinking Network, will describe the design and implementation of a revised progress monitoring system to facilitate quicker and more effective decisions by classroom teachers. Second, Morningside Academy’s School Psychologist and Vice-Principal, Julian Gire, will describe how MMGI’s placement testing and new progress monitoring assessment system were subsequently modified in response to the COVID-19 pandemic. Next, Morningside Academy’s Associate Director, Andrew Kieta, will detail the development of a teacher coaching system informed by the data collected and analyzed via the new, modified progress monitoring system described in the first two talks. Lastly, Morningside Academy faculty member Bailee Scheuffele will describe the development of a new Curriculum-Based Assessment (CBA) within the new progress monitoring system to assess emerging sentence-writing repertoires in order to make more informed curriculum decisions.
Instruction Level: Intermediate
Keyword(s): Assessment, Data-based, Decision-Making, Precision Teaching
Designing a Progress Monitoring System to Improve Teacher Decision Making With Morningside’s Generative Instruction Model
(Service Delivery)
AUSTIN SEABERT (The Performance Thinking Network), Julian Gire (Morningside Academy), Andrew Robert Kieta (Morningside Academy)
Abstract: The Morningside Model of Generative Instruction (MMGI) features a multi-tiered assessment system. At the Micro level, Morningside teachers use Precision Teaching to collect daily measurements on several academic pinpoints. The Meta level consists of placement tests and progress monitoring tests that validate data at the Micro level, diagnose potential obstacles to desired growth, and predict performance on end-of-year tests. Those end-of-year assessments make up the Macro level, where standardized, norm-referenced tests are used to evaluate student growth across an entire school year. Implementing this system requires timely assessment administration, clear communication of results to all relevant individuals, and, most importantly, effective instructional decision making based on assessment data. This has proven particularly challenging at the Meta level, prompting a one-year revision project. This presentation will describe the process improvement methodology used to create the new system, including: defining the assessment problem, outlining the features and capabilities of an ideal assessment system, identifying resource limitations, designing the system, testing and rollout, and gathering feedback. Data will be presented that show how and why redesign decisions were made, as well as their effects in improving MMGI’s assessment system.
Assessment Systems and COVID-19: Rapidly Adapting Morningside’s Measurement Tools to Ensure Effective Decision Making
(Service Delivery)
JULIAN GIRE (Morningside Academy), Andrew Robert Kieta (Morningside Academy)
Abstract: The onset of the COVID-19 pandemic forced schools to figure out how to adequately serve their students in the online environment. While the disruption to instruction has been well documented, the challenges of remote assessment have received less attention. As the pandemic continues, schools must work not only to maintain students’ academic achievement but also to further it. However, how can student progress be assessed in the online learning environment? The Morningside Model of Generative Instruction details a multi-tiered system of assessment, comprising dozens of assessments across different subject matters, which guide the placement of students and ongoing progress monitoring efforts. The most pressing problem is that all of these assessments and their associated protocols were designed to be administered in person, not online. This talk will outline the process of designing, implementing, and revising Morningside’s placement testing process and recently redesigned progress monitoring system to ensure the collection of the best data possible. The speaker will present lessons learned, as well as plans for future use of the online assessments once in-person schooling resumes.
The Classroom as the Unit of Analysis: Using Morningside’s Progress Monitoring to Inform Coaching Decisions
(Service Delivery)
ANDREW ROBERT KIETA (Morningside Academy), Julian Gire (Morningside Academy)
Abstract: Active coaching of classroom teachers by instructional experts is a fundamental component of The Morningside Model of Generative Instruction. Implementing a robust coaching model in a school setting can be challenging, as coaches rarely have enough time to dedicate to all teachers. Best practices indicate that coaching priorities should be driven by both student and teacher performance data. Morningside coaches have long been responsive to teacher performance data, which are collected and analyzed via a series of scorecards combining direct behavioral measures and indirect rubric measures. However, coaches have typically been too responsive to certain types of student data, specifically micro-level data: daily measurements of specific pinpoints recorded on Standard Celeration Charts. Analyzing behavior at the level of the individual student results in more frequent 1:1 interventions, which can decrease teacher efficiency and shift the unit of analysis from the classroom to the individual student. This presentation will describe how Morningside coaches examine data produced by the recently redesigned progress monitoring system to make better coaching decisions, allocate time more effectively, and keep the classroom firmly established as the unit of analysis.
Morningside Model of Generative Instruction’s Multi-Tiered Assessment: Adding and Expanding Progress Monitoring of Written Expression
(Service Delivery)
BAILEE SCHEUFFELE (Morningside Academy), Julian Gire (Morningside Academy)
Abstract: As part of the Morningside Model of Generative Instruction, Morningside Academy utilizes a multi-level model of assessment: Macro, Placement, Meta, and Micro. Meta-level assessment offers an important opportunity for teachers to evaluate student achievement, both over time and against the curricula. In the area of written expression, progress is assessed at the Meta level through a Curriculum-Based Assessment (CBA) in which the learner is given a set time to plan and then write a genre-specific paragraph from a visual prompt. But what about entry-level students who are still learning to write at the sentence level? This presentation will examine the utility of a refined measurement modality that is more sensitive to the instruction of the targeted component skills. Current data show that while scores on the paragraph-writing CBA are highly variable throughout the assessment period, rubric scores measuring proficiency in correct sentence structure increase over that same period, and the variance between total words written (TWW) and correct writing sequences (CWS) shrinks.