IMSH Delivers Sessions
(Research Abstract Professor Rounds: Group 1) Program Evaluation for Continuous Improvement: Lessons from New Simulation Program (1090-004247)
Start time: Tuesday, January 26, 2021, 2:00 PM
End time: Tuesday, January 26, 2021, 3:00 PM
Session Type: Research Abstracts (Completed Studies)
Cost: $0.00
Content Category: Researcher
Hypothesis:
The purpose of this study was to evaluate a new and developing simulation program to support continuous improvement. Program evaluation is integral to improvement, yet it is difficult to embed evaluation into nascent activities, even though they often need the most targeted development.
Methods:
As a new medical simulation program was developed, a program evaluation instrument was designed to capture real-time feedback from both learners and facilitators. After a paper-based pilot, a single evaluation for all participants was designed in Qualtrics using branch logic. The evaluation was designed to identify immediate needs and inform future curriculum design.
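Branch logic in Qualtrics is configured in the survey editor rather than in code, but the routing idea behind a single instrument serving both audiences can be sketched as follows. This is an illustrative sketch only; the question identifiers below are hypothetical names derived from the items reported in the Results, not the actual instrument.

```python
# Hypothetical sketch of the branch-logic concept: one evaluation, with the
# question block selected by respondent role. Item names are invented here.

LEARNER_ITEMS = [
    "overall_experience",      # rated excellent/good
    "level_of_instruction",
    "value",
    "offer_again",             # should the activity be offered again?
    "technical_issues",
]

FACILITATOR_ITEMS = [
    "learner_engagement",
    "equipment_functionality",
    "equipment_availability",
    "room_setup",
    "technical_issues",
    "issue_affected_case",     # did the technical issue affect the case?
]

def branch(role: str) -> list[str]:
    """Return the question block shown to a respondent based on their role."""
    if role == "learner":
        return LEARNER_ITEMS
    if role == "facilitator":
        return FACILITATOR_ITEMS
    raise ValueError(f"unknown role: {role}")
```

A single instrument with role-based branching keeps administration simple while still collecting audience-specific feedback, which is what allows one web-based evaluation to cover both learners and facilitators.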
Results:
The web-based program evaluation instrument was implemented in January 2019, with over 1100 responses recorded. Learners rated as excellent or good their overall experience (95%), level of instruction (97%), and value (90%). Most learners (93%) felt their activity should be offered in the curriculum again. Technical issues were identified by 28% of learners. Faculty reported consistently high levels of learner engagement (99%), and faculty expectations were met for equipment functionality (88%), equipment availability (86%), and room setup (88%). While 35% of faculty identified technical issues, only 15% reported that the issue affected the case.
Conclusions:
Embedding an inclusive program evaluation early in the development of a new simulation program allowed for the resolution of immediate needs, responsive improvements, and accurate reporting. As the program continues to grow, the evaluation data will inform curriculum design, and additional data will be collected. Lessons learned from developing the evaluation include the need to consider a wider range of stakeholders, measure additional learner characteristics, and differentiate among activities. Developing and refining a program evaluation for multiple users and uses of simulation is feasible, and continuous improvement is necessary for the evaluation itself, not just the program.