IMSH Delivers Sessions
Using Digital Biomarkers to Measure Fluctuations in Instructors’ Cognitive Load Between High-Fidelity Simulations and Debriefing Sessions (1090-003732) (Research Abstract Professor Rounds: Group 6)
Start time: Friday, January 29, 2021, 11:30 AM
End time: Friday, January 29, 2021, 12:30 PM
Session Type: Research Abstracts (Completed Studies)
Cost: $0.00
Content Category: Researcher
Hypothesis:
Facilitating simulation training is a complex activity that requires trained instructors to perform multiple tasks throughout the session.(1) These activities impose substantial cognitive load on instructors, and if the mental demand exceeds an instructor's cognitive capacity, it may degrade performance during scenario observation and subsequent debriefing. Although previous research has highlighted the critical role that instructors' cognitive load plays in conducting simulation-based training, to date no study has investigated objective measures of instructor cognitive load.(1) The aim of this study was to analyze cognitive load fluctuations among simulation instructors during high-fidelity interprofessional training sessions.
Methods:
Data were collected during a team training program involving residents, nurses, and physician assistants. Each 2-hour session was composed of 5 phases: prebrief, scenario 1, debriefing 1, scenario 2, and debriefing 2. Each scenario featured one of six possible emergency conditions: ventricular fibrillation, pulseless electrical activity, hyperkalemia, tension pneumothorax, opioid overdose, and hemorrhagic shock. Instructors (subjects) were simulation fellows. Each wore a chest strap with a heart rate sensor that detected continuous electrical heart signals, calculated interbeat intervals in milliseconds for heart rate variability (HRV) analysis, and transmitted data to a smartwatch via Bluetooth. The validated low-frequency/high-frequency (LF/HF) ratio was used as a proxy for cognitive load.(2) The LF/HF ratio was calculated over a 1-minute time window, allowing comparison across the different phases of the program. Friedman's two-way analysis of variance by ranks was used to compare phases.
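The LF/HF computation described above can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: it assumes the sensor provides a list of interbeat (RR) intervals in milliseconds, resamples them into an evenly spaced tachogram, estimates the power spectrum with Welch's method, and integrates the conventional LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) bands. The function name and the 4 Hz resampling rate are illustrative choices, not details from the abstract.

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d
from scipy.integrate import trapezoid

def lf_hf_ratio(rr_ms, fs=4.0):
    """Estimate the LF/HF ratio from interbeat (RR) intervals given in ms.

    Illustrative sketch: resample the RR series to an evenly spaced
    tachogram at `fs` Hz, estimate its power spectral density with
    Welch's method, and integrate the standard LF and HF bands.
    """
    rr_s = np.asarray(rr_ms, dtype=float) / 1000.0
    beat_times = np.cumsum(rr_s)                      # beat times in seconds
    # Resample the unevenly sampled RR series to a uniform grid.
    t_even = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    rr_even = interp1d(beat_times, rr_s, kind="cubic")(t_even)
    # Welch PSD of the mean-removed tachogram.
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs,
                   nperseg=min(256, len(rr_even)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = trapezoid(pxx[lf_band], f[lf_band])
    hf = trapezoid(pxx[hf_band], f[hf_band])
    return lf / hf
```

In practice the abstract's 1-minute windows would correspond to applying this function to successive 60-second segments of the RR series.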
Results:
Five fellows debriefed 15 sessions. Eleven sessions had 1 debriefer and 4 had 2 debriefers (co-debriefing), totaling 19 measures over the 5 phases of the session. The LF/HF ratio, expressed as median (1st-3rd quartile), in each phase was: prebrief = 3.7 (2.8-6.1); scenario 1 = 4.5 (2.8-6.1); debriefing 1 = 3.5 (2.6-4.9); scenario 2 = 4.1 (3.4-5.2); debriefing 2 = 3.0 (2.2-4.4). There was a statistically significant relationship between simulation phase and LF/HF ratio (p = 0.001). Post-hoc pairwise comparisons showed that the LF/HF ratio in debriefing 2 was lower than in scenario 1 (p = 0.001) and scenario 2 (p = 0.048); other pairwise comparisons were not statistically significant. Grouped analysis (prebrief vs scenario vs debriefing) showed that the LF/HF ratio was lowest during the debriefing phases: 3.1 (2.6-4.9), compared to prebrief: 3.7 (2.8-6.1), p = 0.028, and scenario: 4.3 (3.0-5.5), p = 0.017. The difference between prebrief and scenario was not statistically significant.
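The repeated-measures analysis reported above can be sketched with `scipy.stats.friedmanchisquare`, which implements Friedman's two-way analysis of variance by ranks. The values below are illustrative only, not the study's data: each row is a hypothetical debriefer measure and each column a session phase.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Illustrative LF/HF values (NOT the study's data):
# one row per debriefer measure, one column per session phase.
lf_hf = np.array([
    # prebrief, scenario 1, debrief 1, scenario 2, debrief 2
    [3.7, 4.5, 3.5, 4.1, 3.0],
    [2.8, 5.1, 2.6, 3.4, 2.2],
    [6.1, 6.0, 4.9, 5.2, 4.4],
    [3.5, 4.8, 3.3, 4.0, 2.9],
])

# Friedman's test takes one sample per condition (phase),
# with matched observations across conditions.
stat, p = friedmanchisquare(*lf_hf.T)
print(f"chi-square = {stat:.2f}, p = {p:.4f}")
```

The post-hoc pairwise comparisons in the abstract would then be run separately (e.g., with a multiple-comparison correction), which Friedman's omnibus test itself does not provide.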
Conclusions:
This study used HRV to investigate instructor cognitive load across the phases of simulation-based training. Cognitive load during all phases was above the normal range (i.e., LF/HF ratio 1.5-2.0). We found that instructors' cognitive load was higher during the scenarios than during debriefing, and that it tended to decrease further during the second debriefing, although not to baseline levels. By identifying the phases of simulation that impose the highest cognitive load on instructors, supporting strategies, such as the use of co-debriefers or cognitive aids, can be deployed to avoid cognitive overload and its potential negative impact on performance.