Improving the management of clinical alarms is a patient safety imperative at virtually all healthcare facilities—and a particularly challenging one at that. To better inform efforts to address this challenge, a collaborative team at the Children's Hospital of Philadelphia (CHOP; Philadelphia, PA) conducted a study aimed at developing a better scientific understanding of alarm fatigue[1]—a commonly cited cause of alarm hazards. CHOP was selected as a finalist for the 2014 Health Devices Achievement Award for this innovative project.
The Challenge
The team at CHOP determined that developing effective interventions to reduce alarm fatigue required a better understanding of the magnitude of the problem. Led by pediatrician and patient safety researcher Chris Bonafide, MD, MSCE, the team sought to design a study that could help it objectively assess alarm fatigue. In particular, the team sought to (1) measure nurse response times to physiologic monitor alarms and (2) determine the rate of false alarms that nurses were experiencing. The team reasoned that with data on staff response times and false alarm rates, it could measure alarm fatigue by comparing alarm response times for nurses exposed to very few false alarms with those for nurses exposed to many false alarms.
The Solution
The team determined that a video-based approach would be the best way to acquire the data needed, in contrast to traditional observational methods in which a researcher might shadow a nurse during a shift. The use of video captured by a set of well-placed and synchronized cameras would allow a researcher to simultaneously view multiple areas of interest, specifically the monitor display (to observe waveforms and alarm indicators), the patient (to assess the patient's activity and condition at that time), and the nurse's response (to determine when it occurred and what it entailed). Furthermore, the recordings could be reviewed by multiple researchers and at different times.
Before the study could be conducted, the team first had to work through a number of challenges. These included:
Obtaining permissions and approvals. The team worked closely with the institutional review board (IRB) to assess the feasibility of the study and to secure the necessary protections. Written informed consent was obtained from each nurse and from the parent or guardian of each patient who would be video-recorded. In addition, a Certificate of Confidentiality from the National Institutes of Health (NIH) was obtained to further protect staff and families.
Bonafide remarked that, somewhat to his surprise, there were very few barriers to getting this work started. He found that the IRB was very supportive of the project: "They knew that there was a critical need to get data on false alarm rates." The nurses likewise saw the need for such data, and they were comfortable that the team had put appropriate protections in place. "The nurses needed to trust us," Bonafide explained, so the team made it clear to all involved that the videos would be used only for the purposes of the research study—nurse managers would not have access to a video to check on a nurse's performance, for example. Also, nurses could request at any time that recording be stopped or that a video be deleted if they became uncomfortable with the process.
Selecting and positioning equipment. The team had to assess and select cameras, camera-mounting options, and data storage and display devices. Selection criteria were established, and various camera and mounting options were trialed until the team identified a setup that allowed researchers to capture the views desired without interfering with patient care. From a technology selection standpoint, one of the most difficult aspects of the study "was coming up with a system that would be able to handle the amount of data that was generated," noted Miriam Zander, a research assistant who served as the team's video specialist.
Camera placement was another key consideration. The team mounted cameras such that they could simultaneously provide a wide view of the patient room, a close view of the patient (ideally an overhead view), a full view of the monitor screen and ventilator display (if in use), and a view of the caregiver when responding to alarms (including views of any windows or doors through which staff could visually assess the patient or the monitor while outside the room).
To facilitate review, video-editing software was used to display all of the individual camera views side-by-side in a single window. In addition, clinical alarm management software was used to generate a time-stamped list of alarms that occurred during the video session so that researchers could skip to periods of interest in the video.
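As a rough illustration of how such a time-stamped alarm list can speed up review, the Python sketch below converts alarm timestamps into offsets into a synchronized video session so a reviewer can jump to the periods of interest. The alarm entries, field names, and session start time are hypothetical; the actual export format of the alarm management software may differ.

```python
from datetime import datetime

# Hypothetical alarm-list entries exported from the alarm management software;
# the timestamps, labels, and field names are illustrative only.
alarms = [
    {"time": "2014-03-05 14:02:17", "label": "SpO2 LOW"},
    {"time": "2014-03-05 14:41:03", "label": "HR HIGH"},
]

# Start time of the synchronized video session (assumed to be known from the
# recording system).
SESSION_START = datetime(2014, 3, 5, 14, 0, 0)

def video_offset_seconds(alarm_time: str, start: datetime = SESSION_START) -> float:
    """Convert an alarm timestamp into an offset (in seconds) into the video,
    so a reviewer can skip directly to that period of interest."""
    t = datetime.strptime(alarm_time, "%Y-%m-%d %H:%M:%S")
    return (t - start).total_seconds()

for alarm in alarms:
    offset = video_offset_seconds(alarm["time"])
    print(f"{alarm['label']}: skip to {offset:.0f} s into the session")
```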
Establishing a method for analyzing the data. To guide researchers in characterizing alarms as either true or false, the team established the following definitions for the purposes of this study: The researchers characterized an alarm as "true," or clinically relevant, if it was both valid (the patient's physiologic status was correctly identified) and actionable (clinical intervention or consultation was warranted, regardless of whether staff correctly interpreted and responded to the alarm). Otherwise, the alarm was characterized as "false," or not clinically relevant. The team then analyzed the data to determine whether there was a measurable association between (1) the number of false alarms that the nurse was exposed to over the preceding two hours and (2) the nurse's response time to subsequent alarms.
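To make the shape of this analysis concrete, the Python sketch below labels each alarm with the number of false alarms that occurred in the preceding two hours and then compares response times for low versus high exposure. This is a minimal illustration, not the team's actual statistical analysis; the data values, field names, and the two-group cutoff are all assumptions made for the example.

```python
from datetime import timedelta
import pandas as pd

# Illustrative data only: each row is one alarm, whether reviewers judged it
# false (not clinically relevant), and the observed nurse response time.
alarms = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2014-03-05 09:10", "2014-03-05 09:25", "2014-03-05 10:05",
        "2014-03-05 10:40", "2014-03-05 11:02",
    ]),
    "is_false": [True, True, False, True, False],
    "response_time_s": [40, 55, 70, 90, 120],
}).sort_values("timestamp")

WINDOW = timedelta(hours=2)

def false_alarms_in_prior_window(df: pd.DataFrame, idx) -> int:
    """Count the false alarms that activated in the two hours before alarm idx."""
    t = df.loc[idx, "timestamp"]
    prior = df[(df["timestamp"] >= t - WINDOW) & (df["timestamp"] < t)]
    return int(prior["is_false"].sum())

alarms["prior_false_count"] = [
    false_alarms_in_prior_window(alarms, i) for i in alarms.index
]

# Compare response times for alarms preceded by few vs. many false alarms
# (the cutoff of 1 is arbitrary and chosen only for this example).
alarms["exposure"] = pd.cut(
    alarms["prior_false_count"], bins=[-1, 1, float("inf")],
    labels=["few prior false alarms", "many prior false alarms"],
)
print(alarms.groupby("exposure", observed=True)["response_time_s"].median())
```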
The team performed 40 video sessions for a total of 210 hours of observation. The sessions covered 20 patients on general wards, where monitoring was used because of the risk of cardiovascular or respiratory deterioration, as well as 20 patients in the pediatric ICU. Analysis of 4,962 alarms yielded the following results:
Response time increased as the number of false alarms increased. That is, for cases in which a large number of false alarms had activated for a patient over the preceding two hours, the nurse's subsequent response time was found to be longer than for cases in which only a few false alarms had activated over the preceding two hours.
The team speculates that the differences in response time likely represent alarm fatigue. While noting that other variables would need to be considered and that more advanced studies would need to be conducted in order to draw firm conclusions, Bonafide nevertheless described this study as a good first step, remarking that it showed "a clear relationship between the number of false alarms and the response time."
Lessons Learned
The multidisciplinary CHOP team included clinical engineers, physicians, nurses, researchers, and administrators. Working together, the team designed and conducted a rigorous study to help increase understanding of alarm fatigue and to quantify its effects. "We saw this study as a way to get energy behind fixing the problem," noted Bonafide.
Studies such as these can help drive and inform interventions to reduce alarm burden and alarm fatigue, thereby improving system and staff performance and ultimately improving the quality of patient care. For example, Bonafide expressed the desire to bring these results into the daily safety huddles that occur in the various care areas, noting that the information can facilitate discussions about whether excessive numbers of clinically insignificant alarms are occurring and what can be done about them.
The researchers report that as far as they know, this study was the first to use nurse response time, captured using video technology, as a proxy for alarm fatigue. The team reports that the video methods it used are relatively low cost and can be easily adapted by other institutions that wish to better understand alarm fatigue and its implications for patient safety. Furthermore, as Zander observed, "the method could be easily adjusted for other patient care applications," adding that once all the kinks were worked out, the method was "fairly straightforward and simple."
For additional details about this initiative, see: Bonafide CP, Zander M, Graham CS, et al. Video methods for evaluating physiologic monitor alarms and alarm responses. Biomed Instrum Technol 2014 May-Jun;48(3):220-30.
Congratulations and thanks to lead researcher Chris Bonafide and the rest of the team at the Children's Hospital of Philadelphia for submitting this application for the Health Devices Achievement Award. Other participants from various departments included Margaret Fortino, Ron Keren, Richard Lin, Russell Localio, Vinay Nadkarni, Andrew Rich, Kathryn Roberts, Whitney Rock, Christian Sarkis Graham, Christine Paine, and Miriam Zander.
_________________________________________________________________
[1] Alarm fatigue describes circumstances in which healthcare workers become overwhelmed by, distracted by, or desensitized to the numbers of alarms that activate. It can lead to delayed responses to alarms or to alarms being missed altogether. This topic is discussed extensively in ECRI Institute's Alarm Safety Handbook, which addresses the full range of factors that can lead to alarm hazards and offers strategies for reducing the risks.