It's IT's worst nightmare.
Whether the call comes in the middle of the night or the height of the afternoon, the consequences are the same: all systems are down.
"It's a message we all dread, and most are not prepared for," said Alice Lee, vice president of I.S. Clinical Systems at Boston's Beth Israel Deaconess Medical Center.
She told a HIMSS disaster recovery audience about an incident that occurred last November, when BIDMC - one of the nation's most wired hospitals - suddenly began losing its electronic clinical, administrative, and financial tools.
Early on a Wednesday afternoon, her support center began receiving complaint calls from clinicians. Response was slow. Applications were at times unavailable. Feet shifted on mahogany row.
Access to applications was restored late that night, so everyone slept well.
Then, Thursday morning as users logged on, the network again began to experience intermittent outages, a cycle that repeated all day. No one slept well that night.
By Friday, with continued degraded performance, the hospital's emergency preparedness command center was activated. At 4 p.m., the chief operating officer ordered a system-wide shutdown.
Operationally, an internal disaster was officially called, and all clinical areas were forced to perform digital heresy: They moved to paper.
To ensure no user tried the network during debugging and testing, a "hard down" was enforced from Friday through Sunday. Departments implemented their own downtime plans. Worst-case scenarios were contemplated.
More than 120 hospital applications, including PACS, were no longer online. Physicians reverted to paper orders, $3 million in daily patient billing ceased, and no one could access e-mail.
"We were forced to use film rather than digital images in radiology, and drug interaction checking had to be done manually, to name just two inconveniences," Lee said.
Other problems abounded. Runners - including the hospital CEO - delivered specimens, tests, and supplies. Pathologists phoned lab results, and a manual census was updated every few hours. At one point, the COO dashed to the store to buy copier paper.
"The whole experience was like stepping back 20 years," Lee said.
Following a tense weekend, system stability was achieved, and on Monday the COO issued an all-clear. Total downtime: five days.
"Although care had been delayed, review indicated clinical outcomes were not affected by the outage," Lee said.
She offered some fresh advice:
- Be sure to plan for extended outages.
- Be ready to accommodate staff overnight.
- Designate a dedicated IT liaison to communicate with hospital leadership.
- Keep disaster plans and contact lists current.
- Prepare for media coverage.
"We actually called the media ourselves to let them know what was happening," Lee said.
Oh, yes, and be sure to keep extra copier paper handy.