As its name suggests, the Hospital Cloud Forum put on by the Information Management Network (IMN) focused primarily on the benefits of the cloud and cloud-based systems and services for healthcare organizations and providers. While many disagree about what is truly cloud (does it or does it not include application service provider (ASP) models?), all agree that a system going offline requires practices and their providers to have a plan in place for dealing with such situations.
During an afternoon panel focusing on the use of cloud-based services to support pay-for-performance, Steven Waldren, Senior Health Care IT Strategist for the American Academy of Family Physicians (AAFP) and Co-founder of New Health Networks, emphasized that the gap in network reliability (i.e., uptime and downtime) between client-side and hosted solutions was narrowing:
Definitely, since we started doing the ASP model, there weren’t a lot of options for our rural docs. That’s definitely improved as the amount of connectivity has improved. Typically, we ask them to ask their colleagues who have client servers, “How many times did their server go down? How many times did you do an upgrade of your server and it wasn’t available? How many times did a hard disk go down?”
Despite increased network reliability, no cloud-based system provides 100% uptime. Having worked with family physicians adopting cloud-based EHR systems through AAFP, Waldren has identified four recommendations for providers using these systems:
No fewer than two connections: "What we usually see is that you need at least two forms of connectivity. We prefer them to be very different," advises Waldren.
Local cache of recent care information: Although cloud-based EHR solutions are hosted externally, they need to store at least some portion of recent health information so that providers can carry on with their patient care:
The other thing is to look at what we used to call a headless server. In essence what it was is could you get an abstract of the data that you need to deliver the majority of your care on your patient? This is the CCR/CCD data. Can you have a local cache that gets synced with the cloud such that if it does go down, you still have their problem list or med list, or the last 30 days’ worth of labs, so that you can at least take care of what’s going on?
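The headless-server idea Waldren describes can be sketched in code. The sketch below is a minimal illustration, not any vendor's API: the `LocalChartCache` class, `CloudUnavailableError`, and the record fields (`problems`, `medications`, `labs`) are all assumed names. It keeps a CCR/CCD-style subset (problem list, med list, last 30 days of labs) synced locally and falls back to that snapshot when the cloud is unreachable.

```python
import time


class CloudUnavailableError(Exception):
    """Raised when the hosted EHR cannot be reached (hypothetical)."""


class LocalChartCache:
    """Local cache of a CCR/CCD-style care summary, synced from the cloud EHR."""

    def __init__(self, lab_window_days=30):
        self.lab_window_days = lab_window_days
        self._cache = {}  # patient_id -> summary dict

    def sync(self, patient_id, cloud_record, now=None):
        """Copy only the downtime-critical subset of the cloud record locally."""
        now = now if now is not None else time.time()
        cutoff = now - self.lab_window_days * 86400
        self._cache[patient_id] = {
            "problems": list(cloud_record["problems"]),
            "medications": list(cloud_record["medications"]),
            # keep only labs from the last `lab_window_days` days
            "labs": [lab for lab in cloud_record["labs"] if lab["ts"] >= cutoff],
        }

    def get(self, patient_id, fetch_from_cloud):
        """Prefer the live cloud record; fall back to the local snapshot on an outage."""
        try:
            return fetch_from_cloud(patient_id)
        except CloudUnavailableError:
            return self._cache[patient_id]
```

The design choice to cache an abstract rather than the full chart mirrors Waldren's point: the goal is not a full replica, only enough (problems, meds, recent labs) to "take care of what's going on" until the connection returns.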
Ability to view today’s activities, information: At the very least, providers need information for the most important patients — that is, those they are set to see that day:
The other is you have the ability to look at your processes for that day. In the morning, you have a list of patients who are probably going to come in unless you’re a full, open-access shop . . . You can at least pull the data down for those patients so that you have a copy of their records locally stored in case the server goes down.
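The morning-prefetch idea can be sketched the same way. This is a minimal illustration under assumed names (`SchedulePrefetcher` and the `fetch_record` callable are hypothetical, not part of any EHR product): while the connection is up, records for the day's scheduled patients are pulled down and stored locally, so they remain available if the server later goes down.

```python
class SchedulePrefetcher:
    """Morning prefetch: keep local copies of records for today's scheduled patients."""

    def __init__(self, fetch_record):
        self._fetch = fetch_record  # callable: patient_id -> record dict
        self.local_copies = {}

    def prefetch(self, scheduled_patient_ids):
        """Pull each scheduled patient's record while the cloud is reachable."""
        for pid in scheduled_patient_ids:
            try:
                self.local_copies[pid] = self._fetch(pid)
            except ConnectionError:
                pass  # cloud unreachable; keep whatever copy we already have

    def get(self, patient_id):
        """Serve a locally stored record during an outage, if it was prefetched."""
        return self.local_copies.get(patient_id)
```

As Waldren notes, this covers scheduled visits but not a fully open-access practice, where walk-ins cannot be prefetched; those patients would have to rely on the broader local cache instead.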
Development of a contingency plan: As required by HIPAA, covered entities such as physicians need to have a backup plan in place. According to Waldren, this is the most important recommendation, and the plan should be developed before any unanticipated downtime occurs:
But the biggest thing that we talk with our members about is having a contingency plan. What happens for small providers is if they haven’t really thought about it and all of a sudden it goes down, it’s a disaster. They don’t know: What am I supposed to do? Who’s supposed to do what? What do I do with the data? What do I do to get it back up? We have them work through that particular process of figuring out what’s going to be your contingency plan when the data goes down.
Considering that no system is perfect and that internal networks can also fail, these points apply no matter which EHR system a provider is using or considering adopting.
