Lessons Learned from Hurricane Sandy

For disaster recovery (DR) service specialists, disasters like Hurricane Sandy, while devastating to those affected, serve as real-life research laboratories. Disasters reveal just how capable their hardware, software, networks and disaster preparedness plans are at keeping vital information and applications up and running.

In a Q&A session, Dominick Paul, National Vice President, Strategic Solutions, at SunGard Availability Services, offered insights into lessons learned and business practices gleaned from last October’s disastrous hurricane, which left an estimated $65.6 billion in losses from damage and business interruption in its wake.

Question: What were the three major recovery challenges your customers faced during Hurricane Sandy?

Dominick Paul: First, let me say that with every major disaster I’ve encountered, one thing stays the same: DR isn’t treated with the appropriate level of attention in most organizations. Much like insurance, it’s seen as an expense until it’s needed. It’s viewed as something that doesn’t occur often and that consumes time and resources that could be better spent differentiating the business.

With that backdrop, the major challenges our customers encountered involved:

  1. The failure to integrate their people into the recovery. IT departments are designed to be static; they operate under a tight regimen with as few people as possible, and a disaster blows that system up. Even the assumption that IT staff can work from home breaks down when, at home, they have no power and can’t get anywhere because their usual routes are out of service or their vehicles have run out of gas.
  2. The realization that they haven’t updated their DR plan in ages, so a serious gap has opened between what they need and what they have, often with drastic implications. They then scramble to see what their DR provider can do to help, but by then it may be too late to solve every issue and still meet recovery time objectives.
  3. The discovery that, if they relied on backup tapes, not all their vital data could be recovered in a timely manner. Tapes must be shipped to the recovery site, then staged and played back, and each step takes time. In addition, tapes rarely capture data right up to the moment the disaster strikes, so companies can lose recent data that might be worth millions of dollars. If that lost information involves critical transactions, can they be retrieved or reprocessed? (A rough sketch of this arithmetic follows the list.)
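
To make the tape trade-off concrete, here is a minimal sketch in Python of the arithmetic Paul describes. Every figure in it is an invented assumption for illustration, not data from the interview: the backup time, the outage time and the shipping, staging and restore durations are all hypothetical.

    from datetime import datetime, timedelta

    # All figures are hypothetical assumptions for illustration only.
    last_tape_backup = datetime(2012, 10, 28, 23, 0)  # assumed nightly backup
    disaster_strikes = datetime(2012, 10, 29, 20, 0)  # assumed moment of outage

    ship_hours = 24     # courier the tapes to the recovery site
    stage_hours = 6     # mount and catalog the tapes
    restore_hours = 18  # play the tapes back onto recovery hardware

    # Anything written after the last completed backup cannot come back from tape.
    data_loss_window = disaster_strikes - last_tape_backup

    # The earliest possible recovery is the sum of shipping, staging and restore.
    recovery_time = timedelta(hours=ship_hours + stage_hours + restore_hours)

    print(f"Worst-case data loss (effective RPO): {data_loss_window}")
    print(f"Earliest tape recovery (effective RTO): {recovery_time}")

Under these assumed numbers, the company loses 21 hours of data and waits two full days before systems are back, which is precisely the gap between what tape provides and what a fully duplicated, active DR system provides.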

Q: What were the root causes of those challenges?

DP: Inadequate preparation and not thinking through all the likely scenarios a severe storm could create. One company I knew had a data center in Manhattan on a secure floor, so its servers weren’t affected; when the building’s power went out, the generators kicked in. But since that part of Manhattan was flooded, its IT staff couldn’t get there. Many companies failed to realize that their IT staff simply couldn’t get to the office: their routes were flooded or blocked, or their vehicles were out of gas while the lines at gas stations ran for blocks and supplies were limited. We planned for that and stationed a gas tanker at our Carlstadt, N.J., center to serve employees who worked from the center.

Q: What were some of the surprises that customers didn't see coming during Hurricane Sandy?

DP: Some customers that relied on tape and hadn’t built a fully duplicated, active DR system found they couldn’t retrieve all their data. Others found that their system configurations had changed while their DR plans went unrevised, so the plans were badly out of date. And many discovered their IT staff working from home didn’t have power and couldn’t leave their homes or find a passable route to the office.

Q: What were the most common best practices that you recommended your customers implement post-Hurricane Sandy?

DP: We always urge our customers to alert us as soon as they realize a potentially destructive storm is heading their way. That way, we can prepare their contracts and help update their DR plans before the storm arrives. We can help determine where their people can go and conduct briefings so they better understand their situation. This beats relying on a hope and a prayer. We also recommend that they not fall into the “Chicken Little” trap and overextend by suddenly wanting to spend millions to do everything. Rather, we recommend they get the business better aligned against its risks and consider partnering with an outside expert who can handle the dirty laundry.

DR isn’t just about the technology and the people. It’s a lifecycle, and without following a regimen and a process and integrating that into your DNA, things can foul up. We recommend our customers build something very tactical that meets the minimum standards they require, and establish a road map. Then they can continually improve on it over one or two years, depending on their maturity.

Q: How can organizations better manage disaster recovery in the future, given that the number of disasters seems to be on the rise?

DP: Obviously, we urge them not to go it alone but to rely on an outside data center provider with deep and broad experience that can be leveraged to handle any possible disaster. Companies should look for a provider with facilities built to withstand whatever a disaster could generate, and with a sound plan for keeping their systems up and running.

If they’re considering teaming with a managed hosting provider, they should ask several questions to ensure the provider is itself ready for the next superstorm. Does it have a DR plan of its own? What is its internal resource and staffing plan for such a disaster? Is its network architecture resilient? How is power allocated to the facility? What is the level of network diversity? Can it scale up additional capacity via cloud-based hosting services if needed?
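
One way to make that vetting repeatable is to capture the questions as a simple due-diligence checklist. The Python sketch below is an illustration only; the field names are hypothetical, not SunGard terminology or anything prescribed in the interview.

    # Hypothetical checklist derived from the questions above.
    # None means the provider has not yet answered that question.
    provider_checklist = {
        "has_own_dr_plan": None,              # Does it have a DR plan?
        "disaster_staffing_plan": None,       # Internal resource and staffing plan?
        "resilient_network_architecture": None,
        "documented_power_allocation": None,  # How is power allocated to the facility?
        "network_diversity_level": None,      # Carrier and path diversity
        "cloud_burst_capacity": None,         # Can it scale up via cloud hosting?
    }

    def open_questions(checklist):
        """Return the items still awaiting an answer from the provider."""
        return [item for item, answer in checklist.items() if answer is None]

    provider_checklist["has_own_dr_plan"] = True
    print("Still to verify:", open_questions(provider_checklist))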

We remind organizations that DR is a complex regimen and the only way to start is to just begin. DR takes time, is difficult to do and isn’t strategic in terms of driving new revenue. It’s a lot like landscaping. It’s a dirty process and it takes away from the more important things in life. So it’s the perfect chore to outsource so you can focus on what’s more important to your business.


About the Author

Bill Bedsole is a speaker, author and consultant. Founder and President of the William Travis Group and author of the NextGen 360° Advanced BC methodology, he has more than 25 years of experience in the disaster recovery and business continuity industry. He can be reached at Tel: 847-303-0055, Fax: 847-303-0378.