Written by Bill Bedsole

The BIA’s Fundamental Flaws

The goal of a traditional BIA is sound in theory: to identify and prioritize the business's critical processes relative to the losses that would result from their interruption, and then to convince management to fund the most cost-effective and operationally effective architecture for recovering those processes in the aftermath of a disaster.

However, in practice, the BIA process repeatedly fails to produce the desired results, and in virtually every instance we have seen, it delays and complicates the recovery planning process. We believe there are at least six fundamental flaws in the traditional BIA process:

1. Losses are not linear

2. Probability is not predictable

3. Impact must be additive

4. The business will make up the difference

5. The wrong solution curve

6. The static report

Losses are not linear

Expecting business leaders to determine the financial losses associated with a single process is an exercise in futility within today's large, distributed corporations and their hugely complicated, intertwined business environments. Business unit leaders are usually unable to accurately estimate the losses associated with a process interruption, but when pressed by the BIA process for a "guesstimate", they will provide an answer. In the best cases, their answers will duplicate many of the same loss elements that other departments included in their own guesstimates. In the worst cases, their answers will be totally discounted by management and will cast doubt on the whole analysis.

Even if we were to assume that losses could be estimated perfectly, the process would be impractical at best. Let's say we can estimate when an interruption will irritate our best customer, versus when it will cost us an order from that customer, versus when it will cause us to permanently lose that customer. Let's further assume that we can predict exactly when the loss will occur and exactly how much it will cost us…to the penny. The next step would be to estimate the loss for our second-best customer, then the third-best, and so on. For each customer, the answer would be different based on that customer's particular attitudes, needs, priorities, and so forth.

And even if we estimated perfectly for each and every customer, our analysis has not begun to address different durations of interruption, whether the supply chain has been impacted too, whether the event caused larger regional ramifications, variable impact based on time of occurrence, and a host of other variables that can dramatically change the loss profile based on the timing and nature of the event. Clearly, a different metric and analytic process is needed to define recovery priorities, and a different lens is needed to garner management's commitment.
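
To make the non-linearity concrete, here is a minimal sketch in which all customers, thresholds and dollar figures are invented for illustration. It models each customer's loss as a step function of outage duration: total loss does not climb along a smooth line, it jumps at customer-specific breakpoints that no single-process estimate can capture.

```python
# Hypothetical sketch: per-customer loss as a step function of outage duration.
# All names, thresholds and dollar figures are invented for illustration only.

CUSTOMERS = [
    # (name, hours until an order is lost, hours until the customer is lost,
    #  value of one order, lifetime value of the customer)
    ("Customer A", 8,  72, 50_000, 2_000_000),
    ("Customer B", 24, 48, 10_000,   400_000),
    ("Customer C", 4, 120, 75_000, 1_500_000),
]

def loss_for_customer(outage_hours, order_after, churn_after, order_value, lifetime_value):
    """Estimated loss for one customer after a given outage duration."""
    if outage_hours >= churn_after:
        return lifetime_value          # customer permanently lost
    if outage_hours >= order_after:
        return order_value             # an order lost, customer retained
    return 0                           # irritation only, no measurable loss

def total_loss(outage_hours):
    return sum(loss_for_customer(outage_hours, *c[1:]) for c in CUSTOMERS)

if __name__ == "__main__":
    for hours in (2, 6, 12, 30, 60, 100):
        print(f"{hours:>3} h outage -> estimated loss ${total_loss(hours):,}")
```

Even in this toy model the loss profile is jagged and depends entirely on which breakpoints the outage duration happens to cross; multiply it by thousands of customers, supply-chain effects and timing variables, and the futility of a single linear estimate becomes obvious.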

Probability is not predictable

The second problem with the traditional BIA process is that even if the loss estimates are assumed to be perfectly accurate and are implicitly accepted by management, the very next question is always: "OK, if these are the losses we can expect, what is the likelihood that they will occur?" As soon as probability enters the picture, the battle has been lost. There has never been a risk analysis that has statistically justified disaster recovery planning, although there have been many that have been used as an excuse for avoiding or delaying planning efforts. But even if the odds of a disaster striking were one in a million, the laws of probability mean that the one chance could occur tomorrow just as easily as a million years from now. Again, a different way of pragmatically justifying recovery planning must be found!
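
As a rough illustration of why the probability question is a trap, the sketch below assumes a hypothetical one-in-a-million daily chance of a disaster. The arithmetic can tell you the odds over a planning horizon, but it can never tell you that the event will not happen tomorrow.

```python
# Hypothetical illustration: a "1 in a million per day" event is just as likely
# tomorrow as on any other single day, and over a long planning horizon the
# chance of at least one occurrence is never zero.

DAILY_PROBABILITY = 1 / 1_000_000   # assumed per-day chance of a disaster

def chance_of_at_least_one(days):
    """Probability of at least one event over the given number of days,
    assuming independent days with the same small daily probability."""
    return 1 - (1 - DAILY_PROBABILITY) ** days

if __name__ == "__main__":
    print(f"Chance it happens tomorrow:  {DAILY_PROBABILITY:.6%}")
    print(f"Chance within 10 years:      {chance_of_at_least_one(3650):.4%}")
    print(f"Chance within 30 years:      {chance_of_at_least_one(10950):.4%}")
```

No matter how small the number, the calculation says nothing about when the event will strike, which is exactly why probability arguments cannot be allowed to decide whether recovery planning is funded.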

Impact must be additive

The third flaw stems from the inability of BIA methodologies to aggregate and differentiate impact. Most methodologies prioritize processes according to a scale (1–5, low–medium–high, critical–important–deferrable, etc.). Regardless of the chosen scale, the problem arises when multiple impact categories are considered (legal, financial, image, health and safety, upstream, downstream, etc.) and the methodology fails to provide a technique for aggregating impact across all of the categories. For example, if the 1–5 scale is used, is a process with a level 5 financial impact and a level 3 legal impact more or less important than one with a level 3 financial impact, a level 3 image impact and a level 3 downstream impact? Few methodologies address this challenge, and most default to labeling the process according to its highest rating (in this example, a level 5 priority). This overly simple approach sacrifices granularity and, as a result, the ability to distinguish real priorities between processes at time of disaster.
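
One way to preserve that granularity is to aggregate impact additively rather than taking the single highest rating. The sketch below uses the two example processes above; the category weights are an assumption chosen for illustration, not a prescribed standard.

```python
# Minimal sketch of additive impact scoring on a 1-5 scale per category.
# The categories and weights below are illustrative assumptions.

WEIGHTS = {"financial": 1.0, "legal": 0.8, "image": 0.6,
           "health_safety": 1.2, "upstream": 0.5, "downstream": 0.5}

def max_score(impacts):
    """Traditional approach: label the process by its single highest rating."""
    return max(impacts.values())

def additive_score(impacts):
    """Aggregate impact across all categories, so several moderate impacts
    can register against one high impact standing alone."""
    return sum(WEIGHTS.get(cat, 1.0) * level for cat, level in impacts.items())

process_a = {"financial": 5, "legal": 3}
process_b = {"financial": 3, "image": 3, "downstream": 3}

print("max:      A =", max_score(process_a), "  B =", max_score(process_b))
print("additive: A =", round(additive_score(process_a), 1),
      "  B =", round(additive_score(process_b), 1))
```

With the max-of-categories approach, process A is a "5" and process B is a "3", implying a wide gap; the additive scores show the two are much closer, which is exactly the granularity needed to sequence recoveries at time of disaster.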

The business will make up the difference

The next problem comes from a fundamental flaw in thinking that the BIA process perpetuates. By focusing on business processes in an attempt to place responsibility for defining criticality and budget with the business owners (as opposed to IT), the BIA implies that "process recovery" is the goal. In fact, there are few if any critical business processes in today's corporations which stand alone and which can be recovered or "continued" manually. In virtually all cases, the automated applications which support the critical processes must be recovered in order for the process itself to continue. This is the BIA's fourth fundamental flaw.

Business owners rarely understand the applications they use in sufficient detail to provide the information necessary for disaster recovery planning. They do not understand application inter-dependencies, they do not know which physical assets their applications run on or which platforms must cross-communicate, they do not understand the network requirements for minimal connectivity, and they usually do not know the physical nature of the files that contain their data or how those files are backed up or rotated. The result is that the BIA cannot be truly complete without Applications and Operations involvement, which is usually outside the original project's scope and usually requires a second project once the primary BIA is completed, which in turn further delays proactive planning efforts. To avoid this outcome, there is a tendency to assume that, through a superhuman manual effort, the business will make up for the technology shortfall. This is seldom the case; more often, well-intentioned but naive assumptions about the viability of manual alternatives will actually exacerbate the situation and complicate the recovery.
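
A simple way to see why business-owner knowledge alone is insufficient: the real recovery scope of a process is the transitive closure of its application and infrastructure dependencies. The sketch below uses entirely invented names to illustrate the point.

```python
# Hypothetical sketch: recovering a "business process" really means recovering
# the full chain of applications, platforms and network links beneath it.
# Every name below is invented for illustration.

DEPENDENCIES = {
    "order_entry_process": ["order_app"],
    "order_app": ["erp_db", "auth_service", "wan_link_chicago"],
    "erp_db": ["storage_array_7", "offsite_backup_rotation"],
    "auth_service": ["ldap_cluster"],
}

def recovery_scope(item, resolved=None):
    """Return everything that must be recovered for `item` to function."""
    if resolved is None:
        resolved = set()
    for dep in DEPENDENCIES.get(item, []):
        if dep not in resolved:
            resolved.add(dep)
            recovery_scope(dep, resolved)
    return resolved

print(sorted(recovery_scope("order_entry_process")))
# The business owner sees "order entry"; the recovery plan must see all of the above.
```

The business owner can name the process and perhaps the front-end application; everything below that line typically requires Applications and Operations input, which is why a BIA scoped only to the business units cannot be complete.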

The wrong solution curve

This flaw manifests in sub-optimal and/or overly expensive recovery architectures. Most BIAs gather information based on the "Sweet Spot" premise, which was formulated in the DR/BC industry's early days and was illustrated as two opposing curves plotted against a vertical axis of solution cost and a horizontal axis of recovery time. The first curve sloped smoothly from the high left (great cost for fast recovery) to the low right (little cost for slow recovery). The second rose from the low left (little loss from short outages) to the high right (great loss from long outages). The "Sweet Spot" was where the two curves intersected and was supposed to represent the optimal balance between the cost of prevention and the cost of outage. At first, this concept appears logical. In reality, however, neither the cost of prevention nor the cost of outage is a curve at all. They are staircases, and very uneven staircases at that. In the real world, as you move up the cost scale or along the time scale, the moves are not smooth and gradual; they are abrupt and dramatic. For example, going from 3-day recovery to sub-24-hour recovery is not an incremental 10% or 20% cost; it is a significant jump that might represent 3, 4 or 5 times more cost. Unless the BIA methodology specifically gathers data in recognition of this behavior, the data needed for optimal solution modeling will not be available.
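
A rough sketch of the staircase behavior, using invented recovery tiers and costs: cost does not slide smoothly along a curve as the acceptable recovery time grows; it drops in large, uneven steps, and the data gathered by the BIA has to reflect those steps.

```python
# Illustrative "staircase" cost model. The tiers and dollar figures are
# invented for illustration; real numbers come from solution design, not a curve.

# (maximum recovery time in hours, annual cost of a solution that achieves it)
COST_STEPS = [
    (4,    2_500_000),   # active/active, near-continuous availability
    (24,   1_200_000),   # hot standby site
    (72,     300_000),   # shared recovery facility, restore from backups
    (168,     80_000),   # cold site / quick-ship equipment
]

def solution_cost(rto_hours):
    """Cheapest listed solution that still meets the requested recovery time."""
    for max_hours, cost in COST_STEPS:
        if rto_hours <= max_hours:
            return cost
    return COST_STEPS[-1][1]

for rto in (2, 20, 30, 72, 96):
    print(f"RTO {rto:>3} h -> ${solution_cost(rto):,} per year")
# Note the jump: tightening the requirement from 72 hours to under 24 hours
# quadruples the cost in this example; it is not a 10-20% increment.
```

Because the real options sit on steps like these, a BIA that records only a single "desired recovery time" cannot support trade-off decisions; it must capture what the business can tolerate at each step boundary.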

The static report

The final flaw may be the most critical. A BIA is typically a static report whose data reflects the business's needs at a specific point in time. Once the report is presented to management, it is usually put on the shelf to gather dust until the next refresh…two or three years from now. But gaining management's commitment and prioritizing recovery requirements for planning purposes is possibly the least important use for this data. Instead, the data must be kept evergreen and should be immediately available at time of disaster to dynamically model recovery priorities based on the specific disaster event and the specific loss profile resulting from that event. The days of "all or nothing" plans are long past.

This dictates that the BIA must be a tool, not a static report, regardless of how good that report might be. The tool must contain all of the recovery objectives, needs and resources, correlated by location, department and process. It must be able to interactively illustrate the unique recovery priorities and sequences based on the actual loss from the specific event. It must be able to support decision making at time of disaster by clearly illustrating increasing impact relative to time and the corresponding reduced effectiveness of manual procedures. And it must be able to facilitate dynamic reassignment of recovery resources based on current needs and priorities.
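
What "a tool, not a report" might look like in data terms: structured recovery records that can be filtered by what the event actually hit and re-ranked as the outage lengthens. The sketch below is a minimal illustration with invented fields and figures, not a prescribed schema.

```python
# Minimal sketch of keeping BIA data as queryable records that can be
# re-prioritized at time of disaster. All names, scores and numbers are
# illustrative assumptions.

from dataclasses import dataclass

@dataclass
class RecoveryItem:
    location: str
    department: str
    process: str
    applications: list
    rto_hours: int                 # recovery time objective
    impact_score: float            # aggregated impact (see earlier example)
    manual_workaround_hours: int   # how long manual procedures remain viable

INVENTORY = [
    RecoveryItem("Chicago", "Sales",   "Order entry", ["order_app"], 24, 9.1, 8),
    RecoveryItem("Chicago", "Finance", "Payroll",     ["hr_app"],    72, 6.4, 120),
    RecoveryItem("Dallas",  "Support", "Call center", ["crm_app"],   12, 8.2, 4),
]

def prioritize(affected_locations, elapsed_hours):
    """Rank only what the event touched, putting items whose manual
    workarounds are already exhausted at the front."""
    hit = [i for i in INVENTORY if i.location in affected_locations]
    return sorted(
        hit,
        key=lambda i: (i.manual_workaround_hours > elapsed_hours,
                       -i.impact_score, i.rto_hours),
    )

for item in prioritize({"Chicago"}, elapsed_hours=10):
    print(item.department, "-", item.process)
```

In practice this would be a maintained database rather than literals in code, but the structure shows what "evergreen" data has to contain if it is to drive at-time-of-disaster decisions instead of sitting on a shelf.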

Only by understanding the true needs of the business and how applications and processes interact at the detail level can proportionate and cost-effective recovery strategies be designed.

A traditional BIA is too often an artificial project intended simply to convince management to invest in business continuity planning by painting a picture of abstract risk and losses. The BIA needs to be redesigned to reveal the detailed process and application information that is mandatory for crafting the most cost-effective and workable final solution. It must produce data that enables the design of much more finely tuned recovery strategies, which in turn offer much better recoverability at a much lower price point…strategies that take maximum advantage of existing resources and infrastructure and support the business process requirements in the most functional and cost-effective manner.


About the Author

Bill Bedsole is a Speaker, Author and Consultant. Founder and President of the William Travis Group and author of the NextGen 360° Advanced BC methodology, he has more than 25 years of experience in the Disaster Recovery and Business Continuity industry. He can be reached at Tel: 847-303-0055, Fax: 847-303-0378.