In a networked and cloud-based business era, broadband may be one of the most essential pieces of business continuity and disaster recovery. It's easy to plan for the expected -- but disasters rarely follow a plan conceived on a sunny weekday between 9 and 5. Let me strongly recommend that you review, verify and sanity-check your business continuity plans for 2014. If you don't find at least one area to change after a full review, you might want to reconsider how thorough that review was in the first place.
I write this based on an ongoing disaster recovery saga at a major organization affected by the Polar Vortex of 2014. I can't be more specific about who is involved or what happened, but I can provide a general outline of events. Like many buildings in the U.S. Northeast during the week of January 6, this one saw sub-zero temperatures freeze and burst a water pipe on an upper floor at about "oh dark thirty" in the morning. Water poured down from the pipe to the floors below, making its way through wiring closets and the organization's server room.
A humidity alarm went off at about 3 AM, with a text message/email notification going out to IT staff. Unfortunately, the humidity sensors had a history of false alerts, so the first response was to attempt a remote reset of the alarm to see if that cleared the problem.
By 5:30 AM, the poor IT guy on call arrived at work to find water throughout the building -- including the server room. He hit the red-button emergency power cutoff, but the combination of running and standing water and humidity had already done its evil work on everything from storage to servers to UPS units, which tend to sit at floor level because of their weight. (Key safety note: A soaked UPS is a dangerous thing even after the power is cut. It is more than likely still holding a charge, and water can cause corrosion if units are not dried quickly. Can we say severe electrical hazard?)
At this point, the damage is done. The water makes almost everything in the server room a write-off, given the duration of exposure and the time it will take to stop the leak, drain everything out and safely assess what can be moved (soaked UPSes, remember). Basic wiring needs to be inspected before the power goes back on. Ceiling and floor tiles need to be opened up and dried as a short-term measure, because nobody wants a black mold problem -- and soggy ceiling tiles are a gooey write-off anyway.
How would your business recovery plan be doing at this point? All network services are down and all servers are g-o-n-e. Did I mention the phone system got soaked too, between the wiring closets and the VoIP servers in the server room? There's no inbound or outbound dial tone in the building.
Needless to say, a lot of bandwidth is needed just for recovery. Once new servers are brought up, off-site backups have to be copied onto them. Many organizations keep their primary (large) bandwidth connection or connections in the server room, with secondary connections in another location, but run into trouble during recovery when they find that the secondary connection isn't big enough to handle the sudden glut of data needed for off-site restoration. Complicating matters can be a failure to provision a scalable bandwidth solution with a vendor able and willing to turn up the pipe when you need it. I've heard of several "Oh no!" moments where secondary links failed to scale when they were needed most, followed by knockdown fights with the secondary service provider to get what is needed.
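To make that concrete, here's a minimal back-of-the-envelope sketch of how long an off-site restore would take over secondary links of various sizes. The backup size, recovery time objective and link speeds below are hypothetical placeholders, not figures from this incident -- plug in your own.

```python
# Rough restore-time estimate: how long to pull off-site backups down
# over the secondary link? All figures are hypothetical placeholders.

def restore_hours(backup_tb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Hours to transfer backup_tb terabytes over a link_mbps link.

    efficiency accounts for protocol overhead and contention -- you rarely
    see full line rate on a loaded link.
    """
    backup_bits = backup_tb * 8 * 10**12           # TB -> bits (decimal TB)
    effective_bps = link_mbps * 10**6 * efficiency
    return backup_bits / effective_bps / 3600

if __name__ == "__main__":
    backup_tb = 20          # hypothetical off-site backup set
    rto_hours = 24          # hypothetical recovery time objective

    for link_mbps in (50, 100, 500, 1000):
        hours = restore_hours(backup_tb, link_mbps)
        verdict = "OK" if hours <= rto_hours else "blows the RTO"
        print(f"{link_mbps:>5} Mbps secondary link: ~{hours:,.1f} hours ({verdict})")
```

Even at a full gigabit, a 20 TB restore runs into days, not hours -- which is exactly the kind of surprise that turns into a knockdown fight with the secondary provider.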
Another factor to consider is the impact of off-site cloud solutions on disaster recovery, especially when you add voice into the mix. You need much more bandwidth if you are suddenly thrust from a dead server room with a dead IP PBX onto a cloud solution. Unless you've carefully calculated the bandwidth that sudden switch from on-site to cloud voice requires and added it to the requirements for your burstable/scalable secondary provider(s), you'll find yourself coming up short.
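A minimal sketch of that voice calculation, assuming commonly cited per-call figures for G.711 and G.729 over Ethernet (20 ms packetization) and hypothetical concurrent-call counts:

```python
# Rough sizing of the extra WAN bandwidth needed when voice suddenly moves
# from a dead on-site IP PBX to a hosted/cloud provider. Per-call figures are
# commonly cited approximations; concurrent-call counts are hypothetical.

PER_CALL_KBPS = {
    "G.711": 87.2,   # uncompressed voice plus RTP/UDP/IP/Ethernet overhead
    "G.729": 31.2,   # compressed voice plus the same overhead
}

def voice_mbps(concurrent_calls: int, codec: str = "G.711") -> float:
    """Approximate WAN bandwidth (Mbps) for a number of simultaneous calls."""
    return concurrent_calls * PER_CALL_KBPS[codec] / 1000

if __name__ == "__main__":
    for calls in (25, 50, 100, 200):
        print(f"{calls:>4} concurrent calls: "
              f"~{voice_mbps(calls, 'G.711'):.1f} Mbps (G.711), "
              f"~{voice_mbps(calls, 'G.729'):.1f} Mbps (G.729)")
```

Voice is small next to the restore traffic, but it is latency- and jitter-sensitive, so it has to be counted (and ideally prioritized) on whatever link is left standing.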
My biggest takeaway here is that any secondary provider needs to be able to deliver as much bandwidth as you currently get from your primary provider or providers -- if not more, factoring in the need for cloud solutions to replace in-house services. The secondary solution should look as similar as possible to the primary, down to the router and Ethernet equipment, with the secondary provider delivering bandwidth over a scalable Ethernet service that can have you running at full needed capacity within minutes of a phone call. Ethernet over fiber is the only bandwidth delivery option for both primary and secondary providers, as it is the only one that can scale rapidly and simply.
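Putting the pieces together, a quick sanity check of recovery-mode demand against a primary-sized secondary link might look like this -- again, every number here is a hypothetical placeholder:

```python
# Sanity check: does a secondary link sized like the primary cover
# recovery-mode demand? All numbers are hypothetical placeholders.

primary_mbps = 1000        # normal primary link
restore_mbps = 700         # sustained off-site restore traffic
cloud_voice_mbps = 9       # ~100 G.711 calls moved to a cloud PBX
other_cloud_mbps = 400     # email, file shares and apps shifted off-site

needed = restore_mbps + cloud_voice_mbps + other_cloud_mbps
status = "fits within" if needed <= primary_mbps else "exceeds"
print(f"Recovery-mode demand: ~{needed} Mbps, which {status} "
      f"a primary-sized {primary_mbps} Mbps secondary link")
```

If the total comes out above your primary's capacity -- and in a real recovery it often will -- that's your case for a burstable secondary contract, negotiated before the pipe bursts rather than after.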