One of the more positive developments following the recent string of high-profile natural and man-made calamities is the awareness it has raised among enterprise executives of the need for robust disaster recovery programs. Traditionally, disaster recovery was seen primarily as a cost center — and an expensive one, at that.
Fortunately, at the same time concerns over business continuity and service integrity were growing, a number of technological developments were converging to not only improve enterprises’ ability to keep functioning after an outage, but to lower costs and even utilize DR resources during normal operating periods without compromising their ability to step up in times of emergency.
One of these, obviously, is virtualization. By decoupling data and applications from underlying physical infrastructure, enterprises gain tremendous flexibility in provisioning new operating environments and balancing data loads across available resources. As enterprises gain more experience virtualizing server environments in the drive for greater consolidation, it is widely expected that DR will emerge as one of the technology’s first secondary applications.
“While [virtualization] has made its way into 90 percent of the medium and large business market, only 37 percent of servers in that market are virtualized,” says Nathan Coutinho, solutions manager for virtualization at systems distributor CDW. “So customers are going to focus on trying to virtualize the rest of their servers, which means looking closely at as-yet unvirtualized applications to determine if they are compatible with a virtual platform. Following more widespread adoption, the next phase will be to build out disaster recovery (DR) plans and to start looking at both desktop virtualization and building private clouds.”
In a way, disaster recovery and virtualization form a symbiotic relationship. If virtualization simplifies and enhances disaster recovery capabilities, DR also provides for a safer, more resilient environment in which to virtualize. Multiple surveys have shown that one of the primary reasons enterprises are not extending virtualization into mission-critical systems is that they fear for the integrity of that critical data. As a key piece of the data availability pie, a robust DR architecture can go a long way toward alleviating those concerns.
Whenever the subject of disaster recovery comes up, there is a tendency to invoke images of hurricanes, tidal waves, earthquakes and nuclear annihilation. But more mundane outages caused by system failure or human error are far more disruptive to enterprise productivity, and profits, than a major disaster.
Evidence suggests, though, that most enterprises don’t take the problem of smaller outages very seriously, or at best consider them part of routine data center operations.
“In general, enterprises are probably not doing enough to protect themselves from more mundane outages,” says Paul Egger, vice president of global operations at contact center provider TELUS International. “Unfortunately, there are not a lot of industry statistics on disaster recovery effectiveness, because many business interruptions go unreported, and those that are reported are highly underestimated. Much of the loss of productivity and revenue may go unnoticed in many enterprises.”
This is surprising, because gearing a disaster recovery program toward more typical outages would help justify the cost of the system and the resources it consumes, and it would keep the program in good working order through a series of mini workouts.
Many of the newest DR platforms are taking a closer look at correcting the smaller problems, while still keeping an eye on catastrophic failures. Marathon Technologies, for example, recently released the everRun MX Extend system that combines fault tolerance and DR technology with CA’s ARCserve replication system to create an end-to-end application availability system that kicks in during all types of outages. The system features localized failure protection as well as remote failover that bypasses compromised infrastructure.
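The general pattern described above — handling mundane node failures locally while reserving remote failover for site-wide outages — can be sketched in a few lines. This is a hypothetical illustration of the decision logic, not the actual everRun MX or ARCserve implementation; all names here are invented for the example.

```python
# Hypothetical sketch of tiered failover logic: prefer a local standby for
# everyday faults, and route around compromised infrastructure to a remote
# site only when the whole local site is down. Illustrative names only.

PRIMARY = "primary"
LOCAL_STANDBY = "local-standby"
REMOTE_STANDBY = "remote-standby"

def choose_target(primary_ok: bool, site_ok: bool) -> str:
    """Pick where the workload should run given current health checks."""
    if primary_ok:
        return PRIMARY
    # Local failover covers the common case: one failed node or service.
    if site_ok:
        return LOCAL_STANDBY
    # Site-wide outage: bypass the compromised infrastructure entirely.
    return REMOTE_STANDBY

# A single node crash triggers local failover...
assert choose_target(primary_ok=False, site_ok=True) == LOCAL_STANDBY
# ...while a site-level disaster fails over to the remote facility.
assert choose_target(primary_ok=False, site_ok=False) == REMOTE_STANDBY
```

The point of the tiering is that the expensive remote path is exercised only when the cheap local path cannot help, which keeps routine recovery fast and inexpensive.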
Virtualization also vastly improves the functionality of off-site backup infrastructure through technologies like cloud computing. Once enterprise users are adept at provisioning their own environments using a range of geographically dispersed resources, the loss of one data center, or a cluster of centers for that matter, won’t bring operations to a standstill.
One critical component of off-site virtualized disaster recovery is WAN optimization. When systems go down, a sudden influx of traffic on the wide area network can be nearly as disruptive as a general failure. That’s part of the reason the top virtualization platforms have embraced a range of optimization technologies. VMware and Microsoft users, for example, can take advantage of the HyperIP optimizer from NetEx, which maintains transfer speeds up to 800 Mbps. An added bonus is that the system can streamline routine operations like physical-to-virtual migrations, putting what would otherwise be an idle cost center to good use.
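One technique WAN optimizers commonly apply to replication traffic is compressing it before it crosses the wide area link, since virtual machine disk streams tend to be highly redundant. The sketch below uses generic zlib compression purely to illustrate the principle; it says nothing about HyperIP’s actual algorithms.

```python
import zlib

# Illustrative only: compress replication payloads before they traverse
# the WAN, and losslessly restore them on the far side. A real WAN
# optimizer layers in deduplication and protocol acceleration as well.

def optimize_for_wan(payload: bytes) -> bytes:
    """Shrink a replication payload for transmission over the WAN."""
    return zlib.compress(payload, 6)

def restore(compressed: bytes) -> bytes:
    """Recover the original payload at the DR site."""
    return zlib.decompress(compressed)

# Replication streams are often repetitive, so compression pays off.
block = b"virtual machine disk block " * 1000
wire = optimize_for_wan(block)
assert restore(wire) == block         # lossless round trip
assert len(wire) < len(block) // 10   # far fewer bytes cross the WAN
```

Fewer bytes on the wire is what keeps a post-outage replication surge from saturating the link that production traffic also depends on.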
No matter how DR facilities are set up, we are talking about an entirely new data environment that must be maintained and managed in both good times and bad. As these environments become more complex, so do the management responsibilities.
“Even though DR has come a long ways, enterprises still face several management challenges, beginning with initial configuration and setup,” says Vish Mulchand, director of software product marketing at 3PAR. “Particularly in scenarios where a multi-site, multi-mode disaster recovery configuration is concerned, implementing a DR strategy typically requires professional services for configuration and testing, and deployment can take months.
“Once a DR strategy is in place, regular testing, which is needed to ensure that the implemented DR strategy will actually work in the event of a disaster, can also pose problems,” he added. “Testing is necessary to ensure protection, but it is often time-consuming, and figuring out ways to perform routine testing without disrupting production can prove challenging. Another major challenge is configuring and testing the DR program as it applies to the entire data center, not just the storage piece of the puzzle.”
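One way to run routine tests without disrupting production, as the quote above calls for, is to verify replicas passively — for example, by comparing checksums of primary and replica data instead of performing a live failover. The sketch below is a hypothetical illustration of that idea; the function names are invented and do not come from any specific DR product.

```python
import hashlib

# Hypothetical low-impact DR test: confirm a replica still matches
# production by comparing cryptographic fingerprints, leaving the
# production systems untouched. Illustrative names only.

def fingerprint(blocks) -> str:
    """Hash an ordered sequence of data blocks into one digest."""
    h = hashlib.sha256()
    for block in blocks:
        h.update(block)
    return h.hexdigest()

def replica_consistent(primary_blocks, replica_blocks) -> bool:
    """True if the replica's contents match the primary's exactly."""
    return fingerprint(primary_blocks) == fingerprint(replica_blocks)

primary = [b"block-0", b"block-1", b"block-2"]
replica = list(primary)
assert replica_consistent(primary, replica)   # healthy replica

replica[1] = b"corrupt"                       # silent drift...
assert not replica_consistent(primary, replica)  # ...caught by the next test
```

A scheduled check like this catches silent replica drift between full-scale failover drills, at a fraction of their cost and disruption.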
Many of these hassles can be alleviated, if not eliminated outright (for the user, at least), by outsourcing DR to a third-party provider, provided the enterprise is comfortable hosting data outside its own infrastructure.
“We are now seeing many hosted service providers (HSPs) offering Disaster Recovery-as-a-Service (DRaaS) leveraging the cloud, and this trend will continue to grow,” Mulchand says.

“Disaster recovery issues will evolve; the challenges, however, will remain,” says TELUS’ Egger. “As enterprises architect to include terminal services and thin client workstations, the workstation dependencies of many applications are removed, which can simplify the recovery challenge. In a way, it goes back full circle to the days of mainframe centralized computing — but with virtualization, the playing field gets flatter and cheaper. With cloud computing, your ability to outsource some of the risk and complexity is enhanced, further aggregating scale and reducing the cost of high availability.”
The big payoff, however, is the minimal downtime should the unthinkable occur.