Toward the Virtualized, NON-converged Data Center

Todd Bundy

Think about the wires. It’s easiest to first think about the wires.

Imagine yourself sitting amid the hum of thousands of servers in a typical, growing, non-virtualized data center of a large financial enterprise, insurance company or research/educational institution. There might be dozens of dedicated servers—various application servers, Oracle and SQL servers, email and file servers, print and domain servers, etc. Now walk behind all of those machines, and what do you see?

Wires! Lots and lots of wires. There are multiple wires for multiple fabrics hanging off the back of each server—task-specific wires each for communications, computing, management, storage and the like. What a rat’s nest.

The process of “pulling” cables and ensuring the right wires go to the right places is time-consuming, very costly and potentially disruptive. It’s easy to understand why the people who manage large-enterprise data centers (and their budgets) are feeling pressure to become more efficient by virtualizing servers and converging networks onto one Layer 2 fabric that consolidates the “production” storage area network (SAN) onto the “user” local area network (LAN).

Non-virtualized data centers are sources of spiraling problems as networks grow. Each server is lightly utilized, which means high capital expenditures (CAPEX) for poor return. CAPEX and operational expenditures (OPEX) are multiplied by reliance on dual fabrics built from lower-bandwidth links and by the multiple networking layers that must be addressed in supporting each server. An abundance of network, server and storage managers, along with router, switch and security appliances, translates to more unnecessary cost. Plus, performance for the time-sensitive (and, oftentimes, most critically important) applications is inhibited because latency is added with each processing step across the access, aggregation, core and edge layers of the data-center network.
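
To put rough numbers on that utilization problem, here is a minimal back-of-the-envelope sketch in Python. The server count and utilization figures are illustrative assumptions, not measurements from any particular data center.

import math

# Toy consolidation estimate -- every figure here is an illustrative assumption,
# not a measurement from a real data center.
dedicated_servers = 60      # assumed count of single-purpose servers
avg_utilization = 0.10      # assumed average utilization of each dedicated server
target_utilization = 0.60   # assumed safe ceiling for a virtualized host

# Total work, expressed in "fully busy server" equivalents.
total_load = dedicated_servers * avg_utilization

# Hosts needed if that load is packed onto virtualized servers running at the
# target utilization (ignoring memory, I/O and failover headroom).
hosts_needed = math.ceil(total_load / target_utilization)

print(f"Busy-server equivalents of work: {total_load:.1f}")
print(f"Virtualized hosts needed (CPU only): {hosts_needed}")
# Roughly 10 hosts in place of 60 -- before redundancy, memory and I/O limits.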

Server virtualization and network convergence, on the other hand, offer data-center managers a highly compelling value proposition: server and I/O consolidation, simplified management, lower power consumption and a physical infrastructure that is highly scalable with low latency.

It all sounds so good. But now here’s some bad news.

Some large data centers are toying with the notion of storage virtualization via Fibre Channel over Ethernet (FCoE)—deploying the protocol and effectively shifting the existing enterprise SAN onto the LAN. There are many, many practical ramifications to contend with in this approach. Complications arise in sharing one physical device among several logical partitions; congestion, prioritization and queuing are among the technical issues that can manifest across links, servers, storage adapters, switches and more. Plus, FCoE could entail a disruptive, expensive overhaul of the network core, demanding that some existing gear be swapped out for new low-latency Ethernet switches (and inviting vendor lock-in). There is considerable reason to doubt that FCoE can deliver enough value to offset the costly infrastructure churn it might require.
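
To see why congestion and prioritization loom so large once SAN and LAN traffic share one wire, consider a deliberately simplified single-queue (M/M/1) model. The link speed and frame size below are assumptions chosen only for illustration, not characteristics of any specific FCoE switch.

# Simplified M/M/1 queuing sketch -- a textbook approximation, not a model of
# any particular FCoE or Ethernet switch.
def mm1_delay_us(link_gbps, frame_bytes, utilization):
    """Average time a frame spends queued plus being transmitted, in microseconds."""
    service_time_s = (frame_bytes * 8) / (link_gbps * 1e9)  # serialization time per frame
    return (service_time_s / (1.0 - utilization)) * 1e6     # M/M/1: T = S / (1 - rho)

for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    delay = mm1_delay_us(link_gbps=10, frame_bytes=2000, utilization=rho)
    print(f"link utilization {rho:.0%}: ~{delay:.1f} microseconds per frame")
# Delay climbs sharply as shared links approach saturation, which is why a
# converged fabric needs explicit prioritization rather than best-effort Ethernet.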

Furthermore, low latency, synchronous recovery and continuous availability are big problems for some non-standard implementations of FCoE, and those are must-have attributes for key enterprise Fibre Channel and InfiniBand applications. Because FCoE doesn’t offer sufficient reach, it must be configured as a flat Layer 2 subnet spanning data-center interconnections rather than routed across Layer 3. Over longer distances, SAN users already have that capability with proven Fibre Channel over Internet Protocol (FCIP) gateways. In the metro (under 200 km), companies are running native Fibre Channel over low-latency, physical-layer Dense Wavelength Division Multiplexing (DWDM) systems that are protocol agnostic. Why would they replace this infrastructure with new gateways and routers running FCoE over traditional Ethernet, bridging multiple fabrics along the way? Don’t think so.
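
The distance argument comes down to simple propagation delay. The sketch below uses the standard rule of thumb of roughly 5 microseconds per kilometer of fiber (light travels at about two-thirds of c in glass) and ignores equipment and protocol overhead, so treat it as a lower bound, not a measurement.

SPEED_IN_FIBER_KM_PER_S = 200_000  # approximate propagation speed of light in fiber

def round_trip_ms(distance_km):
    """Pure propagation round-trip time over a fiber span, in milliseconds."""
    one_way_s = distance_km / SPEED_IN_FIBER_KM_PER_S
    return 2 * one_way_s * 1000

for km in (10, 50, 100, 200):
    print(f"{km:>3} km span: ~{round_trip_ms(km):.1f} ms round trip")
# At 200 km the glass alone costs about 2 ms per round trip, and synchronous
# replication pays that on every acknowledged write -- hence the value of
# low-latency, protocol-agnostic DWDM in the metro.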

Data-center virtualization is viable right now, but it is no simple undertaking. Data-center managers must re-examine their infrastructure partners, watch for downstream cost and performance issues, and look to achieve the considerable benefits while continuing to support an array of protocols through a carefully considered implementation of flexible, protocol-agnostic WDM. As for the convergence pitches that sound too good and easy to be true? Well, they probably are.

What do you think?
