Same Old Convergence Song, New Verse

Todd Bundy

The latest in the long line of single, unifying, convergence-protocol darlings is Data Center Bridging (DCB) or “lossless Ethernet,” but data-center managers have heard its siren song before.

Just as was the case with Fiber Distributed Data Interface (FDDI), Asynchronous Transfer Mode (ATM) and InfiniBand, the story goes that DCB will bring simplified, more cost-efficient order to corporate networking via one broad-shouldered fabric onto which all existing enterprise local and storage area network (LAN and SAN) applications will someday be collapsed. It is a nice thought, for sure; every data-center manager likes the idea of streamlining architectural and organizational complexities and freeing time and money to concentrate on creatively improving service for end users. And, indeed, there is value that DCB can deliver today, such as I/O consolidation inside the server.

As always, however, the devil is in the details. The mature protocols that emergent DCB is supposed to subsume have been developed and refined over years to deliver key features and functionality that are tailored to the unique applications that they enable.

Consider, for example, the flow-control mechanism that Fibre Channel employs to ensure the reliability and performance of mission-critical, real-time business continuity and disaster recovery (BC/DR) services. To support these most time-sensitive of applications over distance with no packet loss, Fibre Channel switches use "buffer credits" across Dense Wavelength Division Multiplexing (DWDM) wavelengths in a metro network (usually less than 150 km). Large enterprise customers will typically use native Fibre Channel over DWDM wavelengths to achieve the lowest latency and highest throughput for synchronous off-site storage replication of a company's most critical data.
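To make the distance arithmetic concrete, here is a back-of-the-envelope sketch in Python of how many buffer-to-buffer credits it takes to keep a wavelength full. The constants (light covering roughly 200,000 km per second in glass, a 2,112-byte maximum frame) are standard rules of thumb, not vendor specifications:

```python
import math

# Back-of-the-envelope credit sizing for Fibre Channel over a DWDM link.
# Constants are rules of thumb, not vendor specifications.
LIGHT_SPEED_FIBER_KM_S = 200_000   # light travels at roughly 2/3 c in glass
FC_FRAME_BYTES = 2_112             # maximum Fibre Channel frame size

def credits_needed(distance_km: float, line_rate_gbps: float) -> int:
    """Credits required so the sender never idles waiting for an R_RDY."""
    round_trip_s = 2 * distance_km / LIGHT_SPEED_FIBER_KM_S
    bits_in_flight = round_trip_s * line_rate_gbps * 1e9
    return math.ceil(bits_in_flight / (FC_FRAME_BYTES * 8))

for km in (10, 50, 100, 150):
    print(f"{km:>4} km at 10 Gbps: ~{credits_needed(km, 10.0)} credits")
```

At 150 km and 10 Gbps, the sender needs nearly 900 full-size frames' worth of credit just to cover the light already on the fiber, which is why credit depth, not raw line rate, governs metro reach.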

Unless the Ethernet switch vendors drastically increase the interface buffers on their 10GbE ports, we fail to see how 10G Fibre Channel over Ethernet (FCoE)/DCB will match the reach and performance of native 10G Fibre Channel over DWDM.
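The same round-trip arithmetic shows why those interface buffers matter. Once a lossless Ethernet receiver signals PAUSE, everything already on the wire keeps arriving, so the minimum headroom per port grows linearly with distance. A rough sketch, under the same assumed fiber figures as above:

```python
# Minimum PFC headroom per lossless port: after PAUSE is sent, one full
# round trip of traffic can still arrive (data in flight plus the time
# the PAUSE frame takes to reach the sender). Figures are illustrative.
LIGHT_SPEED_FIBER_KM_S = 200_000

def pfc_headroom_bytes(distance_km: float, rate_gbps: float) -> float:
    round_trip_s = 2 * distance_km / LIGHT_SPEED_FIBER_KM_S
    return round_trip_s * rate_gbps * 1e9 / 8

for km in (1, 10, 50, 150):
    mb = pfc_headroom_bytes(km, 10.0) / 1e6
    print(f"{km:>4} km at 10GbE: at least {mb:.3f} MB of buffer")
```

Roughly 2 MB of dedicated buffer per port at metro distances is far beyond the shallow interface buffers typical of today's 10GbE switch ports, and that gap is the crux of the reach problem.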

In addition, the Priority Flow Control (PFC) used by lossless Ethernet works in the opposite way from Fibre Channel buffer-to-buffer credits. When the receiver starts running out of buffers, it must send a PAUSE message to stop the sender. What happens to the data already in transit? If the remaining buffer headroom cannot absorb it, it is dropped, and lost packets typically are not acceptable for SAN applications. In the Fibre Channel world, buffer-to-buffer credit starvation simply slows down performance and throughput. It doesn't just switch off the sender.
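The behavioral difference is easy to see in a toy model. The sketch below, with all parameters invented purely for illustration, runs the same sender and receiver under pause-based and credit-based control. Under PFC, frames already in flight when the PAUSE goes out can overflow the buffer and are dropped; under credits, the sender can never have more frames outstanding than the receiver has room for:

```python
from collections import deque

DELAY = 3        # one-way propagation, in time slots (invented figure)
BUF_CAP = 4      # receiver buffer depth, in frames (invented figure)
PAUSE_AT = 2     # PFC: ask the sender to stop at this occupancy
SLOTS = 40

def simulate_pfc():
    """Pause-based control: the sender runs until a PAUSE reaches it;
    frames already on the wire keep landing and can overflow the buffer."""
    wire = deque()       # arrival slots of data frames in flight
    signals = deque()    # (arrival_slot, pause?) control messages in flight
    buf = sent = dropped = 0
    paused = False
    for t in range(SLOTS):
        while wire and wire[0] <= t:          # data reaching the receiver
            wire.popleft()
            if buf < BUF_CAP:
                buf += 1
            else:
                dropped += 1                  # no room for the in-flight frame
        if t % 2 == 0 and buf > 0:            # receiver drains 1 frame / 2 slots
            buf -= 1
        signals.append((t + DELAY, buf >= PAUSE_AT))
        while signals and signals[0][0] <= t: # control reaching the sender
            paused = signals.popleft()[1]
        if not paused:                        # full rate until told to stop
            wire.append(t + DELAY)
            sent += 1
    return sent, dropped

def simulate_credits():
    """Credit-based control: one credit is spent per frame, so outstanding
    frames can never exceed the buffer space the receiver advertised."""
    wire = deque()       # data frames in flight
    returns = deque()    # R_RDY credit returns in flight
    credits, buf, sent = BUF_CAP, 0, 0
    for t in range(SLOTS):
        while wire and wire[0] <= t:
            wire.popleft()
            buf += 1                          # room is guaranteed by the credit
        if t % 2 == 0 and buf > 0:            # receiver drains 1 frame / 2 slots
            buf -= 1
            returns.append(t + DELAY)         # credit goes back to the sender
        while returns and returns[0] <= t:
            returns.popleft()
            credits += 1
        if credits > 0:                       # throttles smoothly, never drops
            credits -= 1
            wire.append(t + DELAY)
            sent += 1
    return sent, 0

print("PFC:     sent=%d dropped=%d" % simulate_pfc())
print("credits: sent=%d dropped=%d" % simulate_credits())
```

In this toy run, the pause-based side drops frames and oscillates between full rate and silence, while the credit-based side settles to the receiver's drain rate with zero loss. That is the qualitative difference storage managers care about.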

Alternatively, the Fibre Channel traffic can be demultiplexed from the converged DCB/FCoE stream and then encapsulated for transport over distance through Fibre Channel over IP (FCIP) gateways. But Fibre Channel SAN users already rely on FCIP gateways today when going extended distances beyond the metro (greater than 150 km). So what is the point of adding the costs and conversion latencies introduced by the FCoE/DCB scheme on top of this?
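A simple latency budget makes the objection concrete. Every per-stage figure in the sketch below is a placeholder assumption, chosen only to show how the extra encapsulation steps stack up; actual numbers vary by product and should be measured:

```python
# Toy latency budget for reaching an FCIP gateway, before propagation
# delay. Every microsecond figure is a placeholder assumption, not a
# measured or published number.
NATIVE_FC_PATH = {
    "FC switch forwarding": 2.0,       # microseconds (assumed)
    "FCIP encapsulation":   20.0,      # microseconds (assumed)
}
FCOE_DCB_PATH = {
    "FCoE encapsulation":     5.0,     # microseconds (assumed)
    "DCB switch forwarding":  2.0,     # microseconds (assumed)
    "FCoE de-encapsulation":  5.0,     # microseconds (assumed)
    "FCIP encapsulation":     20.0,    # microseconds (assumed)
}

for name, path in (("native FC -> FCIP", NATIVE_FC_PATH),
                   ("FCoE/DCB -> FCIP", FCOE_DCB_PATH)):
    print(f"{name}: {sum(path.values()):.0f} us plus propagation delay")
```

Whatever the exact figures, the converged path pays for two extra conversions on every frame, and it still ends up at the same FCIP gateway.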

One thing is for sure: Fibre Channel isn't going away anytime soon. Deployment figures show that enterprise reliance on the protocol for valuable SAN services is actually growing. And the reasons are not strictly technology-oriented, either; there are political and behavioral issues as well. SAN and LAN are typically the responsibility of different managers at a given enterprise; how willing will the SAN manager be to entrust her or his "production storage traffic" to an unproven protocol that is being touted by the LAN group?

The fact is that the data center is likely to remain a heterogeneous protocol environment for years to come. So what does this mean for data-center managers who are determined to realize the benefits of server and I/O consolidation, simplified management and power-cost reduction through innovations such as server virtualization? Those benefits are available, but for the foreseeable future they will not come from merely deploying DCB or some other convergence fabric and leaving behind all the tried-and-true protocols that are serving enterprises today. WDM unifies native protocols at the physical layer, and a well-conceived implementation can deliver the tremendous bandwidth, ultra-low latency, modular security and other qualities that data-center managers need for their most innovative applications.
