Ultra-low Latency for Linking Data Centers

Todd Bundy
Fibre light

Certain local and storage area network (LAN and SAN) applications have such severe intolerance of packet delay that even deploying the latest and greatest network interface cards, high-capacity core network switches and multi-core servers will not necessarily ensure that required performance characteristics are achieved. Delivering ultra-low latency in connecting data centers demands its own targeted strategy.

How long does it take for a packet of data to make its way from one point in a network to another? That measurement of time is defined as the latency of a path, and it can be influenced by a tremendous range of factors. Data-center managers in the most latency-sensitive markets must steep themselves in an understanding of all of those factors if they are to give their companies optimal competitive advantage.

The first good, simple step, of course, is finding the shortest paths between one data center and another; each physical kilometer of fiber translates into roughly five microseconds of one-way latency. Data rate and protocol also play critical roles for latency-sensitive applications. For instance, Fibre Channel is a highly reliable SAN protocol for production storage traffic, but it requires two round trips over these links, which effectively doubles the latency versus TCP/IP. And if you think that Fibre Channel over Ethernet (FCoE) will solve the problem, think again (see previous blog posts "Toward the Virtualized, NON-converged Data Center" and "Same Old Convergence Song, New Verse"). Data rate and optics also merit investigation; delay through 1G optics is much higher than through 10G or 40G.
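The arithmetic above can be sketched in a few lines. This is an illustrative back-of-the-envelope estimator, not a measurement tool; the function names and the 40 km example distance are assumptions, while the 5 µs/km figure and the two-round-trip behavior of Fibre Channel come from the text:

```python
# Rule of thumb from the text: light in fiber covers ~1 km
# in ~5 microseconds, one way.
US_PER_KM = 5.0

def one_way_latency_us(distance_km: float) -> float:
    """One-way propagation latency over a fiber span."""
    return distance_km * US_PER_KM

def protocol_latency_us(distance_km: float, round_trips: int) -> float:
    """Total propagation latency for a protocol exchange.

    Per the text, a Fibre Channel write needs two round trips;
    a TCP/IP exchange is modeled here as a single round trip.
    """
    return 2 * round_trips * one_way_latency_us(distance_km)

# Hypothetical 40 km link between data centers:
print(one_way_latency_us(40))      # 200.0 us one way
print(protocol_latency_us(40, 1))  # 400.0 us for one round trip (TCP/IP)
print(protocol_latency_us(40, 2))  # 800.0 us for Fibre Channel's two round trips
```

The doubling is visible immediately: on the same glass, the two-round-trip protocol pays twice the propagation bill of the one-round-trip protocol.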

The second step is a technology choice. Time Division Multiplexing (TDM) systems, and the Forward Error Correction (FEC) processing they rely on to carry optical signals across fiber paths of significant distance, inflate latency; Wavelength Division Multiplexing (WDM) is the more appropriate choice for delay-sensitive applications.

There also exist key technological differences among the WDM solutions of different manufacturers with significant ramifications for latency. Thin film filters, for example, produce delays of varying increments depending on the wavelength color on which the given application is riding. Not only is the delay itself unnecessary; thin film filters also introduce complexity in that data-center managers or their service providers must account for the different latencies per wavelength.
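That per-wavelength bookkeeping can be modeled simply. The sketch below shows the accounting task the text describes; the channel names and nanosecond values are made-up placeholders, not vendor figures:

```python
# Thin film filters impose a different delay per wavelength (channel),
# so the latency budget must be tracked per channel rather than per link.
# All delay values below are illustrative placeholders.
channel_delay_ns = {
    "ch1_1550.12nm": 12.0,
    "ch2_1550.92nm": 24.0,
    "ch3_1551.72nm": 36.0,
}

# Which channel pays the largest filter penalty, and how much
# latency skew exists across channels on the same fiber?
worst = max(channel_delay_ns, key=channel_delay_ns.get)
skew_ns = max(channel_delay_ns.values()) - min(channel_delay_ns.values())
print(worst, skew_ns)  # ch3_1551.72nm 24.0
```

Even with invented numbers, the point stands: two applications riding the same fiber can see different delays purely because of the color they were assigned.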

Additional Sources of Transport Latency

Wringing dozens of microseconds of additional latency from infrastructures is possible if data-center managers know where to look for the hidden pockets of delay along and within the WDM transport links among their facilities:

  • Color conversion—How the WDM system converts traffic to individual bands of light is another point of differentiation among equipment vendors. Low-latency transponders that carry out this function in the single-digit nanoseconds have emerged.
  • Amplification—The technologies used to boost optical signals as they inevitably weaken along fiber paths vary in their latency impact. Commonly deployed high-gain, dual-stage Erbium Doped Fiber Amplifiers (EDFAs), for example, can introduce microseconds of delay, whereas latency-optimized architectures can carry out the same functions while injecting only half as much latency. Alternatively, counter- and co-propagating Raman amplifiers can be implemented for even more substantial latency improvements.
  • Dispersion compensation—In some networks, spools of dispersion compensation fiber (DCF) are deployed to counter the degradation of optical signals caused by a common phenomenon, “chromatic dispersion,” in which the spectral components of especially high-speed signals (10Gbit/s or more) travel at slightly different velocities and smear the pulses in time. Chromatic dispersion can alternatively be combated with Fiber Bragg Gratings (FBGs), without incurring the latency introduced by deploying kilometers of DCF.
  • Regeneration—Like amplification, regeneration is a necessary function if optical signals are to be protected from degradation as they are carried across fiber paths of significant distance. And as is the case with the different amplification techniques, the way a given WDM system performs regeneration substantially impacts path delay. Latency-optimized implementations can reduce delay from the hundreds of microseconds to only nanoseconds.
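Put together, the sources above form a per-link latency budget. The sketch below sums one such budget; the component values are placeholders chosen only to match the orders of magnitude the text mentions (microseconds for dual-stage EDFAs, single-digit nanoseconds for low-latency transponders), and the 80 km span is assumed:

```python
# Illustrative transport latency budget for a hypothetical 80 km
# WDM link. Every value is a placeholder, not a vendor datasheet figure.
budget_ns = {
    "fiber_80km":       80 * 5_000,  # ~5 us/km propagation in fiber
    "transponder_pair": 2 * 5,       # single-digit-ns color conversion, each end
    "edfa_dual_stage":  2_000,       # microseconds-scale EDFA amplification
    "fbg_dispersion":   50,          # FBG in place of kilometers of DCF
    "regeneration":     500,         # latency-optimized regeneration
}

total_us = sum(budget_ns.values()) / 1_000
print(f"{total_us:.2f} us")  # 402.56 us
```

The breakdown makes the article's point concrete: fiber propagation dominates, but the remaining line items are exactly the "hidden pockets of delay" that equipment choices can shrink from microseconds to nanoseconds.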

Electronic financial trading—in which a computer model’s interpretation of information feeds prompts automated buy and sell orders and must do so more quickly than the competition to best exploit price differences among global exchanges—remains today the most fiercely competitive of markets that are differentiated by network latency. Innovations in the technologies underlying electronic trading have reduced to mere nanoseconds the differentiation between today’s market winners and losers.

The technological advances driven by this industry figure to turn up the pressure on data-center managers and/or their service providers to shave nanoseconds of delay from transport of other applications—Internet gaming, video and business continuity, for example. Only Wavelength Division Multiplexing (WDM) solutions that have been engineered specifically to address hypersensitivity to latency will suffice for an enterprise’s most demanding LAN and SAN applications.
