Low latency for Dummies

[Image: Fiber lights]

Besides cloud computing and virtualization, low latency seems to be another buzzword of the year. But reading through all the publications can get a little confusing. While the business benefits of having a lower-latency (i.e. faster) network than your competitor are obvious and understandable, the technology behind it remains a little blurry. Let me try to explore some of the myths that exist around low-latency technology.

Myth 1: The higher the bandwidth, the lower the latency

People often confuse transport speed with transport capacity and tend to think that bigger (capacity) is better. Let's look at an analogy from the real world: no one would seriously claim that a bus carrying 100 passengers reaches its destination faster than a sports car carrying only two. The bus has a higher capacity, but that does not mean it travels at a higher speed. If you just want to get from A to B as fast as possible, you generally take the sports car. In addition, getting on and off the bus adds to the travel time, whereas the sports car passengers can take their seats very quickly. In networking terms this means that larger capacity is not necessarily faster speed – in many cases larger capacity actually means slower performance, because a higher-capacity transmission typically requires additional processing and error correction to send the signal successfully over a fiber optic link, and this can add a lot of latency.
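
To put rough numbers on this, here is a small back-of-the-envelope sketch in Python. The frame size, line rates and route length are illustrative assumptions, not figures from any particular network; the point is simply that a fatter pipe shrinks the time it takes to clock a frame onto the line, while the propagation delay over the fiber stays the same.

# Serialization delay vs. propagation delay (all numbers are assumptions).
FIBER_LATENCY_US_PER_KM = 5.0        # ~5 microseconds per km of standard fiber
FRAME_BITS = 1500 * 8                # one 1500-byte frame
DISTANCE_KM = 100                    # assumed route length

for rate_gbps in (1, 10, 100):
    serialization_us = FRAME_BITS / (rate_gbps * 1e9) * 1e6
    propagation_us = DISTANCE_KM * FIBER_LATENCY_US_PER_KM
    print(f"{rate_gbps:>3} Gbit/s: serialization {serialization_us:7.3f} us, "
          f"propagation {propagation_us:6.1f} us")

Going from 1 to 100 Gbit/s trims the serialization of a single frame from about 12 to 0.12 microseconds, while the 500 microseconds of propagation over the assumed 100 km route stay untouched; any extra processing the higher line rate requires can easily eat up that small gain.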

Myth 2: Some equipment can transport data faster over fiber than others

Although it would be nice, the maximum achievable speed of any information transmission cannot exceed the speed of light in vacuum. Since fiber optic cables have a refractive index of around 1.47, the actual speed of light inside the fiber is slower by exactly that factor. This results in a transport latency of roughly 5 microseconds per km. As long as your transport happens over the same standard fiber (there are some "faster" fiber cables deployed on a small scale), and since warp drive has not been invented yet, all transport is subject to the same delay.
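
For the curious, the arithmetic behind that figure is straightforward. A quick sketch, using the typical refractive index of 1.47 quoted above:

C_VACUUM_KM_PER_S = 299_792.458      # speed of light in vacuum
N_FIBER = 1.47                       # typical refractive index of standard fiber

speed_in_fiber_km_per_s = C_VACUUM_KM_PER_S / N_FIBER      # ~204,000 km/s
latency_us_per_km = 1 / speed_in_fiber_km_per_s * 1e6      # ~4.9 microseconds per km
print(f"{speed_in_fiber_km_per_s:.0f} km/s -> {latency_us_per_km:.2f} us per km")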

Myth 3: Optoelectrical processing in each intermediate node has no effect on latency and offers additional value in terms of service granularity and virtualization

Each time a signal goes through an optoelectrical conversion, additional latency is added to the transport. Hence, in most cases, latency can be reduced by staying in the optical domain as long as possible. Today's optical components even allow signal amplification, switching and re-routing in the optical domain.
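
A toy latency budget makes the trade-off concrete. The 10-microsecond penalty per optoelectrical (OEO) hop below is an assumption picked purely for illustration; real regeneration and switching delays vary by equipment, but the shape is always the same: every conversion stage adds a fixed cost on top of the unavoidable propagation delay.

FIBER_LATENCY_US_PER_KM = 5.0
OEO_HOP_PENALTY_US = 10.0            # assumed per-node regeneration/switching delay

def route_latency_us(distance_km: float, oeo_hops: int) -> float:
    """Propagation delay plus a fixed penalty for each optoelectrical hop."""
    return distance_km * FIBER_LATENCY_US_PER_KM + oeo_hops * OEO_HOP_PENALTY_US

print(route_latency_us(200, 0))      # all-optical 200 km route: 1000.0 us
print(route_latency_us(200, 3))      # same route regenerated at 3 nodes: 1030.0 us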

Myth 4: Electrical signal processing can improve signal latency

In most cases, electrical processing of signals (including multiplexing and the introduction of forward error correction (FEC)) adds delay. The problem gets even worse if the algorithms are not optimized for latency. The safest approach is to touch the signal as little as possible.
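
One source of that delay is easy to picture: a block-based FEC decoder has to buffer an entire codeword before it can correct it. The block size and line rate in this sketch are assumptions chosen only to show the order of magnitude; actual FEC latency depends heavily on the scheme, interleaving depth and implementation.

CODEWORD_BITS = 255 * 8 * 16         # assumed interleaved block of 16 byte-oriented codewords
LINE_RATE_BPS = 10e9                 # assumed 10 Gbit/s line rate

buffering_us = CODEWORD_BITS / LINE_RATE_BPS * 1e6
print(f"~{buffering_us:.1f} us just to collect one block, before any decoding time")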

So after discussing some of the myths, the question remains: what is the optimum solution for lowest-latency transmission? Generally, one should try to stay in the optical domain as long as possible. Perhaps more importantly, the transport solution should seamlessly adapt to the transport protocol and data rate used by the end device. If a server works best with a 1GE signal, then this is the signal that should be transported. This implies that the search for the lowest-latency network doesn't start with the transport network but with the optimum choice of interconnection protocol within the server. Last but not least, the low-hanging fruit for squeezing time out of your system is usually the optical transport network itself. Changes to the optical equipment combined with shorter fiber routes can provide significant savings – remember that every kilometre of fiber you save earns you about 5 microseconds.
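
To translate that last rule of thumb into numbers (the route lengths here are hypothetical):

FIBER_LATENCY_US_PER_KM = 5.0
OLD_ROUTE_KM, NEW_ROUTE_KM = 850, 780    # hypothetical current and shortened routes

saved_km = OLD_ROUTE_KM - NEW_ROUTE_KM
one_way_us = saved_km * FIBER_LATENCY_US_PER_KM
print(f"{saved_km} km shorter -> {one_way_us:.0f} us one way, "
      f"{2 * one_way_us:.0f} us round trip")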

To read more about low-latency optical transport, please click here.
