Let's Take a Bit of Time to Talk about Low Latency

[Image: warped clock]

When talking to customers who are interested in the lowest-latency solutions, the term "Bit Time" comes up over and over again. But it is not clear what it actually means for low latency, or whether a lower "Bit Time" is always better than a higher one. So let's explore the background and the implications here.

In general, "Bit Time" stands for the time it takes to transmit one bit at a given network data rate. At Gigabit Ethernet (= 1 Gbps data rate) the "Bit Time" is 1/(1 Gbps) = 1/10⁹ s = 1 nanosecond. In other words, it takes 1 nanosecond to transmit a bit at 1 Gbps. For higher data rates like 10GE the "Bit Time" is even shorter: just 0.1 nanoseconds = 100 picoseconds. But make no mistake - this says nothing about the actual speed of the signal - it only indicates how many bits can be transmitted per second. The speed at which those bits travel over a fiber link is still bounded by the speed of light in fiber, which is approximately 200,000 km/s.
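As a quick sanity check, the bit times above can be computed directly (a small Python sketch, not from the original post; the rates are the ones discussed in the text):

```python
# Bit time = 1 / data rate. The figures match the text:
# 1 Gbps -> 1 ns per bit, 10 Gbps -> 0.1 ns (100 ps) per bit.

def bit_time_ns(rate_bps: float) -> float:
    """Time to transmit one bit, in nanoseconds."""
    return 1e9 / rate_bps

for name, rate in [("1GbE", 1e9), ("10GbE", 10e9), ("100GbE", 100e9)]:
    print(f"{name}: {bit_time_ns(rate):g} ns per bit")
```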

So let's assume we start a race between a GbE NIC (Network Interface Card) and a 10GbE NIC over a given fiber link. The first bit of the signal would leave both NICs at the very same time. But since the "Bit Time" at 10GE is shorter, the end of the corresponding bit would leave the 10GE NIC earlier than it would leave the GbE NIC. One could also think of the bit as occupying a certain length on the transmission link. While the bit out of the 10GE NIC occupies only 2 cm, the bit out of the GbE NIC occupies 20 cm, yet their speed on the fiber is the same. So they arrive at the far end of the link at the very same time. But while the 10GE receiver has already received the bit completely, the GbE receiver is still waiting for the tail of its bit to arrive. In general the 10GE link has an advantage of 18 cm (20 cm - 2 cm) for each bit received, which translates into a time advantage of 0.9 nanoseconds (900 picoseconds) per bit.
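The "length" of a bit on the fiber follows directly from the signal speed and the bit time; a short sketch (again not from the original post, using the ~200,000 km/s fiber speed quoted above):

```python
# Physical length one bit occupies on the fiber = signal speed * bit time.
# Matches the text: 20 cm at GbE, 2 cm at 10GE, an 18 cm (~0.9 ns) advantage.

FIBER_SPEED_M_PER_S = 2e8  # ~200,000 km/s, approximate speed of light in fiber

def bit_length_cm(rate_bps: float) -> float:
    """Physical length of one bit on the fiber, in centimetres."""
    return FIBER_SPEED_M_PER_S / rate_bps * 100

gbe_cm = bit_length_cm(1e9)      # 20 cm
tenge_cm = bit_length_cm(10e9)   # 2 cm
advantage_ps = (gbe_cm - tenge_cm) / 100 / FIBER_SPEED_M_PER_S * 1e12
print(f"{gbe_cm:g} cm vs {tenge_cm:g} cm -> ~{advantage_ps:.0f} ps per bit")
```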

One would think that 0.9 nanoseconds is not much, but unfortunately a normal message does not consist of a single bit, and it is embedded inside an Ethernet frame. Normally each frame has to be received completely (including the CRC) before processing and forwarding can start. This, together with the protocol overhead, creates a delay made up of all the bits of the frame plus that overhead. For the minimum Ethernet payload size of 64 bytes, to which a protocol overhead of 22 bytes is added, this means 86 bytes = 688 bits, or 688 × 0.9 ns ≈ 620 nanoseconds of time difference. This value goes up to roughly 11 microseconds for a frame carrying the maximum Ethernet payload size of 1500 bytes.

So, are a faster transmission speed and a shorter "Bit Time" the better option when it comes to implementing low latency networks? It depends on several factors:

  1. Processing data rates before and after the transport equipment: A faster transmission speed does not help much if the client interface that feeds the link, or that receives the data, is slower. It is always the slowest element in the system that determines the overall processing speed.
  2. Amount of data: If the average message sent to the other side is small and easily fits into a minimum-sized Ethernet frame, the gain from going to higher data rates is modest. The advantage grows as the amount of data to be transported increases.
  3. Distance to be bridged: Higher data rates are more susceptible to noise and signal distortion, and the longer the transmission distance, the worse the distortion problem becomes. To bridge longer distances one has to implement error correction schemes like FEC, which always introduce additional latency because they buffer and process whole blocks of data. A typical FEC for 10GE introduces up to 30 microseconds of latency, which would completely eliminate the gain of going to higher data rates.

The best compromise today seems to be the use of 10GE data rates. 10GE is a common interface on most servers and switches today, and without FEC one can still achieve reasonable distances by using regenerator sites, which makes it possible to connect sites that are several hundred kilometers apart. For 100GE, though, this approach will definitely not work. But that's a topic for another blog post. Stay tuned …
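Point 3 above can be checked with back-of-the-envelope arithmetic (a sketch using only the figures quoted in this post: up to ~30 µs of FEC latency at 10GE versus at most ~11 µs of serialization gain per maximum-size frame):

```python
# If FEC is required, its latency dwarfs the serialization gain of
# going from GbE to 10GE, even for maximum-size (1500-byte payload) frames.

FEC_LATENCY_US = 30.0       # worst-case 10GE FEC latency quoted in the text
MAX_FRAME_GAIN_US = 11.0    # serialization advantage for a max-size frame

net_gain_us = MAX_FRAME_GAIN_US - FEC_LATENCY_US
print(f"net 10GE gain with FEC enabled: {net_gain_us} us")  # negative
```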
