How to shore up the DCI for the new economy

How is data center interconnect infrastructure evolving to meet the challenges of 5G, IoT and the post-pandemic era? Let’s explore the latest innovation in DCI networking. 
Arthur Cole
FSP 3000 TeraFlex CoreChannel

While there are still many unknowns in the post-pandemic era, one thing is certain: demand for data and services is on the rise. This means network infrastructure must expand at all levels, not merely in terms of bandwidth and throughput but in operational flexibility and overall efficiency as well.

A key part of this environment is the data center interconnect (DCI), which was already on a rapid growth trajectory before the demands of work/shop/live-from-home kicked into high gear amid all the lockdowns and quarantines. But what, if any, guiding principles should govern DCI development from here on out? And should the enterprise stress, say, bandwidth over throughput, flexibility over resilience? Or is there a way – without going over budget, of course – to accommodate all of these needs?

Bigger and better

As the old saying goes, where there’s a will there’s a way. And the fact is that even before the pandemic struck, the DCI was already undergoing a massive upgrade in preparation for the internet of things and 5G wireless services.

For one thing, DCI infrastructure is becoming much more modular and less expensive to provision with the advent of the 400ZR standard. To date, the most significant obstacle to streamlined DCI infrastructure has been the lack of a low-cost, high-density open platform for point-to-point communications between facilities. 400ZR supports these deployment models at distances of up to 120km, allowing organizations to devise best-of-breed solutions from among a wide range of vendor products, while at the same time integrating them directly into legacy SDN environments through open APIs.
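To get a feel for the density this makes possible, consider a rough back-of-envelope calculation. The spectrum width and channel spacing in the sketch below are illustrative assumptions rather than figures from any specific product or deployment.

# Back-of-envelope estimate of how much DCI capacity a single fiber pair
# could carry when filled with 400ZR channels. The usable spectrum and
# channel spacing are illustrative assumptions, not vendor specifications.

C_BAND_GHZ = 4800          # assumed usable C-band spectrum (~4.8 THz)
CHANNEL_SPACING_GHZ = 75   # assumed flexible-grid slot per 400ZR carrier
CHANNEL_RATE_GBPS = 400    # net payload per 400ZR channel

channels = C_BAND_GHZ // CHANNEL_SPACING_GHZ
total_tbps = channels * CHANNEL_RATE_GBPS / 1000

print(f"{channels} channels -> {total_tbps:.1f} Tbit/s per fiber pair")
# Example output: 64 channels -> 25.6 Tbit/s per fiber pair

Even under these simplified assumptions, a single fiber pair of pluggable 400ZR optics reaches tens of terabits per second, which is what makes the economics so attractive for metro DCI.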

ADVA has already begun demonstrating the efficacy of 400ZR with partners like Acacia and Inphi. In April, the three successfully demonstrated a 120km amplified link between the ADVA FSP 3000 open line system and pluggable 400ZR QSFP-DD transceivers from Acacia and Inphi. The test showed that multiple vendor solutions can support three-way line-side interoperability in real-world scenarios in which a 400Gbit/s WDM transport is placed next to test channels and fully loaded spectrum. This will allow organizations to implement DCI infrastructure at a fraction of the cost of current coherent transport systems, while at the same time enabling switch and router companies to offer the same density for coherent DWDM and client optics within a single chassis.

The next step

Of course, the DCI is no different from any other network in that there is no such thing as enough bandwidth. So even as 400Gbit/s solutions enter the channel, organizations are already looking forward to 800Gbit/s, which could be here sooner rather than later. According to 650 Group, spending on 800Gbit/s switching and routing for DCI applications will top $10 billion in 2025, driven in large part by new generations of onboard optics and ASICs capable of speeds up to 104.2Tbit/s. Naturally, this growth will be led by the hyperscale sector, although it will also encompass telecoms, colocation providers, traditional enterprises and others looking to provide backend support for increased edge traffic. 

Higher bandwidth often comes with a distance penalty, however. New solutions like the TeraFlex CoreChannel overcome this problem by using a 140GBd sub-carrier to push transmission links to 1,600km. This provides the lowest cost per bit, while at the same time supporting 400, 100 and 10GbE operations, allowing providers to better match bandwidth to traffic requirements and enabling a smooth transition to higher data speeds.
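As a rough illustration of that bandwidth-matching point, the sketch below estimates how many high-capacity line channels a hypothetical mix of client services would need. The 800Gbit/s line rate and the client counts are assumptions chosen purely for the example, not a reflection of any particular network.

# Illustrative sketch of matching client traffic to line capacity: given a
# hypothetical mix of client services, estimate how many line channels are
# needed. The line rate and client mix are assumptions for the example.

import math

LINE_RATE_GBPS = 800  # assumed per-wavelength line capacity

# hypothetical client demand: (service rate in Gbit/s, number of services)
clients = [(400, 3), (100, 6), (10, 40)]

demand_gbps = sum(rate * count for rate, count in clients)
wavelengths = math.ceil(demand_gbps / LINE_RATE_GBPS)

print(f"Total client demand: {demand_gbps} Gbit/s")
print(f"Line channels needed at {LINE_RATE_GBPS} Gbit/s: {wavelengths}")
# Example output: 2200 Gbit/s of demand fits on 3 line channels

The same mixed bundle of 400, 100 and 10GbE services would need far more wavelengths at lower line rates, which is why the ability to aggregate diverse client traffic onto fewer, faster channels translates directly into a lower cost per bit.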

Just as a PC is of marginal value unless it is connected to other PCs, so too is a data center of limited value without a DCI. The digital ecosystem is simply too vast for any one compute entity to provide effective support, even at hyperscale level.

Only by collectively managing data loads can the promise of today’s interconnected services be fulfilled. And for that you need top-notch long-haul networking between data centers.
