Just as early man realized he could alloy copper and tin to make bronze, today’s data-driven hominid is looking at virtualization, the cloud and advanced fiber networks to imagine an entirely new datacenter.
Instead of simply networking discrete facilities over long-haul trunks, why not disaggregate the various data resources, right down to the server and disk drive if necessary, and compile completely virtual data environments across large geographical areas? Not only would this enable highly specialized data architectures and infrastructure, but it would do wonders for load distribution, resource utilization and power management.
Small wonder, then, that so much attention is being paid to the data center interconnect (DCI). According to Ovum, the DCI market is projected to grow from $2.5 billion last year to $4.2 billion in 2019, a 10.5 percent compound annual growth rate. Much of this activity is happening at the carrier level, but an increasing portion is coming from the Internet content provider (ICP) market, which is gaining at about 12 percent per year.
The biggest limiting factor in the disaggregated DCI, of course, is latency. The farther data has to travel, the longer it takes to get there, even at light speed. The problem isn’t just building up bandwidth, says Datacenter Dynamics’ Michael P. Kassner, but essentially recreating a board-level interconnect over the wide area network. The emerging field of silicon photonics offers the best hope for a solution, but the technology is still immature for an architecture that ideally should span board-, component-, rack- and data-center-level infrastructure.
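To put the latency problem in perspective, here is a minimal back-of-envelope sketch in Python. The figures are illustrative assumptions, not measurements of any product: light in fiber travels at roughly 200,000 km/s (about 5 microseconds per kilometer one way), and a few hundred nanoseconds stands in for a board-level interconnect hop.

```python
# Back-of-envelope propagation-delay sketch for a disaggregated DCI.
# Assumptions (illustrative only): light in fiber travels at ~200,000 km/s,
# i.e. roughly 5 microseconds per kilometer one way, and a board-level
# interconnect hop costs on the order of 500 nanoseconds.

FIBER_SPEED_KM_PER_S = 200_000   # ~2/3 of c, typical for silica fiber
BOARD_HOP_NS = 500               # assumed board-level interconnect latency

def one_way_latency_us(distance_km: float) -> float:
    """One-way propagation delay over fiber, in microseconds."""
    return distance_km / FIBER_SPEED_KM_PER_S * 1_000_000

for km in (1, 10, 100, 1_000):
    one_way = one_way_latency_us(km)
    rtt = 2 * one_way
    ratio = (one_way * 1_000) / BOARD_HOP_NS  # equivalent number of board hops
    print(f"{km:>5} km: one-way {one_way:8.1f} us, RTT {rtt:8.1f} us, "
          f"~{ratio:,.0f}x an assumed {BOARD_HOP_NS} ns board hop")
```

Under these assumptions, even a metro-scale 100 km link adds roughly a millisecond of round-trip delay, thousands of times what a board-level interconnect sees, which is exactly the gap that silicon photonics and smarter DCI architectures are trying to close.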
At the moment, many datacenter operators, outside of hyperscalers like Google and Facebook, expect to gain instant disaggregation capabilities merely by deploying 100 Gbit/s optical links on their wide area or campus networks, according to Network Matter’s Rick Talbot. The problem is that most of these links are built for point-to-point transfers of large data sets between discrete facilities. True disaggregation, by contrast, distributes traffic across multiple transport wavelengths and is typically characterized by small data sets or streaming media. In that light, the data center interconnect should function more like a packet-switched network, ideally over an optical transport platform.
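The gap between those two traffic profiles is easy to quantify. The sketch below uses illustrative assumptions only: a single 100 Gbit/s wavelength, a 10 TB bulk replication job, and serialized 4 KB remote reads over a path with a 10 ms round trip. It shows why large point-to-point transfers are bandwidth-bound while disaggregated small-object traffic is dominated by round-trip latency, which is what pushes the DCI toward a packet-switched model.

```python
# Compare two DCI traffic profiles on the same 100 Gbit/s wavelength
# (all numbers are illustrative assumptions, not measurements).

LINK_GBPS = 100      # assumed line rate of one wavelength
RTT_S = 0.010        # assumed round-trip time over a long-haul path (~10 ms)

# Profile 1: point-to-point bulk transfer of a 10 TB data set.
bulk_bytes = 10 * 10**12
bulk_seconds = (bulk_bytes * 8) / (LINK_GBPS * 10**9)
print(f"10 TB bulk transfer: ~{bulk_seconds / 60:.1f} minutes, bandwidth-bound")

# Profile 2: one million serialized 4 KB remote reads (disaggregated storage).
reads = 1_000_000
read_bytes = 4 * 1024
serial_seconds = reads * (RTT_S + (read_bytes * 8) / (LINK_GBPS * 10**9))
print(f"1M serial 4 KB reads: ~{serial_seconds / 3600:.1f} hours, latency-bound")

# The per-read transmission time is negligible next to the RTT, so adding
# raw bandwidth barely helps; parallelism, locality and packet-style
# switching across wavelengths are what matter for this profile.
```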
This is exactly where most of the DCI development activity is centered. Going forward, expect development of disaggregated optical data center interconnects to focus on four key areas:
- The spine switch, which may or may not have integrated DWDM optics (a rough capacity sketch follows this list)
- Terminal equipment, which will most likely contain the high-speed transport interfaces
- Open optical link systems
- Network management systems
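For a rough sense of the scale the first item implies, the sketch below works out the aggregate traffic a spine switch or open line system would have to handle. The channel count and per-wavelength rate are assumptions for illustration, based on a common C-band DWDM plan of 96 channels at 100 Gbit/s each.

```python
# Rough aggregate-capacity sketch for a DWDM-fed DCI (assumed figures).

CHANNELS_PER_FIBER_PAIR = 96   # assumed C-band channel plan
GBPS_PER_WAVELENGTH = 100      # assumed per-wavelength line rate

def fiber_pair_capacity_tbps(channels: int = CHANNELS_PER_FIBER_PAIR,
                             gbps: int = GBPS_PER_WAVELENGTH) -> float:
    """Aggregate capacity of one fiber pair, in Tbit/s."""
    return channels * gbps / 1_000

for pairs in (1, 4, 8):
    total = pairs * fiber_pair_capacity_tbps()
    print(f"{pairs} fiber pair(s): ~{total:.1f} Tbit/s of DCI capacity to switch")
```

Even a single fiber pair under these assumptions carries nearly 10 Tbit/s, which is why the switching, terminal and management layers above it are getting so much attention.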