Network requirements across the distributed enterprise are becoming more complex. From server-to-server mesh fabrics to long-haul virtual networks, applications are demanding more speed, more flexibility and more reliability.
At the same time, enterprises, cloud providers and even metro carriers are tasked with driving down the cost of network expansion and operations, which is accomplished largely by streamlining infrastructure and making more use of available bandwidth.
In the middle of all this is the data center interconnect (DCI). Not only does it provide the vital link between remote data sites and across campus data environments, but it is also increasingly finding its way inside the warehouse-sized facilities of the emerging hyperscale industry. In this regard, it is a prime target for improved bandwidth flexibility and an overall architectural slim-down.
The key challenge here is to enhance the data provider’s ability to handle a wide variety of data streams while still enabling the rapid scalability that rising and falling workloads require. According to Ovum Research, video is now a mainstream application in both the consumer and B2B markets, which puts pressure on the DCI to provide wideband links at a moment’s notice should one data center have to retrieve a large video file from another. At the same time, countless other small-volume requests are pinging back and forth as applications like e-commerce, web serving and even industrial control churn away. This trend will only become more pronounced as big data and the Internet of Things push data processing resources out to the network edge.
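To put those wideband demands in perspective, here is a quick back-of-the-envelope calculation. The file size and goodput factor are illustrative assumptions, not figures from the article, but they show why a burst transfer that is painful at 10Gbit/s becomes trivial at 100Gbit/s:

```python
# Back-of-the-envelope: how long does a large video asset take to move
# between data centers at various DCI link rates? The file size and
# goodput factor below are hypothetical assumptions for illustration.

FILE_SIZE_GB = 500        # hypothetical video asset, in gigabytes
GOODPUT_FACTOR = 0.95     # assume ~95% of the line rate is usable

for rate_gbps in (10, 40, 100, 400):
    goodput_gbps = rate_gbps * GOODPUT_FACTOR
    seconds = (FILE_SIZE_GB * 8) / goodput_gbps   # gigabytes -> gigabits
    print(f"{rate_gbps:>3} Gbit/s link: {seconds / 60:5.1f} minutes")
```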
Indeed, without greater bandwidth flexibility, the DCI could suffer the same problems that the public Internet is having right now: increasing latency, spotty performance and poor reliability. It’s almost ironic, notes Teridion’s Dave Ginsburg, that while the Internet can now practically read our minds to figure out what we want, it still takes forever to load a simple website due to latency between the site host and the ad server. Greater visibility into network bottlenecks and increased use of real-time data gathering and analytics will help in this regard eventually, but is there anything to be done in the short term to provide the kind of bandwidth flexibility that will speed things up for everybody?
While a more flexible DCI won’t solve all of the Internet’s woes, it will address the twin concerns of faster throughput and more efficient utilization of resources. At the moment, 100Gbit/s connectivity is all the rage, which makes sense given the increased data loads the enterprise is experiencing. But not every link needs to be that wide, which is why many existing DCI platforms contain a mix of 10, 40, 100 and even 400Gbit/s ports (with 25 and 50Gbit/s options perhaps not far behind) – the better to accommodate increasing scalability requirements.
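As a minimal sketch of what that mixed-port approach buys, the snippet below matches each traffic demand to the smallest port speed that can carry it. The port speeds mirror the mix mentioned above; the demand figures and the matching logic are hypothetical examples, not any vendor’s provisioning algorithm:

```python
# Minimal sketch: assign each traffic demand the smallest port speed
# that can carry it, so large ports are not wasted on small flows.
# Demand figures are made-up examples for illustration.

PORT_SPEEDS_GBPS = [10, 25, 40, 50, 100, 400]   # sorted ascending

def smallest_fitting_port(demand_gbps: float) -> int:
    """Return the smallest port speed that satisfies the demand."""
    for speed in PORT_SPEEDS_GBPS:
        if speed >= demand_gbps:
            return speed
    raise ValueError(f"No single port can carry {demand_gbps} Gbit/s")

for demand in (7.5, 32.0, 88.0, 260.0):          # hypothetical workloads
    port = smallest_fitting_port(demand)
    print(f"{demand:6.1f} Gbit/s demand -> {port} Gbit/s port")
```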
But this still does not address the flexibility issue. Wouldn’t it be better if the network provider could easily configure large ports as multiple small ones? This would allow organizations to scale from 10 to 40 to 100Gbit/s without duplicating hardware or driving up both capital and operating costs. At the moment, only ADVA Optical Networking offers this capability, with the new MicroMux™ module on the FSP 3000 CloudConnect™ platform. In this way, a single port can address the needs of short-, intermediate- and long-range interconnects using either single-mode or multimode fiber. At the same time, it can seamlessly push 10, 40 and 100Gbit/s streams over CloudConnect™’s 400Gbit/s line card without increasing the footprint of the overall system.
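The breakout idea can be made concrete with a simple capacity model. This is only a sketch of the principle described above: the 400Gbit/s line rate comes from the article, but the client mix is a hypothetical example, and the code does not represent ADVA’s actual MicroMux™ configuration interface:

```python
# Sketch of the breakout principle: a single high-rate line card can
# carry a mix of lower-rate client streams as long as their combined
# rate stays within the line rate. Capacity model only; not a real
# vendor API.

LINE_RATE_GBPS = 400

def fits_on_line_card(client_rates_gbps: list[int]) -> bool:
    """True if the client streams fit within the line card's capacity."""
    return sum(client_rates_gbps) <= LINE_RATE_GBPS

mix = [100, 100, 100, 40, 40, 10, 10]   # 400 Gbit/s: exactly fills the card
print(fits_on_line_card(mix))           # True
print(fits_on_line_card(mix + [10]))    # False: 410 > 400 Gbit/s
```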
As the DCI gains in importance, the demands placed upon it will mount. Traditionally, the answer to any networking problem – whether it was throughput, bandwidth, latency or the like – was to simply provision more resources. By building DCI infrastructure with bandwidth flexibility in mind, providers gain the ability to meet the emerging challenges of a distributed data environment without going overboard on space, power consumption or cost.