Expand your bandwidth, but manage it as well


The enterprise’s insatiable demand for bandwidth shows no signs of letting up any time soon. In fact, if current trends continue, it’s about to expand dramatically both inside the data center and on wide area infrastructure.

But adding bandwidth is not simply a matter of widening the pipe. More bandwidth brings more complexity: higher throughput demands better layering and segmentation to accommodate workloads of various types and sizes. And it all has to be done with an eye toward the future, so you don't end up repeating the process a few short years from now.

According to Cisco’s latest Global Cloud Index, data center traffic is on pace to triple between 2016 and 2021, jumping from an already hefty 6.8ZB per year to a staggering 20.6ZB per year. At the same time, workloads and compute instances are on the rise, nearly doubling in the data center and tripling in the cloud. More important for networking, however, is the fact that workload and instance densities (workloads and compute instances per physical server) will climb from 2.4 to 3.8 in the data center and from 8.8 to 13.2 in the cloud. This means bandwidth needs are increasing not just overall, but between and within servers as well.

Small wonder, then, that optical transceivers in the 100Gbit/s range and up are making their way into enterprise networks. Orbis Research reports that the 400Gbit/s optical market is on pace to top USD 22.6 billion in the next five years, driven by a combination of internet traffic, online commerce, streaming video, social networking and the rising tide of cloud and SaaS platforms. These devices will not only be smaller, less expensive and less power-hungry than today’s form factors, but also increasingly intelligent, capable of managing and orchestrating the large volumes of data that pass through them every second. Expect them to be more modular as well, offering a higher degree of flexibility than today’s discrete optical components.

The main challenge facing most organizations today, however, is not adding bandwidth per se, but figuring out how much bandwidth they will need going forward. TechTarget’s Jessica Scarpatti says the process should begin with the right questions, namely: what apps are running, and what are their service-level requirements? While it may be tempting to estimate bandwidth by the number of users, this can skew the results, because bandwidth consumption is a function of how the network is being used, not simply how many people are on it. A network analyzer can determine how many bytes a given app sends per second, which can then be used to calculate the maximum number of simultaneous users that app can support.
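As a rough illustration of that arithmetic, here is a minimal Python sketch. The per-app throughput figures, the app names and the headroom factor are hypothetical placeholders, standing in for whatever a network analyzer actually measures on your network.

```python
# Minimal capacity-planning sketch, assuming per-user app throughput
# has been measured (e.g. by a network analyzer) in bytes per second.

def max_simultaneous_users(link_capacity_bps: float,
                           app_bytes_per_sec: float,
                           headroom: float = 0.2) -> int:
    """Estimate how many users an app can support on a link,
    keeping a fraction of capacity in reserve for bursts."""
    usable_bps = link_capacity_bps * (1.0 - headroom)
    per_user_bps = app_bytes_per_sec * 8  # analyzer reports bytes/s
    return int(usable_bps // per_user_bps)

def required_bandwidth_bps(apps: dict, headroom: float = 0.2) -> float:
    """Size a link from expected concurrent users per app.
    `apps` maps app name -> (bytes/sec per user, concurrent users)."""
    demand_bps = sum(bps * 8 * users for bps, users in apps.values())
    return demand_bps / (1.0 - headroom)

if __name__ == "__main__":
    # Hypothetical measurements: VoIP ~12 kB/s, video ~625 kB/s per user.
    apps = {"voip": (12_000, 150), "video": (625_000, 40)}
    print(f"Link needed: {required_bandwidth_bps(apps) / 1e9:.2f} Gbit/s")
    print("VoIP users on a 1 Gbit/s link:",
          max_simultaneous_users(1e9, 12_000))
```

The same calculation can be run in either direction: size a new link from expected concurrent users, or work out how many users an existing link can carry before an app’s service-level requirements start to suffer.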

As more workloads shift to the cloud, it’s important for the enterprise to keep a handle on its WAN bandwidth consumption as well. But as Comcast’s Kevin O’Toole noted recently, most MPLS services provide poor visibility into this crucial metric, leaving the enterprise with only a limited ability to track down and correct bottlenecks. Along with optical long-haul connectivity, a key upgrade is the adoption of SDN and NFV architectures, which can be built around application-centric models rather than the raw movement of bits and bytes. With centralized management and end-to-end visibility into app performance, the enterprise can take a more holistic approach to traffic, resolving congestion and conflicts in real time and in many cases proactively circumventing them altogether.
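The kind of check that visibility enables can be as simple as polling link counters and flagging anything running hot. The sketch below is illustrative only: the controller URL and JSON field names are hypothetical, standing in for whatever per-link statistics a given SDN controller actually exposes.

```python
# Illustrative congestion check against a centralized controller's
# per-link statistics. Endpoint and field names are assumptions.

import json
import urllib.request

CONTROLLER_STATS_URL = "http://sdn-controller.example/api/link-stats"  # hypothetical
UTILIZATION_ALERT = 0.8  # flag links running above 80% of capacity

def congested_links(stats_url: str = CONTROLLER_STATS_URL) -> list:
    """Return the names of links whose current throughput exceeds
    the alert threshold, based on the controller's latest counters."""
    with urllib.request.urlopen(stats_url) as resp:
        # Assumed shape: [{"name": ..., "bps": ..., "capacity_bps": ...}, ...]
        links = json.load(resp)
    return [link["name"] for link in links
            if link["bps"] / link["capacity_bps"] > UTILIZATION_ALERT]

if __name__ == "__main__":
    for name in congested_links():
        print(f"Congestion warning: {name}")
```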

Feeding the bandwidth beast will likely be a crucial function for the enterprise going forward. But as with any diet, there are right and wrong ways to eat. Too much raw capacity with little or no planning and management leads to bloat and unnecessary cost, while too little leaves you without the resources to deliver quality service.

Fiber optics, both within the data center and across the cloud, can provide the bandwidth you need, but it will take a certain amount of effort to make sure it is used efficiently.
