The need to change the data center network from a hierarchical design to a leaf-spine or more meshed one has arisen from the ever-increasing demand for compute and storage access and from the use of server virtualization. Now that many applications can run on one server, virtual machines (applications) need to move to another server quickly if that server becomes overloaded. Four hops through the network is not quick, so network architecture is changing to accommodate the mobility of VMs. What used to be a standard client-server three-tier network is starting to look like an HPC cluster.
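As a rough illustration of why this east-west traffic favors a flatter fabric, the sketch below compares the worst-case path between two servers in a small three-tier hierarchy and in a two-tier leaf-spine fabric. The topology sizes, device names and the convention of counting link traversals are assumptions made purely for illustration; the point is simply that the leaf-spine path is shorter and the same for every pair of racks.

```python
# Minimal sketch: worst-case server-to-server path length in a three-tier
# hierarchy vs. a two-tier leaf-spine fabric. Topologies are tiny, made-up
# examples; "hops" here counts link traversals between the two servers.
from collections import deque

def build_three_tier():
    """Two pods, each with an access and an aggregation switch, joined by one core switch."""
    return [
        ("serverA", "access1"), ("access1", "agg1"), ("agg1", "core"),
        ("core", "agg2"), ("agg2", "access2"), ("access2", "serverB"),
    ]

def build_leaf_spine():
    """Two leaf switches, each uplinked to both spines; any leaf reaches any other leaf via one spine."""
    return [
        ("serverA", "leaf1"), ("serverB", "leaf2"),
        ("leaf1", "spine1"), ("leaf1", "spine2"),
        ("leaf2", "spine1"), ("leaf2", "spine2"),
    ]

def hops(edges, src, dst):
    """Breadth-first search returning the number of links traversed from src to dst."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

if __name__ == "__main__":
    print("three-tier:", hops(build_three_tier(), "serverA", "serverB"), "link hops")  # 6
    print("leaf-spine:", hops(build_leaf_spine(), "serverA", "serverB"), "link hops")  # 4
```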
With the demand for server density comes blade servers, and while they have not been adopted as quickly as anticipated, their installation rate will increase over the next five years. The use of blade servers will help facilitate the mobility of VMs as well as the transition to a software-defined networking (SDN) model. With SDN, the idea of top-of-rack (ToR) or end-of-row (EoR) switching architectures will become obsolete, and the SDN switch controller will simply be incorporated into the software. However, full implementation, while on the horizon, will not be realized rapidly; in the interim, perhaps five to seven years, we still see healthy growth of both ToR and EoR architectures in the data center.
Data center Ethernet server connections are currently transitioning from Gigabit Ethernet to 10G, which in turn is pushing the access, aggregation/distribution and core switches to 40G and 100G. In 2014, the majority of data center servers are still connected with 1G Ethernet, but in 2015 the majority will be 10G. SFP optical modules are used for Gigabit Ethernet connections to servers, though most of those connections will remain copper RJ45. As servers move to 10G, the majority of connections will remain copper, using either RJ45 over CAT6A or SFP+ direct-attach copper (DAC) cables.
With Ethernet data rate progression fueled by server network connection upgrades, and with access switches moving closer to servers in ToR or EoR configurations, the aggregation/distribution portion of the network is transitioning from copper to fiber. This phenomenon actually started even with 1G connections at the servers, because 10GBASE-T switch ports with 100m reach (or even more than 10m) took so long to materialize. But now, with 10GBASE-T or SFP+ direct-attach copper (twinax) connections at the server, the uplinks are either multiple 10GBASE-SR links or 40GBASE-SR4, both over laser-optimized multimode fiber (LOMF). This is where fiber will gain most of its momentum. While these links will be mostly 10G, or multiples of 10G, to start, by 2019 there will be a healthy market for 40G connections in this part of the network.

Data center 10G optical transceivers will use the SFP+ form factor. CFP and QSFP+ will be used for 40G, but QSFP+ is expected to take over in the long term. For 100G optical transceivers there are six form factors: CXP, CFP, CFP2, CFP4, CPAK and QSFP28. Since Cisco owns and makes the CPAK, it will get a large share of the market over the next five years; however, the QSFP28 is expected to dominate switch connections from other equipment manufacturers. In fact, many of the top transceiver manufacturers have decided to support only certain variants of these form factors because they see the others as transient products. The table below shows the plan.

Short-reach optical variants will continue to dominate the data center, since the majority of connections remain shorter than 50m. However, there is still a need for a cost-effective option for the few links that are longer than 150m. Four MSAs are now vying for this space: the 100G CLR4 Alliance, CWDM4, OpenOptics and PSM4. This reflects how the vendor community is positioning itself to address this part of the market. Which solution will ultimately win is completely up in the air at this point, because none has shown a compelling cost advantage over the existing LR4 variant or over the others.
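To make the uplink arithmetic concrete, the short sketch below works through an oversubscription calculation for a hypothetical ToR switch; the port counts (48 x 10G server-facing ports against a handful of 40G uplinks) are illustrative assumptions rather than figures from this report.

```python
# Hypothetical ToR uplink sizing: 48 x 10G server ports fed by N x 40G (or 10G) uplinks.
# All port counts are illustrative assumptions, not data from the text.

def oversubscription(server_ports: int, server_gbps: float,
                     uplink_ports: int, uplink_gbps: float) -> float:
    """Ratio of southbound (server-facing) capacity to northbound (uplink) capacity."""
    return (server_ports * server_gbps) / (uplink_ports * uplink_gbps)

if __name__ == "__main__":
    # 48 x 10G server ports (10GBASE-T or SFP+ DAC) with 6 x 40GBASE-SR4 uplinks -> 2:1
    print(oversubscription(48, 10, 6, 40))   # 2.0
    # The same northbound capacity built from discrete 10GBASE-SR uplinks needs 24 x 10G ports.
    print(oversubscription(48, 10, 24, 10))  # 2.0
```

Either configuration yields the same 2:1 ratio in this toy example; the 40G uplinks simply carry it on far fewer switch ports, which is one reason the aggregation layer is moving to 40G as server connections reach 10G.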
Another emerging trend is the development of a 25G connection from the ToR (or, for that matter, EoR) switch to the server. A call for interest (CFI), spearheaded by Microsoft, is now before the IEEE 802.3 working group. We believe this will take hold in large Internet Data Centers (IDCs) within the next five years. In fact, Microsoft, Mellanox, Arista Networks, Broadcom and Google recently formed a 25G Ethernet Consortium to move this along.