Communication service providers will remember the days when service innovation cycles took many months or even years. The process was long and troublesome, starting with the operator's idea for a new service, followed by detailed technical discussions with the network supplier, feasibility studies and estimations of the related development effort. Contracts for feature extensions were signed. As new hardware became available, system integration and testing had to be performed. Finally, a new service could be launched. By this stage, so much time had passed that the conditions that made the service attractive in the first place might no longer exist.
A brave new world of software
Today things are a lot better. With NFV and SDN, the separation of hardware and software has fundamentally changed the rules of service innovation. Open servers are deployed in the network, and new services can be introduced as new software applications. Thanks to this openness, the software no longer has to come from the hardware supplier. A global market of independent software vendors (ISVs) competes to provide solutions for the immediate roll-out of new services. The previously lengthy and expensive innovation process has been replaced by loading some trial software and having it evaluated by a friendly customer. What a change for the service provider!
An NFV infrastructure makes life much easier. Open servers, open switches and software on top can provide legacy and future network functions. Innovation in hardware is now all about performance improvement while service innovation happens in the software layer.
But is this the end-game architecture?
Back to hardware
Going forward, I expect that performance optimization will trigger additional architectural changes. Resource-intensive network functions might, over time, move back into hardware, becoming physical network functions.
Let's look at a typical example: a service provider interested in rolling out security as a service. If the service provider operates an NFV/SDN network, this can easily be done by introducing software-based encryption solutions, quickly extending the offering with a high-value new service. However, the encryption software eats up compute resources and limits the ability to add further revenue-generating services. As this security function becomes a general requirement, it's quite likely that, over time, it will migrate back into hardware and become a physical network function again.
This re-allocation of software functions into hardware will happen at the sites with the highest cost pressure and the tightest limits on scalability, hence primarily at the network edge.
In short, while service innovation previously required a time-consuming and expensive hardware development process, in the future new services will be introduced as software applications. Resource-intensive functions, however, will over time move out of the virtual domain and return as physical network functions. Given the cost pressure at high-volume edge deployments, an efficient combination of physical and virtual network functions will become essential at the network edge.
Edge talk
If you’re interested in this topic, you might want to join the panel session, Network edge: What is the opportunity for communication service providers and how should they address this? It will be part of the Intel® Network Builders workshop at the Network Virtualization Europe event in Madrid from May 22 to 24.