After a decade of watching the cloud transform the data center world, the telecom and mobile industries are ready to go virtual as well. The mad crush of “on-demand demand” and new paradigms such as the Internet of Things (IoT) are overwhelming telecom and mobile organizations already suffering from low revenue growth and escalating costs. Virtualization of the carrier-class network is the best strategy for restoring profitability and future growth in telecom and mobile. However, virtualization by itself is not enough. Virtualized solutions introduce a new set of complexities, both in understanding what is happening in the network and in integrating with existing IT systems. Big data analytics can help with both problems. Let’s take a look at how virtualization will be used, and how big data analytics can help.
Progressive communication service providers (CSPs) and mobile operators are starting to apply virtualization technologies to their networks to lower costs and drive new revenue. Moving to a virtual networking environment allows the use of common, lower-cost hardware platforms, best-of-breed software functions and new software development tools and practices. All of these empower CSPs to drive rapid, automated service creation, activation and assurance. Done well, virtualization can significantly reduce costs while providing tremendous upside through new revenue generation and higher profit margins. Network Functions Virtualization (NFV), in particular, provides a means to replace specialized hardware appliances, such as routers, with software running on commercial off-the-shelf (COTS) servers.
Before we continue with how virtualization can help, let’s take a quick look at where we are starting. Figure 1 shows how a typical managed VPN service would be implemented using physical appliances (routers in this case).
Figure 1. Today’s VPN Service Using Routers
You can see there is a direct correspondence between the managed service and the underlying physical layer. Straightforward tools such as ping and traceroute can be used to locate faults in the physical layer.
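For instance, a minimal sketch of that kind of physical-layer check might look like the following. The hop addresses are hypothetical placeholders, and the ping options assume a Linux host:

```python
#!/usr/bin/env python3
"""Minimal fault-isolation sketch: ping each hop along a (hypothetical) VPN path."""
import subprocess

# Hypothetical addresses of the routers along the physical path in Figure 1.
PATH = ["192.0.2.1", "192.0.2.2", "192.0.2.3"]

def reachable(host: str) -> bool:
    """Return True if a single ICMP echo request to the host succeeds."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            stdout=subprocess.DEVNULL)
    return result.returncode == 0

for hop in PATH:
    print(f"{hop}: {'up' if reachable(hop) else 'DOWN'}")
```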
Virtualization, abstraction and cloud technology complicate the relationship between the service and its supporting infrastructure. By adding a layer of virtualization we decouple the end-user service from the underlying physical infrastructure of compute servers and networking, as shown in Figure 2.
Figure 2. Virtualized VPN Service Using NFV and Servers
Decoupling simplifies the creation and delivery of advanced services. New services can be created and delivered without concern for the particulars of a given server or location. Instead, service developers can capture generic requirements (e.g., processor cores, memory, storage and network bandwidth) that must be satisfied when a service is instantiated via orchestration.
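As a sketch, those generic requirements could be captured in a simple template that the orchestrator checks against a candidate host's free capacity at instantiation time. The class and field names below are illustrative rather than any particular orchestrator's schema:

```python
from dataclasses import dataclass

@dataclass
class VnfRequirements:
    """Generic resource requirements captured at service design time (illustrative fields)."""
    vcpu_cores: int
    memory_gb: int
    storage_gb: int
    bandwidth_mbps: int

# A virtual router function described without reference to any particular server or site.
vrouter = VnfRequirements(vcpu_cores=4, memory_gb=8, storage_gb=40, bandwidth_mbps=1000)

def can_host(free: VnfRequirements, required: VnfRequirements) -> bool:
    """At instantiation, the orchestrator compares a host's free capacity against the template."""
    return (free.vcpu_cores >= required.vcpu_cores
            and free.memory_gb >= required.memory_gb
            and free.storage_gb >= required.storage_gb
            and free.bandwidth_mbps >= required.bandwidth_mbps)
```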
Another complication of virtualization is the abstraction and hiding of details. How do you perform service-level agreement (SLA) assurance, which requires information from all layers? How do you address network issues? How do you enable dynamic scaling? These actions require insight into the end-user service as well as the virtual and physical resources employed to deliver it.
Then, there’s the question of control. In a purely physical system, it is easy enough to connect the needs of a service to the available physical resources. This correlation is more complicated when resources are virtualized. To provide efficient management and control of virtualized services, real-time network and service analytics must be leveraged and fed back into orchestration decisions. At Overture we refer to this as “actionable intelligence”.
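At its core, that correlation is a join between the service, virtual and physical views of the network. The sketch below shows the idea with hypothetical identifiers and metrics; in a real deployment they would come from the orchestrator, the virtual infrastructure manager and the monitoring feed:

```python
# Which customer service runs on which virtual functions, and which physical
# server hosts each of them (all names and numbers are hypothetical).
service_to_vnfs = {"vpn-cust-42": ["vfw-7", "vrouter-3"]}
vnf_to_host     = {"vfw-7": "server-a", "vrouter-3": "server-b"}
host_metrics    = {"server-a": {"cpu_util": 0.62}, "server-b": {"cpu_util": 0.91}}

def physical_footprint(service_id: str) -> dict:
    """Join the layers so a service-level question can be answered with physical-layer data."""
    return {vnf: host_metrics[vnf_to_host[vnf]] for vnf in service_to_vnfs[service_id]}

print(physical_footprint("vpn-cust-42"))
# {'vfw-7': {'cpu_util': 0.62}, 'vrouter-3': {'cpu_util': 0.91}}
```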
A recent Infonetics Research report, “The Evolution of SDN and NFV Orchestration” (www.infonetics.com), suggests a solution to this disconnection problem. Figure 3 is a diagram from Infonetics that shows how orchestration and control can be used to deliver advanced services over generic network hardware and COTS compute servers.
Figure 3. Using Analytics and Feedback to Control Virtualized Services
You can see on the right side of the diagram that data and analytics are required to provide feedback and to correlate the behavior of the physical and virtual resources. To fully realize this vision, we need to understand the implications for data collection, storage and analytics. Following are some examples of services or applications, along with how big data analytics can help.
Dynamic Scaling
NFV provides a path to elastic services by making use of horizontally and vertically scalable implementations of network functions. However, the following conditions must be met for a deployable service:
- Policy: The customer has signed up for and will pay for a dynamically-scaled service.
- Resources: There is adequate bandwidth and compute power to support a scaled-up service.
- Correlation: There must be a correlation between the service, virtual and physical layers to identify the resources currently in use as well as the relevant policies concerning scaling.
- SLA Measurement: In order to trigger a scale up or scale down, there must be sufficient data available from the proper sources and over the requisite time period.
- Scaling Application: There must be a control application that can access the raw data about policies, resources and service measurements and use the correlated data to make scaling decisions. Such analysis must include time-filtering and hysteresis to prevent resource flapping (i.e., rapidly repeated scale up and scale down operations). Analysis could also include predictive analytics that take into account long-term trends as well as repetitive behaviors based on time of day.
Big data analytics can help to meet the conditions above by providing a means to collect and store data from various sources such as policy engines, resource pools, and network performance management systems. The data is then correlated and provided to the control application for use in determining if and when to scale up or down.
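As an illustration of the scaling application described above, the sketch below decides whether to scale up or down from a window of correlated utilization samples; the gap between the two thresholds acts as a hysteresis band to prevent resource flapping. The window size and thresholds are illustrative, not recommendations:

```python
from collections import deque
from statistics import mean

class ScalingController:
    """Sketch of a scale-up/scale-down decision with time filtering and hysteresis."""

    def __init__(self, window=12, up_threshold=0.80, down_threshold=0.30):
        self.samples = deque(maxlen=window)   # e.g., one correlated sample per interval
        self.up_threshold = up_threshold
        self.down_threshold = down_threshold

    def decide(self, utilization: float) -> str:
        """Feed one utilization sample; return 'scale_up', 'scale_down' or 'hold'."""
        self.samples.append(utilization)
        if len(self.samples) < self.samples.maxlen:
            return "hold"                      # time filtering: wait for a full window
        avg = mean(self.samples)
        if avg > self.up_threshold:
            return "scale_up"
        if avg < self.down_threshold:
            return "scale_down"
        return "hold"                          # the threshold gap is the hysteresis band
```

A production version could add predictive analytics, replacing the simple windowed average with forecasts that account for long-term trends and time-of-day patterns.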
Service Life Cycle
A key benefit of virtualized services is elasticity. However, an elastic service means that the physical and virtual resources consumed vary over time. Likewise, virtualized resources can gain resiliency by making use of a pool of servers. Elasticity and resilience are attractive attributes for a service, but their implementation does raise some questions:
- How does an operator ensure that the allocation of resources matches the service level promised to the end user?
- And how does the operator ensure that there are sufficient resources available to allow a scale up or a resource migration?
- And finally, how does the operator correlate the service to the underlying physical and virtual resources -- at any time in the service history?
With NFV big data analytics, CSPs can track the VNF life cycle: historical trending of each monitored VNF's performance characteristics builds up a behavioral profile tied to the virtual and physical resources used during operation. Big data analytics can also track VNF, network service and consumer service performance across NFVI-PoPs and feed this information into network planning and real-time orchestration decisions.
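As a rough sketch, a behavioral profile can start as nothing more than summary statistics accumulated over each VNF's monitored history; the metrics and identifiers below are hypothetical:

```python
from collections import defaultdict
from statistics import mean, pstdev

# Per-VNF history of (timestamp, cpu utilization, throughput in Mbit/s) samples,
# as they might arrive from the NFVI monitoring feed (values are hypothetical).
history = defaultdict(list)

def record(vnf_id, ts, cpu_util, throughput_mbps):
    history[vnf_id].append((ts, cpu_util, throughput_mbps))

def behavioral_profile(vnf_id):
    """Summarize long-term behavior so deviations and capacity needs stand out."""
    cpu = [c for _, c, _ in history[vnf_id]]
    tput = [t for _, _, t in history[vnf_id]]
    return {"samples": len(cpu),
            "cpu_mean": mean(cpu), "cpu_stdev": pstdev(cpu),
            "throughput_mean": mean(tput)}

record("vrouter-3", "2015-06-01T00:00", 0.41, 220.0)
record("vrouter-3", "2015-06-01T00:05", 0.47, 255.0)
print(behavioral_profile("vrouter-3"))
```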
Integration Into Other Systems
A huge barrier to deploying virtualized services is integrating them into existing IT/OSS/BSS systems. Operators have spent decades developing these systems, which are an essential part of how they conduct business, and updating them to support new ways of creating services is not easy.
It’s big data to the rescue again! Operators can simplify integration of new and legacy systems by gathering information from the new virtualized components, correlating or matching the data to services, then extracting it in a format that matches the needs of existing systems.
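A minimal sketch of that last step might flatten correlated usage records into the fixed-column CSV an existing billing or OSS system already ingests; the record fields and values here are hypothetical:

```python
import csv
import sys

# Correlated usage records produced by the analytics layer (fields are hypothetical).
records = [
    {"service_id": "vpn-cust-42", "vnf": "vrouter-3", "host": "server-b", "gb_transferred": 137.4},
    {"service_id": "vpn-cust-42", "vnf": "vfw-7", "host": "server-a", "gb_transferred": 12.9},
]

# Flatten into the column layout a legacy system already knows how to import.
writer = csv.DictWriter(sys.stdout,
                        fieldnames=["service_id", "vnf", "host", "gb_transferred"])
writer.writeheader()
writer.writerows(records)
```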
To virtualize in a way that optimizes cost reduction and revenue generation, operators must be able to correlate the physical and virtual layers so that services can be created, delivered and billed appropriately to customers with high expectations.
The key to successful deployment is a big data analytics solution that not only tracks and correlates what’s happening on the physical and virtual layers but also enables control and integration into existing systems. The goal is actionable intelligence that drives profitable service delivery -- and keeps your customers subscribing.