Cloud success means embracing the edge and customer premises
Successful cloud deployments require balancing cost, latency, performance and resilience. And that means making use of all cloud models, including centralized public cloud as well as on-premises micro-clouds.
The cloud is everywhere
Most people think of the cloud (both public and private) as being solely hosted in large data centers, as shown at the right side of the image below.
But cloud deployments span a wide spectrum, from these large, centralized data centers all the way out to customer sites. Each location has different costs and benefits, supporting different use cases and requirements. For example, centralized cloud deployments offer on-demand elasticity and low capital investment for the user, making them ideal for multi-tenant hosting of applications or SaaS serving large numbers of users. But these clouds don’t fit every user requirement.
Drivers for edge cloud
End users are finding that they need options beyond traditional centrally deployed macro-clouds. Here are some of the drivers:
- Low latency. Some applications require low-latency access to compute resources. Examples include private 4G and 5G, IoT, augmented reality (AR), virtual reality (VR) and smart manufacturing.
- Reduced backhaul. Applications like video surveillance produce tremendous volumes of data, only some of which contain useful information. Applying local pattern recognition and analysis is a good way to reduce the volume of uplink traffic. Processing the video stream locally drastically reduces the required uplink bandwidth because only alerts or aggregated data are sent upstream.
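The backhaul-reduction idea can be illustrated with a minimal sketch. The frame scores, threshold and data structures here are all hypothetical placeholders for a real computer-vision pipeline; the point is simply that only frames flagged as events traverse the uplink.

```python
# Minimal sketch of edge-side filtering (hypothetical frame scores,
# not a real computer-vision pipeline).
def filter_uplink(frames, threshold=0.8):
    """Send upstream only frames whose detection score exceeds the threshold."""
    return [f for f in frames if f["score"] >= threshold]

# Hypothetical stretch of surveillance footage: mostly empty scenes, two events.
frames = [{"id": i, "score": 0.1} for i in range(1000)]
frames[42]["score"] = 0.95   # simulated alert
frames[500]["score"] = 0.90  # simulated alert

alerts = filter_uplink(frames)
print(f"Uplink reduced from {len(frames)} frames to {len(alerts)} alerts")
# → Uplink reduced from 1000 frames to 2 alerts
```

In this toy example the uplink carries two records instead of a thousand; in a real deployment the savings come from sending metadata or clips instead of a continuous raw video stream.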
- Data sovereignty. Some companies, industries and jurisdictions have requirements for keeping data local. A centralized cloud requires transporting data out of the specified area and so isn’t an option in those cases.
- Standalone resiliency. What happens to your mission-critical applications when the network links fail? If the applications are hosted centrally, you’re dead in the water. Local hosting provides the ability to keep going until the link is restored.
- Lower costs. A centralized cloud has a lot of benefits, but it can be very expensive for applications that have to run all the time. For example, the cost on Amazon Web Services (AWS) of reserving a single “a1.xlarge” instance (4 vCPU, 8 GB memory) is $46.94/month, or about $1,690 over a period of three years. Enabling a similar environment on a universal CPE (uCPE) device requires adding four CPU cores and 8 GB of memory. That means a one-time incremental cost of about $800, less than half the three-year cost in AWS.
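The cost comparison above can be checked with a quick calculation. The monthly AWS price and the uCPE hardware upgrade cost are the figures quoted in the text, not live quotes.

```python
# Rough three-year cost comparison using the figures quoted above
# (assumed prices, not live quotes).
AWS_MONTHLY = 46.94      # reserved a1.xlarge (4 vCPU, 8 GB), USD per month
MONTHS = 36              # three-year term
UCPE_UPGRADE = 800.00    # one-time cost of 4 extra cores + 8 GB on a uCPE

aws_total = AWS_MONTHLY * MONTHS
print(f"AWS over 3 years: ${aws_total:,.2f}")            # $1,689.84
print(f"uCPE upgrade:     ${UCPE_UPGRADE:,.2f}")         # $800.00
print(f"Savings:          ${aws_total - UCPE_UPGRADE:,.2f}")
```

The one-time uCPE upgrade works out to less than half the recurring three-year AWS spend, which is the crossover argument the bullet is making for always-on workloads.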
Edge cloud is different
The growth in demand for edge computing means that clouds must be built at the far edge or customer edge. But these edge clouds are intrinsically different from hyperscale data center clouds.
- The scale is different. Edge cloud means that telcos or enterprises may be managing thousands, or even tens of thousands, of clouds at different sites. This requires centralized management and orchestration (MANO) that can handle that scale.
- The environments are less uniform. Hyperscale data center clouds are uniformly planned and controlled. Not so at the customer edge, where a minimal footprint matters. That means being able to build clouds ranging in size from one server to many, deployable across a wide range of power, cooling and operating environments.
- Edge clouds require resilient and secure management access. Physically accessing edge clouds isn’t always easy; it may require scheduling or customer approval, so everything must be handled remotely. Edge clouds must therefore support remote management that is secure, operates over multiple fiber and wireless networks, and remains resilient to hardware failures in order to minimize the need for onsite visits.
- Installation isn’t done by one team. Because edge clouds are deployed at the farthest points, the skill level of installation teams will vary. Entire clouds must be launched remotely using zero-touch provisioning (ZTP) so that installation can be handled by non-technical staff, and cabling should be simplified to the point that it’s difficult to make mistakes.
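The ZTP idea in the last bullet can be sketched at a very high level. Everything here is hypothetical, including the serial numbers, field names and the in-memory provisioning table standing in for a central orchestrator; the point is that a freshly cabled node only has to identify itself, and its full configuration comes from the central system rather than from an on-site engineer.

```python
# Illustrative zero-touch provisioning flow (all names and fields are
# hypothetical; a dict stands in for the central orchestrator's inventory).
PROVISIONING_DB = {
    "SN-00123": {"role": "cloud-controller", "mgmt_vlan": 100},
    "SN-00124": {"role": "compute", "mgmt_vlan": 100},
}

def zero_touch_provision(serial_number):
    """Return the node's intended config; unknown hardware is quarantined."""
    config = PROVISIONING_DB.get(serial_number)
    if config is None:
        # Unregistered devices get no workloads until an operator approves them.
        return {"role": "quarantine"}
    return config

# A node booting on site announces its serial number and receives its role.
print(zero_touch_provision("SN-00123"))
```

Because the mapping from hardware identity to configuration lives centrally, the installer's job reduces to racking and cabling, which is exactly the low-skill installation model the bullet describes.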
These differences mean you can’t just take a commercial Linux distribution or standard cloud environment and deploy it at a customer site. You need special capabilities. ADVA has been addressing the unique characteristics of the edge cloud environment through its years of leadership developing and deploying universal CPE (uCPE). ADVA has now applied these techniques and principles to create Ensemble Cloudlet, making it easy to deploy a scalable, resilient cloud at the customer premises.
Ensemble Cloudlet makes edge deployments easy
How does Ensemble Cloudlet compare with traditional clouds? Ensemble Cloudlet provides a scalable edge cloud with localized cloud control, as shown below.
Cloudlet is simple
- ZTP of nodes – low-skill-level teams can do it
- Deployment from the same MANO as uCPE
- Easy cloud scale-up
Cloudlet is resilient
- Redundant local cloud controllers
- Redundant management access to each node
- Clustered and virtualized local storage
- Policies to migrate workloads between servers
Cloudlet is optimized for cost
- Automatic, remote encrypted management access
- Migration capabilities reduce application licensing costs and core usage
- More cost-effective solution than data center cloud
Broaden your thinking on the cloud
Whether you’re an end user or a service provider, the cloud is an essential part of your strategy. But you must consider all the relevant requirements and options, including at the customer premises. Ensemble Cloudlet can help ensure that your cloud strategy covers all the bases and is a success.
Please see this solution brief for more information on Ensemble Cloudlet.