Over the past thirty years, the mobile core has evolved dramatically. From analog origins relying on circuit switching to the introduction of packet switching in the early 1990s, the first generation of mobile packet cores were vendor appliances with specialized hardware. A prime example is the Cisco ASR 5500, which tightly integrated hardware with software to deliver industry-leading reliability and performance. Although the ASR 5500 performs admirably, the approach of building, maintaining, and upgrading dedicated appliances is expensive, as each new generation requires new custom components such as data processing cards for higher performance.
Advances in off-the-shelf hardware and open-source software, such as 25/40G NICs, SR-IOV, DPDK, and VPP, have enabled the deployment of more cost-effective mobile packet cores that meet the performance demands of mobile network operators, and Cisco has led the industry in this area by developing the Cisco Ultra Packet Core for virtualized environments. This network function virtualization (NFV) had a hardware cost advantage over traditional appliances but proved fragile due to the complex NFV deployment architectures required to deploy virtual network functions (VNFs). As a result, NFV deployments often carry higher operational costs than traditional appliance-based models.
The transition to 5G offered an opportunity for the industry to leverage new technology developed for deploying applications across private and public clouds. The 3GPP standards body encourages the use of cloud-native technologies and has emboldened the industry to focus on decomposing applications into composable microservices. By embracing a cloud-native architecture, the industry is steering in a new direction, away from the reliability and complexity problems that plagued its initial attempt at transitioning with virtualization.
Reliability, Operational Simplicity, and Scale
A Kubernetes-based cloud-native solution was the obvious choice for building our Converged Core. Embracing Kubernetes provides numerous benefits, such as rapid application development, new CI/CD delivery patterns, and better resiliency models. While Kubernetes is helpful for managing the multitude of containerized applications in this new cloud-native landscape, the pitfalls of reliability and complexity that plagued early VNF deployments across the industry remained. As promising as cloud-native software containers are, building a converged core required marrying this new cloud-native approach with a comprehensive architecture, one that had yet to be defined. As we began defining what a Converged Core architecture might look like, we wrestled with many choices:
Choice 1 – Bare Metal vs. Virtualized Deployments
In evaluating how we should deploy our new Converged Core, we considered the existing NFV architecture with Kubernetes embedded within the VNFs or a bare-metal deployment model. Bare metal became the clear choice: it allowed us to simplify the solution and increase reliability by eliminating complex and failure-prone pieces of the previous NFV architecture. Gone were the VNF manager, NFV orchestrator, VIM, hypervisor, and all the complexity and friction that came with those components. What was left? A hardened Linux OS running on top of UCS M5 hardware.
Choice 2 – The Cloud-Native Stack
The Cloud Native Computing Foundation (CNCF) landscape provides an abundance of options for building a platform stack, even offering a helpful map (https://landscape.cncf.io/) that engineers can use to visualize choices in assembling a cloud-native stack.
Our priorities in developing a new architecture are rooted in simplicity and reliability, so we focused on adding only necessary, mature CNCF components to the stack, such as Helm, containerd, etcd, and Calico. Our guiding rule during development was to add only necessary and mature features, aiming to maximize reliability and minimize complexity. For example, to improve reliability the Converged Core uses only local storage volumes; as a result, we don't require any cloud-native storage add-ons.
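As a minimal sketch of what relying on local storage looks like in Kubernetes terms, a `local` PersistentVolume pins a volume to a directory on a specific node. The name, path, capacity, and hostname below are illustrative assumptions, not values from the Converged Core:

```yaml
# Illustrative local PersistentVolume; name, path, size, and hostname are assumptions.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cc-local-pv            # hypothetical name
spec:
  capacity:
    storage: 100Gi             # assumed size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd0      # assumed local disk mount on the node
  nodeAffinity:                # local volumes must be pinned to a node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1     # assumed node name
```

Because the volume is bound to one node's disk, no external CSI driver or storage add-on is needed; the trade-off is that pods using it are scheduled to that node.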
Choice 3 – Managing Day-0 Installation and Day-N Upgrades
Managing day-0 installation and day-N upgrades of NFV architectures can be challenging, with multiple integration points into different orchestrators in the MANO stack, resulting in long integration times and a relatively fragile solution. For the Cisco Converged Core team, a stable cloud-native stack was a critical component, as was automated lifecycle management for all layers, not just the application layer. As a result, Cisco developed a cloud-native cluster management layer that ensures consistent software and tunings across all layers: BIOS settings, firmware, host OS, Kubernetes, and application versions. This experience is so simple that upgrading the Cisco Converged Core has become a two-step operation: step one, select your new software version, and step two, commit it to the cluster. To facilitate automation, the cluster management layer provides CLI, REST, and NETCONF interfaces. Support for a wide range of interfaces allows seamless integration into a mobile service provider's existing automation solution, such as Cisco's Network Services Orchestrator (NSO).
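Over the NETCONF interface, the select-then-commit flow maps naturally onto standard NETCONF operations. The sketch below is hypothetical: the YANG namespace and leaf names are illustrative assumptions, not the product schema; only the base `edit-config`/`commit` RPCs are standard NETCONF:

```xml
<!-- Step one: select the new software version in the candidate datastore.
     The "cluster-mgmt" namespace and leaf names are assumptions for illustration. -->
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <edit-config>
    <target><candidate/></target>
    <config>
      <cluster xmlns="http://example.com/ns/cluster-mgmt">
        <software-version>2024.01.0</software-version>
      </cluster>
    </config>
  </edit-config>
</rpc>

<!-- Step two: commit the change to the cluster. -->
<rpc message-id="102" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <commit/>
</rpc>
```

Because this is plain NETCONF, an orchestrator such as NSO can drive the whole upgrade without any product-specific adapters beyond the YANG model.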
Choice 4 – Managing Application Configuration
When developing a solution like the Cisco Converged Core, recognizing when to use new technology, and when not to, is important. Application configuration management is one of these tricky areas. Traditionally, mobile service providers have managed application configurations using NETCONF/REST or CLI. With our new Converged Core, we could leverage existing SP interfaces or use cloud-native options like Kubernetes CRDs or ConfigMaps. Our choice was the status quo, because maintaining a traditional management interface drastically simplifies integration into the mobile service provider's configuration automation solution.
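For contrast, the cloud-native alternative we passed on would carry application settings in a Kubernetes ConfigMap along the lines of the sketch below; the name, namespace, and keys are illustrative assumptions, not actual Converged Core settings:

```yaml
# Illustrative ConfigMap; the name, namespace, and keys are assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: smf-profile            # hypothetical application profile
  namespace: converged-core    # assumed namespace
data:
  session-limit: "500000"      # example tuning value
  logging-level: "info"
```

Configuration in this form is versioned and rolled out through Kubernetes itself, but it sits outside the NETCONF/CLI tooling that providers already automate against, which is why the traditional interface won out.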
Putting it together
By focusing on simplicity, reliability, and scale, we've developed an architecture that enables service providers to manage hundreds of Kubernetes clusters across thousands of servers while serving millions of subscribers.
For More Information
To learn more about the Cisco Converged Core, visit our product pages. To learn more about T-Mobile and Cisco's launch of the world's largest cloud-native converged core gateway, read the December 2022 press release.