Imagine you are a business entity needing office space. You buy an office floor. You furnish it, build a pantry and install a security system. Before long, you realize you need some meeting rooms. Given the real estate costs, you decide to rent an office floor and refurbish it into meeting rooms and some additional cubicles. Soon you discover that you need studio space for a couple of weeks throughout the year. You realize that neither owning office space nor renting it really meets the needs of your growing business. So you opt for flexible workspace rentals: you rent meeting rooms, cubicles and studio space as and when you need them, adjusting your office footprint to your project needs, saving on overheads and keeping the resources you need ready at hand.
The move to the cloud for many enterprises is just that. At one time, businesses were content deploying their own IT infrastructure, adding more hardware and appliances as transactions and operations grew. They then ran into the scaling limits of proprietary hardware and discovered how cloud computing and virtualization gave them the computing resources they needed to deploy and run their applications on a flexible, on-demand model.
Transitioning to cloud-native networks
For operators, adoption of virtualization in their own data centers via the use of virtualized network functions (VNFs) and service chaining meant that they could now deploy network services as and when required. This led to increased scalability and more flexible network management.
However, the move to virtualization alone was not enough. Despite reducing the complexities at the hardware layer, the software layers saw increased loads. This was when the cloud-native architecture pioneered by Netflix, and adopted by the likes of Google, Adidas, Slack, Uber and Nokia, became of interest. In a cloud-native architecture, monolithic applications are broken down into smaller components, or microservices, which are then packaged into containers (for example, Docker containers) and orchestrated via application programming interfaces (APIs) using platforms such as Kubernetes.
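To make this concrete, here is a minimal sketch of what "containerized microservice, orchestrated via an API" looks like in practice, assuming the official Kubernetes Python client and a reachable cluster. The service name and container image are purely illustrative, not taken from any particular network function.

```python
# Minimal sketch: deploying one containerized microservice through the Kubernetes API.
# Assumes the official "kubernetes" Python client and access to a cluster; the image
# "registry.example.com/session-mgmt:1.0" is a hypothetical placeholder.
from kubernetes import client, config

def deploy_microservice(name: str, image: str, replicas: int = 2) -> None:
    config.load_kube_config()  # or config.load_incluster_config() when running inside a pod
    apps = client.AppsV1Api()

    container = client.V1Container(
        name=name,
        image=image,
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    pod_template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": name}),
        spec=client.V1PodSpec(containers=[container]),
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1DeploymentSpec(
            replicas=replicas,
            selector=client.V1LabelSelector(match_labels={"app": name}),
            template=pod_template,
        ),
    )
    apps.create_namespaced_deployment(namespace="default", body=deployment)

deploy_microservice("session-mgmt", "registry.example.com/session-mgmt:1.0")
```

Each microservice is declared this way and scaled independently, which is exactly what makes the architecture attractive for network functions.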
For operators, microservices provide massive improvements in terms of resource efficiencies. VNFs can now be deployed as Cloud Network Functions (CNFs) in the form of a family of microservices linked via APIs. This enables network services to be assembled and disassembled within a cloud or across clouds, creating cloud-native networks that make highly optimized use of available storage, computing, memory and networking resources.
Within this new architecture, CNFs can be turned on or off, scaled up or down and replicated based on current network traffic needs. Microservices can also be shared and reused. When changes to a network service are due, it need not be revamped in its entirety; instead, individual components can be enhanced or replaced, facilitating the adoption of DevOps for continuous upgrades in the network.
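As a rough illustration of "scaled up or down based on traffic", the sketch below adjusts a CNF's replica count from a live traffic metric, again assuming the official Kubernetes Python client. The metric hook current_sessions() and the capacity figure are hypothetical stand-ins for whatever the operator actually measures.

```python
# Sketch of traffic-driven scaling for a CNF deployed on Kubernetes.
# current_sessions() and SESSIONS_PER_REPLICA are illustrative assumptions,
# not values from any real deployment.
import time
from kubernetes import client, config

SESSIONS_PER_REPLICA = 10_000  # assumed capacity of one CNF replica

def current_sessions() -> int:
    """Hypothetical hook returning the number of active subscriber sessions."""
    return 25_000  # placeholder value for illustration

def scale_cnf(name: str, namespace: str = "core") -> None:
    config.load_kube_config()
    apps = client.AppsV1Api()
    while True:
        # Ceiling division: enough replicas to cover the observed sessions, minimum 1.
        wanted = max(1, -(-current_sessions() // SESSIONS_PER_REPLICA))
        apps.patch_namespaced_deployment_scale(
            name=name,
            namespace=namespace,
            body={"spec": {"replicas": wanted}},
        )
        time.sleep(30)  # re-evaluate every 30 seconds
```

In production this logic would typically live in a horizontal autoscaler or the orchestrator itself rather than a hand-rolled loop; the point is that replica counts follow traffic, not a fixed hardware footprint.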
Efficiency gains, however, are not the only factor pushing the adoption of cloud-native networks.
5G from the cloud
With the advent of 5G, logical networks built along specific Service Level Agreements (SLAs) will deliver bandwidth and a unique mix of network services for hundreds of different use cases. From enhanced Mobile Broadband (eMBB) to Ultra-Reliable Low-Latency Communications (URLLC) to Massive Machine-Type Communications (mMTC) to Fixed Wireless Access (FWA), deployment of 5G necessitates the creation of hundreds of virtual networks matched to applications such as autonomous driving, eHealth, smart cities, smart stadiums and cloud robotics.
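A small sketch of how such use-case families might be expressed as per-slice SLA targets is shown below. The latency, throughput and density figures are rough, assumed values chosen for illustration, not 3GPP-mandated numbers, and the application-to-slice mapping is hypothetical.

```python
# Illustrative mapping of 5G use-case families to per-slice SLA targets.
# All numbers are assumptions for the sketch, not standardized values.
from dataclasses import dataclass

@dataclass
class SliceSLA:
    name: str
    max_latency_ms: float        # user-plane latency target
    min_throughput_mbps: float   # guaranteed per-user throughput
    device_density_per_km2: int  # supported connection density

SLICES = {
    "eMBB":  SliceSLA("enhanced Mobile Broadband",                 20.0, 100.0,    10_000),
    "URLLC": SliceSLA("Ultra-Reliable Low-Latency Communications",  1.0,  10.0,     1_000),
    "mMTC":  SliceSLA("Massive Machine-Type Communications",      100.0,   0.1, 1_000_000),
    "FWA":   SliceSLA("Fixed Wireless Access",                     30.0, 300.0,       500),
}

def slice_for(use_case: str) -> SliceSLA:
    """Pick a slice profile for an application, e.g. autonomous driving -> URLLC."""
    mapping = {
        "autonomous driving": "URLLC",
        "smart city metering": "mMTC",
        "cloud gaming": "eMBB",
        "home broadband": "FWA",
    }
    return SLICES[mapping[use_case]]
```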
To cater for such a diverse traffic range, operator networks both at the core and at the edge must support real-time traffic management, which in turn requires network services to be available and deployable on demand. With the microservices architecture, CNFs can be created, replicated and transported seamlessly from one cloud to another across the network, enabling real-time, active management of traffic. It is not surprising that cloud-native architectures are already being featured across the virtualized Evolved Packet Core (vEPC) and the new 5G core, as networks inch closer to the full deployment of 5G.
DPI for cloud-native networks
On-demand provisioning of CNFs gives networks the flexibility they seek, but it is only made possible by real-time traffic awareness. To support 5G's plethora of use cases, each requiring a different mix of bandwidth, latency, coverage and security, networks need real-time traffic awareness to deploy the right microservices per application type, creating a scalable and powerful network core and network edge.
This is also where Deep Packet Inspection (DPI), deployed as either a VNF or a CNF, plays one of the most significant roles in delivering the promise of a truly flexible and resource-efficient network. Deployed at the user plane within the network core or network edge, DPI extracts IP traffic metadata in real time, enabling identification of traffic and security threats and feeding other VNFs and CNFs in the service chain, for example, the Policy and Charging Rules Function (PCRF) and container firewalls.
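Conceptually, a user-plane DPI function classifies each flow and publishes the resulting metadata to the next functions in the chain. The toy sketch below shows that shape in Python; the classification rules and the publish() hook are hypothetical placeholders, not the behavior or API of any real DPI engine.

```python
# Conceptual sketch of a user-plane DPI step: classify a flow and publish metadata
# for downstream functions (e.g. policy control, container firewalls).
# The classifier and publish() are illustrative placeholders only.
from dataclasses import dataclass, asdict
import json

@dataclass
class FlowMetadata:
    src_ip: str
    dst_ip: str
    dst_port: int
    application: str   # classified protocol/application label
    bytes_seen: int

def classify(dst_port: int, payload: bytes) -> str:
    """Toy classifier: real DPI engines match signatures across the full flow."""
    if dst_port == 443 and payload.startswith(b"\x16\x03"):
        return "tls"          # TLS handshake record
    if payload[:4] in (b"GET ", b"POST"):
        return "http"
    return "unknown"

def publish(metadata: FlowMetadata) -> None:
    """Stand-in for feeding metadata to the next VNF/CNF in the service chain."""
    print(json.dumps(asdict(metadata)))

# Example: one observed packet on a flow.
publish(FlowMetadata("10.0.0.5", "203.0.113.7", 443,
                     classify(443, b"\x16\x03\x01..."), bytes_seen=1514))
```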
Take our network traffic visibility tool R&S®PACE 2, for example. It has been widely deployed as a virtualized DPI engine supported by a comprehensive, weekly-updated signature library. Within a cloud-native network environment, R&S®PACE 2 can be deployed as a microservice along with other DPI-related microservices such as IP classification, authentication, intrusion detection, policy control, billing and charging, and analytics. Together, this family of microservices provides powerful IP traffic detection and classification capabilities that can support a diverse range of use cases for 5G and beyond.
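How such sibling microservices might consume the classification results is sketched below, assuming simple HTTP APIs between them. The endpoint URLs, payload fields and policy rule are illustrative assumptions and do not represent the interfaces of R&S®PACE 2 or any specific product.

```python
# Sketch of a DPI microservice feeding sibling microservices over HTTP APIs.
# Endpoint URLs and payload fields are hypothetical, chosen only for illustration.
import json
from urllib import request

POLICY_URL = "http://policy-control.core.svc:8080/v1/decisions"    # hypothetical
ANALYTICS_URL = "http://analytics.core.svc:8080/v1/flow-records"   # hypothetical

def forward(url: str, record: dict) -> None:
    req = request.Request(
        url,
        data=json.dumps(record).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    request.urlopen(req, timeout=2)

def on_classified_flow(record: dict) -> None:
    """Called whenever the DPI engine finishes classifying a flow."""
    forward(ANALYTICS_URL, record)  # usage reporting and trend analytics
    if record.get("application") in ("video-streaming", "gaming"):
        # Example policy trigger: request a latency-sensitive QoS profile.
        forward(POLICY_URL, {"flow_id": record["flow_id"],
                             "action": "apply-qos-profile",
                             "profile": "low-latency"})

on_classified_flow({"flow_id": "f-001", "application": "gaming", "bytes": 20480})
```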
It is worth noting that while cloud-native architecture builds on earlier network virtualization initiatives, it is here that real-time insights into network behavior and performance will intensify and be put to the test. These insights, provided by technologies such as DPI, are not limited to network-level data but extend to every cloud, CNF, microservice, application and user. Whether it is the enhanced vEPC or the new 5G core, matching existing resources to the needs of the traffic requires deeper and faster insights. Built into the network, these insights can fuel efficiencies from the core to the edge and will be the cornerstone of a successful transition to the cloud-native networks of the future.