NFV facilitates new revenue streams for service providers, since virtualising network functions opens a range of opportunities to deliver innovative services at significant cost savings. This is made possible by moving critical network functions from specialised hardware to COTS server-based platforms. However, while this paradigm shift offers flexibility, eliminates vendor lock-in and enables competitive service delivery, it also introduces uncertainties about the performance, reliability, stability and scalability of VNFs. We therefore consider additional measures for quantitatively evaluating NFV performance beyond traditional service benchmarking factors, from the viewpoint of VNF deployments.
KPIs for VNF Deployment
VNF deployment is usually measured in terms of speed, scalability, reliability and availability. These metrics were not relevant in traditional networks, where network functions were always available and utilised as and when required. In NFV, however, the required network function, in the form of a VNF, is deployed only when it is needed for a service. VNF deployment benchmarking therefore becomes vital when considering factors for carrier-grade performance. The following metrics are covered under VNF deployment KPIs.
VNF Deployment Latency
This metric measures the time taken to on-board a VNF, provision it and create a VNF-Forwarding Graph with the on-boarded VNF. It impacts service latency and, in turn, the quality of service experienced by end users.
It should also be noted that if this latency exceeds the maximum acceptable latency, a VNF reliability impairment is said to have occurred.
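As a minimal sketch of how this metric could be collected, the snippet below times the full deployment sequence and flags a reliability impairment when the measured latency exceeds the acceptable maximum. The orchestrator operations and the 30-second threshold are placeholders (assumptions, not part of any standard), to be replaced with the northbound API calls and SLA values of the actual MANO deployment.

```python
import time

# Assumed threshold for illustration only; real deployments derive
# this from the operator's SLA.
MAX_ACCEPTABLE_LATENCY = 30.0  # seconds

def measure_deployment_latency(onboard, provision, create_fg):
    """Time the three deployment phases (on-boarding, provisioning,
    VNF-FG creation). Each argument is a callable wrapping the
    corresponding orchestrator operation (hypothetical here)."""
    start = time.monotonic()
    onboard()
    provision()
    create_fg()
    latency = time.monotonic() - start
    # A latency beyond the acceptable maximum counts as a VNF
    # reliability impairment rather than a successful deployment.
    impaired = latency > MAX_ACCEPTABLE_LATENCY
    return latency, impaired
```

In practice the three callables would issue the orchestrator's on-board, instantiate and forwarding-graph-creation requests and block until each completes.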
VNF Deployment Reliability
This metric characterises the reliability of VNF-Forwarding Graph creation: the proportion of VNF-Forwarding Graph creations that succeed within the maximum acceptable latency. Although it does not affect service latency directly, it indirectly impacts the quality of service experienced by end users.
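One simple way to express this metric, sketched below under the assumption that per-attempt creation latencies have already been recorded, is the fraction of attempts that completed within the maximum acceptable latency.

```python
def deployment_reliability(latencies, max_acceptable_latency):
    """Fraction of VNF-FG creation attempts that completed within
    the maximum acceptable latency. `latencies` holds the measured
    creation time of every attempt; outright failures can be
    recorded as float('inf') so they count against reliability."""
    if not latencies:
        raise ValueError("no attempts recorded")
    successes = sum(1 for t in latencies if t <= max_acceptable_latency)
    return successes / len(latencies)
```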
VNF Scale Factor
This metric measures the ability of a VNF to scale up or down according to varying network conditions. Typically, service scaling up or down will also happen in response to end-user requests. It impacts service delivery, especially in addressing the varying needs of end users.
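One plausible formulation of this metric, offered here purely as an illustrative assumption rather than a standard definition, is the ratio of VNF instances after a scaling event to those before it:

```python
def scale_factor(instances_before, instances_after):
    """Illustrative (assumed) formulation: ratio of VNF instances
    after a scaling event to those before it. A value above 1
    indicates scale-up, below 1 scale-down."""
    if instances_before <= 0:
        raise ValueError("need at least one instance before scaling")
    return instances_after / instances_before
```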
VNF Availability
This metric measures the unavailability of a VNF beyond the maximum acceptable transient time. When that occurs, it is considered a service disruption, since it impacts the quality of service offered to end users.
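Assuming outage durations have been captured by a monitoring tool, a minimal sketch of this metric counts only those outages that exceed the maximum acceptable transient time, since shorter outages are tolerated as transient:

```python
def count_service_disruptions(outage_durations, max_transient_time):
    """Count outages (durations in seconds) that exceed the maximum
    acceptable transient time; only those are treated as service
    disruptions affecting end users."""
    return sum(1 for d in outage_durations if d > max_transient_time)
```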
KPIs for VNF Orchestration
VNF orchestration is a key capability for providing on-demand, pay-as-you-go service offerings and for scaling services up and down. VNF orchestration benchmarking tracks the dynamic changes to the service offered to the end user. Since one of the benefits of NFV is to provide dynamic services based on end-user needs, this KPI is relevant for offering carrier-grade services. Within this KPI, a further set of metrics emerges for benchmarking the VNF orchestration capability:
VNF Re-provisioning Latency
This metric measures the time taken to dynamically chain a VNF into an existing VNF-Forwarding Graph based on a user request. It impacts service latency, which in turn influences the quality of service experienced by end users.
VNF Re-provisioning Reliability
This metric characterises the reliability of VNF-Forwarding Graph changes made in response to user requests: the proportion of changes to the VNF-Forwarding Graph that succeed within the maximum acceptable latency. Although it does not affect service latency directly, it indirectly impacts the quality of service experienced by end users.
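The two re-provisioning metrics above can be collected together, as in the sketch below, which times one dynamic re-chaining request and folds it into a running reliability tally. The `chain_vnf` callable is a placeholder (an assumption) for the orchestrator's forwarding-graph-update operation.

```python
import time

def measure_reprovision(chain_vnf, max_acceptable_latency, stats):
    """Time one dynamic re-chaining request and update running
    reliability stats ({'ok': int, 'total': int}). `chain_vnf` is a
    hypothetical callable wrapping the orchestrator's FG update."""
    start = time.monotonic()
    chain_vnf()
    latency = time.monotonic() - start
    stats['total'] += 1
    if latency <= max_acceptable_latency:
        stats['ok'] += 1
    # Return the per-request latency and the cumulative reliability.
    return latency, stats['ok'] / stats['total']
```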
The aforementioned KPIs and associated metrics call for well-defined methods of measuring these parameters. These methods need to be aligned with the PoCs, based on the ETSI NFV MANO architecture, being carried out by service providers. Monitoring and diagnostic tools used in the NFV architecture should incorporate these methods so that the VNF performance metrics that impact service delivery can be easily determined.
About the authors
Bhuvan (firstname.lastname@example.org) is the Senior Manager - Technology and Innovation at Veryx Technologies, leading solution development in NFV, SDN, Cloud and Analytics. An active contributor to various industry forums, Bhuvan also drives Veryx's efforts at the IETF in defining SDN controller benchmarking methodology.
Charanya (email@example.com) is the Product Manager at Veryx Technologies. She handles product management and marketing for Veryx solutions in SDN, Cloud, NFV and emerging technologies.