The IETF currently has a draft specification before the Network Working Group that defines a new standard, the Network Services Header (NSH), specifying how network service chains are controlled through a network.
The NSH concept aims to provide a means of constructing service chains, allowing network administrators to define paths through the network and use policy to ensure that classes of traffic are treated in a certain way. NSH falls into the arena of Network Function Virtualisation (NFV), where services on the network (firewalls, load balancers, DDoS scrubbers etc.) can be dynamically connected together to form service chains. NSH aims to decouple a service chain from the topology supporting it by inserting a Network Services Header between the outer transport encapsulation and the original packet. NSH-aware devices can perform actions based on the Network Services Header information, while NSH-unaware devices simply forward the packet based on the outer transport header alone.
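To make the mechanics concrete: the draft's service path header is a 32-bit word carrying a 24-bit Service Path Identifier (which chain the packet belongs to) and an 8-bit Service Index (the packet's position within that chain). A minimal sketch of packing and unpacking those two fields (field widths taken from the draft; the helper names are my own):

```python
def build_service_path_header(spi: int, si: int) -> int:
    """Pack a 24-bit Service Path Identifier and an 8-bit
    Service Index into one 32-bit service path header word."""
    return ((spi & 0xFFFFFF) << 8) | (si & 0xFF)

def parse_service_path_header(word: int):
    """Split a 32-bit service path header back into (SPI, SI)."""
    spi = (word >> 8) & 0xFFFFFF
    si = word & 0xFF
    return spi, si
```

Each NSH-aware service function would decrement the Service Index as it processes the packet, so the header itself records how far along the chain the packet has travelled.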
After an enlightening conversation with Greg Ferro (@etherealmind) over Twitter, we highlighted a few issues we see with this proposed standard in relation to its foundational construction and how it interfaces with existing SDN and NFV concepts.
A 1990s Solution to a Modern-Day Problem
In the early days of networking it was perfectly acceptable to solve a new problem by adding another protocol to the stack, be it an encapsulation mechanism or a control plane protocol. Recently, with SDN gaining popularity, people aren't interested in new data plane or control plane protocols that rely on per-device implementations, as these will most likely hinder adoption. The advent of controller-based networking means that out-of-band control mechanisms such as OpenFlow, OpenDaylight and the like are a much more scalable means of dictating policy on a network than another level of encapsulation.
The Network Services Header draft mentions some form of control plane protocol that falls outside the draft specification, making the proposed standard a semi-distributed one. When attempting to dictate policy within a network, I personally feel that a totally centralised model means less overhead for network administrators and easier adoption of the concept. For policy to be enforced, the network does need a way to identify and classify traffic flows; I don't think an additional encapsulation is necessary, however, when control protocols such as OpenFlow already provide traffic classification driven by a centralised controller.
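To illustrate the centralised model being argued for here, the controller can hold classification policy purely as data, with no extra header on the wire. This is a hypothetical sketch (the match fields and service chain names are invented for illustration, not taken from any controller's API):

```python
# Controller-side policy table: ordered match criteria mapped to the
# service chain that matching traffic should traverse.
POLICY = [
    ({"ip_proto": 6, "tcp_dst": 80},  ["ddos-scrubber", "firewall", "load-balancer"]),
    ({"ip_proto": 6, "tcp_dst": 443}, ["firewall", "load-balancer"]),
]

def classify(flow: dict):
    """Return the service chain for the first policy entry whose
    match criteria are all satisfied by the flow, else None."""
    for match, chain in POLICY:
        if all(flow.get(k) == v for k, v in match.items()):
            return chain
    return None
```

In an OpenFlow-style deployment the controller would push the resulting forwarding rules to each switch, so the classification decision lives in one place rather than being re-expressed in every packet.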
In my opinion, the easiest way to dictate network policy is to use a metadata-based system that utilises existing protocols such as OpenFlow, MPLS, NVGRE and VXLAN* to control and identify network flows and enforce policy through their existing mechanisms. For example, MPLS can be used to transparently push traffic at L2/L3 from one point in the network to the next, with a unique label set for identification. There is no need for multiple context headers per flow, as the label alone can identify the required policy held in a centralised controller.
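The MPLS example above can be sketched as follows: a label stack entry is a 32-bit word (20-bit label, 3-bit traffic class, bottom-of-stack bit, 8-bit TTL, per the MPLS label stack encoding), and the controller keeps a label-to-policy mapping. The label values and policy names here are invented for illustration:

```python
def encode_label_entry(label: int, tc: int = 0, s: int = 1, ttl: int = 64) -> int:
    """Pack one 32-bit MPLS label stack entry:
    20-bit label, 3-bit traffic class, 1 bottom-of-stack bit, 8-bit TTL."""
    return ((label & 0xFFFFF) << 12) | ((tc & 0x7) << 9) | ((s & 0x1) << 8) | (ttl & 0xFF)

# Hypothetical controller-side mapping: the label alone identifies the
# policy, so no per-packet context headers are needed.
LABEL_POLICY = {
    1001: "inspect-then-forward",
    1002: "scrub-ddos",
}

def policy_for(entry: int):
    """Extract the label from a stack entry and look up its policy."""
    label = (entry >> 12) & 0xFFFFF
    return LABEL_POLICY.get(label)
```

The point of the sketch is that the flow's context lives in the controller's table, keyed by an identifier the data plane already carries.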
*VXLAN is lacking in regard to some metadata features, but I'm assured change is coming.