There have been many articles describing overlay networks in the past few quarters. It's a relatively straightforward concept, not far removed from some of the older VPN technologies that were popular a while ago. The actual transport of packets is probably the simplest part; it is the control plane that is much harder to construct, and therefore to explain. That is also why the control plane in overlay networks has seen the most innovation and change, and it is likely to change further, in both standard and proprietary ways, in the near future. A good example is the use of IP Multicast for unknown, multicast and broadcast traffic as defined in the latest IETF draft for VXLAN, while controller implementations try to avoid IP Multicast as part of the necessary data path. That tension will continue to drive changes in the control plane for learning, distribution of destinations, and the like.
A Plexxi solution provides an optimized L1, L2 and L3 network. With the advent of overlay networks, the relationship and interaction between the physical, L2 and L3 network and the overlay infrastructure are important to understand. We strongly believe the control and data planes should be interconnected and coordinated/orchestrated. In this and next week's blog, I will describe some key touch points between the two at the data plane: entropy as a mechanism to discern flow-like information, and the role and capabilities of a hardware gateway.
I looked at VXLAN, NVGRE and STT as the major overlay encapsulations. VXLAN and STT are very much driven by VMware, with STT used as the tunnel encapsulation between vSwitch-based VXLAN Tunnel End Points (VTEPs), and VXLAN used as the tunnel encapsulation to external entities like gateways. NVGRE, of course, is the tunnel protocol of choice for Microsoft's overlay solution and is very similar to previous GRE-based encapsulations. All encapsulations are IP based, allowing the tunnels to be transported across a basic IP infrastructure (with the above-mentioned note on IP Multicast). VXLAN and NVGRE are packet-based mechanisms: each original packet ends up being encapsulated into a new packet.
VXLAN is built on top of UDP. As shown below, an encapsulated Ethernet packet has 54 bytes of new header information added (assuming it is again transported over Ethernet). The first 18 bytes are the outer Ethernet header, carrying the MAC address of the source VTEP as the source and the next IP router/switch as the destination; this header changes at each IP hop. The next 20 bytes are the IP header, with the protocol field set to 17 for UDP. The source IP address is that of the originating VTEP, the destination IP address that of the destination VTEP. The IP header is followed by 8 bytes of UDP header containing the source UDP port, the destination UDP port (4789), and the usual UDP length and checksum fields. While formatted in the normal way, the UDP source port is used in a special way to create "entropy", explained in more detail below.
A VXLAN Encapsulated Ethernet Packet
Following the UDP header is the actual 8-byte VXLAN header. Just about all fields except the 24-bit VXLAN Network Identifier (VNI) are reserved and set to zero. The VNI is key: it determines which VXLAN the original packet belongs to. When the destination VTEP receives and decapsulates this packet, it uses the VNI to find the right table for MAC address lookups on the original packet to get it to its destination. Only the original packet (shown with its Ethernet header above) follows the VXLAN header. For every packet sent out by a VM, VXLAN adds 54 bytes of new tunnel headers between the source and destination VTEP. Intermediate systems do all their forwarding based on these new headers: Ethernet switches use the outer Ethernet header, while IP routers use the outer IPv4 header to route the packet towards its destination. Each IP router replaces the outer Ethernet header with a new one representing itself as the source and the next IP router as the destination. A minimal sketch of these headers follows below.
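To make the layout concrete, here is a minimal sketch in Python of the UDP and VXLAN headers described above. The inner frame, source port and VNI values are illustrative placeholders, not taken from any particular implementation, and the outer Ethernet and IP headers are omitted for brevity.

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte 0x08 (VNI present),
    24 reserved bits, the 24-bit VNI, and 8 more reserved bits."""
    assert 0 <= vni < 2 ** 24
    return struct.pack("!II", 0x08 << 24, vni << 8)

def udp_header(src_port: int, payload_len: int) -> bytes:
    """Build the 8-byte UDP header; a zero checksum is commonly
    used for VXLAN over IPv4."""
    return struct.pack("!HHHH", src_port, VXLAN_PORT, 8 + payload_len, 0)

# Encapsulate a (placeholder) inner Ethernet frame.
inner_frame = bytes(60)
vxlan = vxlan_header(vni=5000)
encapsulated = udp_header(src_port=50123,
                          payload_len=len(vxlan) + len(inner_frame)) + vxlan + inner_frame
```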
NVGRE packets look very similar to VXLAN packets. The initial outer Ethernet header is the same as for VXLAN, with the source tunnel endpoint and the first IP router as source and destination. The next 20 bytes of IP header are also similar to VXLAN, except that the protocol is 47 for GRE. NVGRE encodes the virtualized LAN (Virtual Subnet ID, or VSID, in NVGRE terms) inside the GRE header, using 24 bits of the original GRE Key field to represent the VSID and leaving 8 bits for a FlowID field, which serves a similar entropy function to the UDP source port in VXLAN, explained further below. The VSID in NVGRE and the VNI in VXLAN represent the overlay virtual network ID for each of the technologies. Following the GRE header comes the original (Ethernet) packet. In total, NVGRE adds 46 bytes of new header information to existing packets; a sketch of the GRE header follows the figure below.
A NVGRE Encapsulated Ethernet Packet
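Under the same caveats, a minimal sketch of the NVGRE header layout: the GRE Key bit is set, the protocol type is 0x6558 (Transparent Ethernet Bridging), and the 32-bit Key field carries the VSID in its top 24 bits and the FlowID in the bottom 8. The example values are illustrative only.

```python
import struct

def nvgre_header(vsid: int, flow_id: int) -> bytes:
    """Build the 8-byte GRE header as used by NVGRE: K (Key) bit set,
    protocol type 0x6558 (Transparent Ethernet Bridging), and a 32-bit
    key holding the 24-bit VSID plus the 8-bit FlowID."""
    assert 0 <= vsid < 2 ** 24 and 0 <= flow_id < 2 ** 8
    flags_and_version = 0x2000  # only the K bit set, version 0
    return struct.pack("!HHI", flags_and_version, 0x6558, (vsid << 8) | flow_id)

# Example: VSID 6001, with a FlowID derived elsewhere from the inner flow.
print(nvgre_header(vsid=6001, flow_id=0x2A).hex())
```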
As I mentioned in last week's blog, a tunnel endpoint is an aggregation point, and as a result all of the individual flows placed into a specific VTEP-to-VTEP tunnel travel through the transport network based on the new headers that have been added. Many networks rely on some form of L2 or L3 ECMP to use all available bandwidth between any two points on the network; spine-and-leaf networks are the prime example, depending absolutely on well-functioning ECMP to perform at their best. Without discussing the virtues of ECMP again, tunneled packets need something in the new header that allows a hash calculation to make use of multiple ECMP paths. With pretty much all of the L2 and L3 headers identical (except for the VNI or VSID) for all traffic between two tunnel endpoints, the creators of these encapsulations have been creative in encoding entropy into the new headers, so that hash calculations over these headers can be used to place traffic onto multiple equal cost paths, as sketched below.
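To illustrate the principle, here is a toy ECMP path selection in Python. Real switching hardware uses proprietary hash functions over the same kinds of fields, so treat this strictly as a sketch of the mechanism, not of any vendor's implementation.

```python
import zlib

def ecmp_path(src_ip: str, dst_ip: str, proto: int,
              src_port: int, dst_port: int, n_paths: int) -> int:
    """Toy ECMP selection: hash the outer 5-tuple and pick one of the
    equal cost paths. Identical tuples always map to the same path,
    so packet order within a flow is preserved."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % n_paths

# Same VTEP pair, different entropy values: the flows can take different links.
print(ecmp_path("10.0.0.1", "10.0.0.2", 17, 50123, 4789, n_paths=4))
print(ecmp_path("10.0.0.1", "10.0.0.2", 17, 61847, 4789, n_paths=4))
```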
For VXLAN, this entropy is encoded in the UDP source port field. With only a single UDP VXLAN connection allowed (and necessary) between any two endpoints, the source port is essentially irrelevant and can be used to mark a packet with a hash calculation result that in effect acts as a flow identifier for the inner packet, although not a unique one. The VXLAN spec does not specify exactly how to calculate this hash value, but it is generally assumed that specific portions of the inner packet's L2, L3 and/or L4 headers are used to calculate it. The originating VTEP calculates the hash, puts it in the new UDP header as the source port, and it remains there unmodified until it arrives at the receiving VTEP. Intermediate systems that calculate hashes for L2 or L3 ECMP balancing typically use UDP ports as part of their calculation, and as a result different inner packet flows will be placed onto different ECMP links. As mentioned, intermediate routers or switches that transport the VXLAN packet do not modify the UDP source port; they only use its value in their ECMP calculation.
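Since the spec leaves the hash unspecified, the following is just one plausible derivation: hash parts of the inner headers into the ephemeral port range, so that each inner flow gets a stable but well-spread source port. The function name and inputs are assumptions for illustration.

```python
import zlib

def vxlan_source_port(inner_headers: bytes) -> int:
    """One plausible VTEP-side derivation of the entropy source port:
    hash the inner packet's L2/L3/L4 header fields into the ephemeral
    range 49152-65535. The actual inputs and hash function are
    implementation-specific."""
    return 49152 + (zlib.crc32(inner_headers) % 16384)

# All packets of one inner flow yield the same port; other flows spread out.
print(vxlan_source_port(b"inner-flow-A-headers"))
print(vxlan_source_port(b"inner-flow-B-headers"))
```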
NVGRE is fairly similar. GRE packets have no TCP or UDP header, so network hardware typically has the ability to recognize these packets as GRE and use the 32-bit GRE Key field as an information source in its ECMP calculations. GRE tunnel endpoints encode inner packet flows with individual (but not necessarily unique) key values, and as a result intermediate network systems will calculate different hash results that place these inner packet flows onto multiple ECMP links. NVGRE has taken 24 of these bits to encode the VSID, but has left 8 bits to create this entropy at the tunnel endpoint; that field has been renamed FlowID. The VSID and FlowID combined are used to calculate hashes for ECMP link placement. A possible challenge is that for networks with very many flows inside a VSID between two specific NVGRE endpoints, 8 bits' worth of differentiation may not produce a "normal" ECMP distribution.
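A comparable sketch for NVGRE makes the limitation visible: the endpoint can only spread inner flows across 256 entropy values per VSID. Again, the derivation shown is an assumption, not a specified algorithm.

```python
import zlib

def nvgre_flow_id(inner_headers: bytes) -> int:
    """Sketch of a FlowID derivation: with only 8 bits available there
    are at most 256 distinct entropy values per VSID between two
    endpoints, so very flow-heavy tunnels may hash unevenly."""
    return zlib.crc32(inner_headers) & 0xFF
```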
While the packet formats have been constructed to ensure that the "normal" tools of entropy can be used for ECMP and LAG by existing switching hardware, the latest hardware platforms have the ability to look well beyond the outer headers. Many bits and pieces of the new headers can be examined and decisions made on them. While specific switching ASICs will have slightly different tools, the latest generations have the ability to look at the VNI and VSID even when not acting as a gateway, and packet modification or forwarding decisions can be made based on their value. Inner MAC and IP headers can also be examined and acted on, with a bit more complexity. Switching ASICs are built to have quick access to the most important decision fields; access to less common fields is there, but requires some manual construction by those that program the ASIC (the networking vendors).
When the switching platform is configured as a gateway to provide bridging functions between regular VLANs and the tunneled VXLAN or NVGRE infrastructure, the ASIC has access to the entire original packet, since it actively encapsulates or decapsulates it. That gives the switch decision choices very similar to a vSwitch, but at a smaller scale. More detail on the gateway function and STT next week.
[Added 9/18/2014: For those who wonder who the gentleman in the picture is, his name is Rudolf Clausius (1822-1888), a German physicist and mathematician, generally considered one of the founders of the science of thermodynamics and the originator of the concept of entropy.]