The Rise of Soft Switching Part II: Soft Switching is Awesome ™

[This series is written by Jesse Gross, Andrew Lambeth, Ben Pfaff, and Martin Casado. Ben is an early and continuing contributor to the design and implementation of OpenFlow. He’s also a primary developer of Open vSwitch. Jesse is also a lead developer of Open vSwitch and is responsible for the kernel work and datapath. Andrew has been virtualizing networking for long enough to have coined the term “vswitch”, and led the vDS distributed switching project at VMware. All authors currently work at Nicira.]

This is the second post in our series on soft switching. In the first part (found here), we lightly covered a subset of the technical landscape around networking at the virtual edge. This included tagging and offloading decisions to the access switch, switching inter-VM traffic in the NIC, as well as NIC passthrough to the guest VM.

A brief synopsis of the discussion is as follows. Both tagging and passthrough are designed to save end-host CPU by punting the packet classification problem to specialized forwarding hardware, either on the NIC or in the first-hop switch, and to avoid the overhead of switching out of the guest to the hypervisor to access the hardware. However, tagging adds inter-VM latency and reduces inter-VM bisectional bandwidth. Passthrough also increases inter-VM latency, and effectively de-virtualizes the network, thereby greatly limiting the flexibility provided by the hypervisor. We also mentioned that switching in widely available NICs today is impractical due to severe limitations in the on-board switching chips.

For the purposes of the following discussion, we are going to boil the previous post down to this: the performance arguments in favor of an approach like passthrough + tagging (with enforcement in the first-hop switch) are that latency to the wire is reduced (albeit marginally) and that packet classification in a proper switching chip will noticeably outperform x86.

The goal of this post is to explain why soft switching kicks ass. First, we’ll debunk some of the FUD around its performance, and then we’ll try to quantify the resource/performance tradeoffs of soft switching vis-à-vis hardware-based approaches. As we’ll argue, the question is not “how fast is soft switching?” (it is almost certainly fast enough), but rather “how much CPU am I willing to burn?”, or perhaps “should I heat the room with teal or black colored boxes?”

So with that …

Why soft switching is awesome:

So, what is soft switching? Exactly what it sounds like. Instead of passing the packet off to a special-purpose hardware device, the packet transitions from the guest VM into the hypervisor, which makes the forwarding decision in software (read: x86). Note that while a soft switch can technically be used for tagging, for the purposes of this discussion we’ll assume that it’s doing all of the first-hop switching.

The benefits of this approach are obvious. You get the flexibility and upgrade cycle of software, and compared to passthrough, you keep all of the benefits of virtualization (memory overcommit, page sharing, etc.). Also, soft switching tends to be much better integrated with the virtual environment. There is a tremendous amount of context that can be gleaned by being co-resident with the VMs, such as which MAC and IP addresses are assigned locally, VM resource use and demands, or which multicast addresses are being listened to. This information can be used to pre-populate tables, optimize QoS rules, prune multicast trees, etc.

Another benefit is simple resource efficiency: you already bought the damn server, so if you have excess compute capacity, why buy specialized hardware for something you can do on the end host? Or put another way, after you provision some amount of hardware resources to handle the switching work, any of those resources that are left over are always available to do real work running an app instead of being wasted (which is usually a lot, since you have to provision for peaks).

Of course, nothing comes for free. And there is a perennial skepticism around the performance of software when compared to specialized hardware. So we’ll take some time to focus on that.

First, what are the latency costs of soft switching?

With soft switching, VM to VM communication effectively reduces to a memcpy() (you can also do page flipping which has the same order of overhead). This is as fast as one can expect to achieve on a modern architecture. Copying data between VMs through a shared L2 cache on a multicore CPU, or even if you are unlucky enough to have to go to main memory, is certainly faster than doing a DMA over the PCI bus. So for VM to VM communication, soft switching will have the lowest latency, presuming you can do the lookup function sufficiently quickly (more on that below).

Sending traffic from the guest to the wire is only marginally more expensive, due to the overhead of a domain transfer (e.g. flushing the TLB) and copying security-sensitive information (such as headers). In Xen (for example), guest transmit (DomU-to-Dom0) operates by mapping pages that were allocated by the guest into the hypervisor, which then get DMA’d with no copy required (with the exception of headers, which are copied for security purposes so the guest can’t change them after the hypervisor has made a decision). In the other direction, the guest allocates pages and puts them in its rx ring, similar to real hardware. These then get shared with the hypervisor via remapping. When receiving a packet, the hypervisor copies the data into the guest buffers. (Note: VMware does almost the same thing, except there is no remapping, because the vswitch runs in the vmkernel, which already has all physical pages mapped and has access to the guest MMU mappings.)
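To make the copy-vs-map distinction concrete, here is a toy model in Python. It is purely illustrative: the 64-byte header length and the list-of-bytearrays representation of pages are made-up assumptions, and real implementations deal in ring descriptors, grant tables, and DMA engines rather than Python objects.

```python
# Toy model of the tx/rx paths described above: headers are copied (so the
# guest can't change them after the switching decision), payload pages are
# only shared by reference, and rx data is copied into guest-posted buffers.

HEADER_LEN = 64   # assumption: enough bytes to cover the L2/L3/L4 headers

def guest_transmit(guest_pages):
    """guest_pages: list of bytearrays the guest queued for transmit."""
    # Copy only the headers (assumed to start in the first page) so the
    # forwarding decision can't be invalidated by the guest afterwards.
    header = bytes(guest_pages[0][:HEADER_LEN])
    payload = guest_pages          # shared by reference ("mapped"), not copied
    return header, payload

def hypervisor_receive(wire_data, guest_rx_ring):
    """Copy received bytes into the buffers the guest posted on its rx ring."""
    offset = 0
    for buf in guest_rx_ring:
        n = min(len(buf), len(wire_data) - offset)
        buf[:n] = wire_data[offset:offset + n]
        offset += n
        if offset >= len(wire_data):
            break
    return offset                  # number of bytes delivered to the guest
```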

So, while there is comparatively more overhead than in a pure hardware approach (due to copying headers and the cost of the domain transfer), it is on the order of microseconds and is dwarfed by other aspects of a virtualized system, such as memory overcommit. Or more to the point, this only matters in extremely latency-sensitive environments (otherwise the overhead is completely lost in the noise of other hypervisor overhead), in which case the only deployment approach that makes sense is to effectively pin compute to dedicated hardware, greatly diminishing the utility of using virtualization in the first place.

What about throughput?

Modern soft switches that don’t suck are able to saturate a 10G link from a guest to the wire with less than a core (assuming MTU-sized packets). They are also able to saturate a 1G link with less than 20% of a core. In the case of Open vSwitch, these numbers include a full packet lookup over the L2, L3, and L4 headers.

While these are the numbers commonly seen in practice, throughput is ultimately bounded by the cost of the forwarding decision: more complex lookups take more time per packet and thus reduce total throughput.

The forwarding decision involves taking the header fields of each packet and checking them against the forwarding rule set (L2, L3, ACLs, etc.) to determine how to handle the packet. This general class of problem is termed “packet classification” and is worth taking a closer look at.

Soft Packet Classification:

One of the primary arguments in favor of offloading virtual edge switching to hardware is that a TCAM can do a lookup faster than x86. This is unequivocally true: TCAMs have lots and lots of gates (and are commensurately costly, with high power demands) so that they can check many rules in parallel. In the degenerate case, a general-purpose CPU cannot match the lookup capacity of a TCAM.

However, software packet classification has come a long way. Under the realistic workloads and rule sets found in virtualized environments (e.g. multi-tenant isolation with a sane security policy), soft switching can handle lookups at line rate with the resource usage mentioned above (less than a core for 10G), and so does not add appreciable overhead.

How is this achieved? For Open vSwitch, which looks at many more headers than will fit in a standard TCAM, the common-case lookup reduces to the overhead of a hash (thanks to extensive use of flow caching) and can achieve the same throughput as normal soft forwarding. We have run Open vSwitch with hundreds of thousands of forwarding rules and still achieved performance numbers similar to those described above.
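To illustrate the flow-caching idea, here is a minimal sketch in Python. It is hypothetical and not the actual Open vSwitch code: the first packet of a flow pays for a full classification, and every subsequent packet of that flow hits an exact-match hash table.

```python
# Illustrative flow cache: an exact-match table in front of a slower classifier.

class FlowCache:
    def __init__(self, classifier):
        self.classifier = classifier          # the full (slower) rule lookup
        self.cache = {}                       # exact header tuple -> actions

    def lookup(self, headers):
        """headers: dict of parsed L2/L3/L4 fields for one packet."""
        key = tuple(sorted(headers.items()))  # exact match on every field
        actions = self.cache.get(key)
        if actions is None:
            # Cache miss ("flow setup"): run the full classification once,
            # then reuse the answer for every later packet of this flow.
            actions = self.classifier.classify(headers)
            self.cache[key] = actions
        return actions
```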

Flow setup, on the other hand, is marginally more expensive, since it cannot benefit from the caching. Performance of the packet classifier in Open vSwitch relies on our observation that flow tables used in practice (in the environments we’re familiar with) tend to have only a handful of unique sets of wildcarded fields. Each of these observed wildcard sets gets its own hash table, hashed on the fields that are not wildcarded. Classifying a packet therefore requires an O(1) lookup in each hash table, followed by selecting the highest-priority match. Lookup cost is thus linear in the number of unique wildcard sets in the flow table, and since that number tends to be small, classifier overhead tends to be negligible.
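Here is a rough sketch of that lookup structure, again in illustrative Python rather than the real implementation: one hash table per unique wildcard set, keyed on the non-wildcarded fields, with the highest-priority hit winning.

```python
# Illustrative classifier: one hash table per unique set of matched fields.
# A rule that matches on (in_port, dl_dst) and wildcards everything else
# lives in the table for the field set {"in_port", "dl_dst"}.

class Classifier:
    def __init__(self):
        self.tables = {}   # frozenset of matched fields -> {key: (priority, actions)}

    def add_rule(self, match, priority, actions):
        """match: dict of field -> required value; unlisted fields are wildcarded."""
        fields = frozenset(match)
        key = tuple(sorted(match.items()))
        table = self.tables.setdefault(fields, {})
        existing = table.get(key)
        if existing is None or priority > existing[0]:
            table[key] = (priority, actions)

    def classify(self, headers):
        """headers: dict of field -> value extracted from one packet."""
        best = None
        for fields, table in self.tables.items():          # one probe per wildcard set
            key = tuple(sorted((f, headers.get(f)) for f in fields))
            hit = table.get(key)
            if hit and (best is None or hit[0] > best[0]):
                best = hit
        return best[1] if best else None                   # None = no matching rule
```

Pairing this with the flow cache sketched above, only the first packet of a flow walks every hash table; after that, the cached exact-match entry answers in a single hash lookup.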

We realize that this is all a bit hand-wavy and needs to be backed up with hard performance results. Because soft classification is such an important (and somewhat nuanced) issue, we will dedicate a future post to it.

“Yeah, this is all great. But when is soft switching not a good fit?”

While we would contend that soft switching is good for most deployment environments, there are instances in which passthrough or tagging is useful.

In our experience, the mainstay argument for passthrough is reduced latency to the wire. So while average latency is probably OK, specialized apps with very small request/response-style workloads can be impacted by the latency of soft switching.

Another common use case for passthrough is a local appliance VM that acts as an inline device between normal application VMs and the network. Such an appliance VM has no need of most of the mobility or other hypervisor provided goodness that is sacrificed with passthrough but it does have a need to process traffic with as little overhead as possible.

Passthrough is also useful for providing the guest with access to hardware that is not exposed by the emulated NIC (for example, some NICs have IPsec offload but that is not generally exposed).

Of course, if you do tagging to a physical switch, you get access to all of the advanced features that have been developed over time, all exposed through a CLI that people are familiar with (this is clearly less true with Cisco’s Nexus 1k). In general, this line of argument has more to do with the immaturity of software switches than with any real fundamental limitation. But it’s a reasonable use case from an operations perspective.

The final, and probably most widely used (albeit least talked about), use case for passthrough is drag racing. Hypervisor vendors need to make sure that they can all post the absolute highest, breakneck performance numbers for cross-hypervisor performance comparisons (yup, sleazy), regardless of how much fine print is required to qualify them. Why else would any sane vendor of the most valuable piece of real estate in the network (the last inch) cede it to a NIC vendor? And of course the NIC vendors that are lucky enough to be blessed by a hypervisor into passthrough Valhalla can drag race with each other with all their OS-specific hacks again.

“What am I supposed to conclude from all of this?”

Hopefully we’ve made our opinion clear: soft switching kicks mucho ass. There is good reason that it is far and away the dominant technology used for switching at the virtual edge. To distill the argument even further, our calculus is simple …

Software flexibility + 1 core x86 + 10G networking + cheap gear + other cool shit >
       saving a core of x86 + costly specialized hardware + unlikely to be realized benefit of doing classification in the access switch.

More seriously, while we make the case for soft switching, we still believe there is ample room for hardware acceleration. However, rather than just shipping off packets to a hardware device, we believe that stateless offload in the NIC is a better approach. In the next post in this series, we will describe how we think the hardware ecosystem should evolve to aid the virtual networking problem at the edge.


The Rise of Soft Switching (Part I: Introduction and Background)

[This series is written by Jesse Gross, Andrew Lambeth, Ben Pfaff, and Martin Casado. Ben is an early and continuing contributor to the design and implementation of OpenFlow. He’s also a primary developer of Open vSwitch. Jesse is also a lead developer of Open vSwitch and is responsible for the kernel work and datapath. Andrew has been virtualizing networking for long enough to have coined the term “vswitch”, and led the vDS distributed switching project at VMware. All authors currently work at Nicira.]

How many times have you been involved in the following conversation?

NIC vendor: “You should handle VM networking at the NIC. It has direct access to server memory and can use a real switching chip with a TCAM to do fast lookups. Much faster than an x86. Clearly the NIC is the correct place to do inter-VM networking.”

Switch vendor: “Nonsense! You should handle VM networking at the switch. Just slap a tag on it, shunt it out, and let the real men (and women) do the switching and routing. Not only do we have bigger TCAMs, it’s what we do for a living. You trust the rest of your network to Crisco, so trust your inter-VM traffic too. The switch is the correct place to do inter-VM networking.”

Hypervisor vendor: “You guys are both wrong, networking should be done in software in the hypervisor. Software is flexible, efficient, and the hypervisor already has rich knowledge of VM properties such as addressing and motion events. The hypervisor is the correct place to do inter-VM networking.”

I doubt anyone familiar with these arguments is fooled into thinking they are anything other than what they are: poorly cloaked marketing pitches masquerading as a technical discussion. This topic in particular is the focus of a lot of marketing noise. Why? Probably because of its strategic importance. The access layer of the network hasn’t been up for grabs in over a decade, and with virtualization there is an opportunity to shift control of the network from the core to the edge. This has to be making the traditional hardware vendors pretty nervous.

Fortunately, while the market blather paints this as a nuanced issue, we believe the technical discussion is actually fairly straightforward.

And that is the goal of this series of posts: to explore the technical implications of doing virtual edge networking in software vs. various configurations of hardware (passthrough, tagging, switching in the NIC, etc.). The current plan is to split the series into three parts. First, we’ll provide an overview of the proposed solution space. In the next post, we’ll focus on soft switching in particular, and finally we’ll describe how we would like to see the hardware ecosystem evolve to better support networking at the virtual edge.

Not to spoil the journey, but the high-order take-away of this series is that for almost any deployment environment, soft switching is the right approach. And by “right” we mean flexible, powerful, economic, and fast. Really fast. Modern vswitches (that don’t suck) can handle 1G at less than 20% of a core, and 10G at about a core. We’re going to discuss this at length in the next post.

Before we get started, it’s probably worth calling out the authors’ bias. We’re all software guys, so we have a natural preference for soft solutions. That said, we’re also all involved in Open vSwitch, which is built to support both hardware (NIC and first-hop switch) and software forwarding models. In fact, some of the more important deployments we’re involved in use Open vSwitch to drive switching hardware directly.

Alright, on to the focus of this post: background.

At its most basic, networking at the virtual edge simply refers to how forwarding and policy decisions are applied to VM traffic. Since VMs may be co-resident on a server, this means either doing the networking in software, or sending traffic to hardware to make the decision and then back (or a hybrid of the two).

We’ll cover some of the more commonly discussed proposals for doing this here:

Tagging + Hairpinning: Tagging with hairpinning is a method for getting inter-VM traffic off of a server so that first-hop forwarding decisions can be enforced by the hardware access switch. Predictably, this approach is strongly backed (and used) by HP ProCurve and Cisco.

The idea is simple. When a VM sends a packet, it gets a tag unique to the sending VM, and then the packet is sent to the first hop switch. The switch does forwarding/policy lookup on the packet, and if the next hop is another VM resident on the same server, the packet is sent back to the server (hairpinned) to be delivered to the VM. The tagging can be done by the vswitch in the hypervisor, or by the NIC when using passthrough (discussed below).
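If it helps to see the flow end to end, here is a toy rendering in Python. The VM names, tags, and topology are invented for illustration: the hypervisor only tags and forwards to the uplink, and the access switch decides whether the frame goes back down the same link.

```python
# Toy sketch of tagging + hairpinning; all names and tags are made up.
# The edge does no switching: it stamps a per-VM tag and pushes every frame
# to the first-hop switch, which makes the forwarding/policy decision and
# may send the frame straight back down the link it arrived on ("hairpin").

vm_tags = {"vm-a": 101, "vm-b": 102}                   # tag per sending VM
tag_to_vm = {tag: vm for vm, tag in vm_tags.items()}
vm_location = {"vm-a": "server1", "vm-b": "server1", "vm-c": "server2"}

def hypervisor_transmit(src_vm, frame):
    """Edge side: tag the frame and hand it to the uplink, nothing else."""
    return {"tag": vm_tags[src_vm], "frame": frame}

def access_switch_forward(tagged_frame, dst_vm, ingress_server):
    """Switch side: identify the source from the tag, then forward."""
    src_vm = tag_to_vm[tagged_frame["tag"]]            # basis for policy checks
    egress_server = vm_location[dst_vm]
    hairpinned = egress_server == ingress_server       # back down the same link
    return src_vm, egress_server, hairpinned

# vm-a -> vm-b on the same server still crosses the wire twice:
frame = hypervisor_transmit("vm-a", b"some payload")
print(access_switch_forward(frame, "vm-b", "server1"))  # ('vm-a', 'server1', True)
```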

The rationale for this approach is that special-purpose switching hardware can perform packet classification (forwarding and policy decisions) faster than software. In the next post, we’ll discuss whether this is an appreciable win over software classification at the edge (hint: not really).

The very obvious downsides to hairpinning are twofold. First, the bisectional bandwidth for inter-VM communication is limited by the first-hop link. Second, it consumes bandwidth that could otherwise be used exclusively for communication between VMs and the outside world.

Perhaps not so obvious is that paying DMA and transmission costs for inter-VM communication increases latency. Also, by shunting the switching decision off to a remote device, you’re throwing away a goldmine of rich contextual information about the state of the VMs doing the sending and receiving, as well as (potentially) the applications within those VMs.

Regarding the tags themselves: with VNTag, Cisco proposes a new field in the Ethernet header (thereby changing the Ethernet framing format, and in turn requiring new hardware, …). HP, a primary driver behind VEPA, decided instead to use the VLAN tag (and/or source MAC), which any modern switching chipset can handle. In either case, a tag is just bits, so whether it is a new tag (requiring a new ASIC) or the reuse of an existing one (VLAN and MPLS are commonly suggested candidates), the function is the same.

Blind Trust MAC Authentication: Another approach for networking at the virtual edge is to rely on MAC addresses to identify VMs. It appears that some of the products that do this are repurposed NAC solutions being pushed into the virtualization market. Yuck.

In any case, assuming there is no point of presence on the hypervisor, there are a ton of problems with this approach. Such products rarely provide any means to manage policy for inter-VM traffic, they can easily be fooled through source spoofing, and they often rely on hypervisor-specific implementation tricks to detect move events (like watching for VMware RARPs). Double yuck. This is the last we’ll mention of this approach, as it really isn’t a valid solution to the problem.

Switching in the NIC: Another common proposal is to do all inter-VM networking in the NIC. The rationale is twofold: first, if passthrough is being used (discussed further below), the hypervisor is bypassed, so something else needs to do the classification; and second, packet classification is faster in hardware than in software.

However, switching within the NIC hasn’t caught on (and we don’t believe it will to any significant degree). Using passthrough negates many of the advantages of virtualization (as we describe below), and DMA’ing to the NIC is not free. Further, the switching chipsets on NICs to date have not been nearly as powerful as those used in standard switching gear. The only clear advantage to doing switching in the NIC is the avoidance of hairpinning (and perhaps shared memory with the CPU via QPI).

[note: the original version of this article conflated SR-IOV and passthrough, which is incorrect. While it is often used in conjunction with passthrough, SR-IOV itself is not a passthrough technology]

Where does Passthrough fit in all of this? Passthrough is a method of bypassing the hypervisor so that the packet is DMA’d from the guest directly to the NIC for forwarding (and vice versa). This can be used in conjunction with NIC-based switching as well as with tagging and enforcement in the access switch. Passthrough basically de-virtualizes the network by removing the layer of software indirection provided by the hypervisor.

While the argument for passthrough is one of saving CPU, doing so is anathema to many hypervisor developers (and their customers!) due to the loss of the software interposition layer. With passthrough, you generally lose the following: memory overcommit, page sharing, live migration, fault tolerance (live standby), live snapshots, the ability to interpose on the IO with the flexibility of software on x86, and device driver independence.

Regarding this last issue (hardware decoupling), with passthrough, if you buy new hardware, prepare to enjoy hours of upgrading/changing device drivers in hundreds of VMs. Or keep your old HW and enjoy updating the drivers anyway because NICs have hardware errata. Also enjoy lots of fun trying to restore from a disk snapshot or deploy from a template that has the old/wrong device driver in it.

To be fair, VMware and others have been investing large amounts of engineering resources into addressing this by performing unnatural acts like injecting device driver code into guests. But to date, the solutions appear to be intrusive and proprietary workarounds limited to a small subset of the available hardware. More general solutions, such as NPA, have no declared release vehicle, and from the Linux kernel mailing list appear to have died (or perhaps been put on hold).

This all said, there actually are some reasonable (albeit fringe) use cases for passthrough, such as throughput-sensitive network appliances or messaging applications that care about single-packet latency. We’ll speak to these more in a later post.

Soft Switching: Soft switching is where networking at the virtual edge is done in software within the hypervisor vswitch, without offloading the decision to special-purpose forwarding hardware. This is far and away the most popular approach in use today, and the singular focus of our next post in this series, so we’ll leave it at that for now.

Is that all?

No. This isn’t an exhaustive overview of all available approaches (far from it). Rather, it is an opinion-soaked survey of what has the most industry noise. In the next post, we will do a deep dive into soft switching, focusing in particular on the performance implications (latency, throughput, CPU overhead) in comparison to the approaches mentioned above.

Until then …


An Extremely Brief Conceptual Introduction to Open vSwitch

Overview:

Open vSwitch is one of my favorite open source projects. For those of you who aren’t familiar with it, it’s a switch stack which can be run both as a soft switch (vswitch) within a virtualized environment and as the control stack for hardware switches. Good stuff.

However, the real kung-fu is that Open vSwitch is built for programmatic state distribution. It does this through two interfaces: OpenFlow (with a ton of extensions) for managing the forwarding behavior of the fast path, and a JSON-RPC based config protocol used for less time-critical configuration (tunnels, QoS, NetFlow, etc.). You can view the schema for the config protocol here.
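To give a feel for the config side, here is roughly what a request in that JSON-RPC protocol looks like. This is an illustrative sketch (just a read of the Bridge table), not a complete client; a real client would send this over the ovsdb-server socket and handle the reply.

```python
import json

# Rough shape of a "transact" call against the Open vSwitch configuration
# database, here simply selecting every row of the Bridge table.
request = {
    "method": "transact",
    "params": [
        "Open_vSwitch",                                    # the config database
        {"op": "select", "table": "Bridge", "where": []},  # one read-only operation
    ],
    "id": 0,
}

print(json.dumps(request, indent=2))
```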

Use:

So, what might you do with something like Open vSwitch? Well, the idea is to enable the creation of automated network infrastructure for virtualized environments.

For example, you could use a centralized control system to manage network policies that migrate with VMs. This has already been done within the XenServer environment with the Citrix Distributed vSwitch Controller.

Open vSwitch also supports programmatic control of address remapping (L2 and L3 a la OpenFlow), and programmatic control of tunnels as well as multiple tunnel types (e.g. GRE, IPsec, and CAPWAP). These come in handy for various network virtualization functions such as supporting mobility or an L2 service model across subnets.
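To make “programmatic control” slightly less abstract, the rules a controller pushes down boil down to match/action pairs along the following lines. This is a conceptual Python rendering with made-up field and port names, not actual OpenFlow wire format or ovs-ofctl syntax.

```python
# Conceptual match/action rules of the kind a controller might push down.

# L3 address remapping: rewrite the destination IP and forward out a port.
l3_remap_rule = {
    "match":   {"ip_dst": "10.0.0.5"},
    "actions": [("set_ip_dst", "192.168.0.5"), ("output", 2)],
}

# Mobility / L2 service across subnets: steer traffic for a remote VM's MAC
# into a GRE tunnel port (the tunnel itself created via the config protocol).
tunnel_rule = {
    "match":   {"eth_dst": "52:54:00:12:34:56"},
    "actions": [("output", "gre_to_host_b")],
}
```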

Open vSwitch is already used in a bunch of production environments. It is most commonly used as a vswitch in large cloud deployments (many thousands of servers) for automated VLAN, policy, and tunnel management. However, I know of a number of deployments which use it as a simple OpenFlow switch, or a more sophisticated programmatic switch to control hardware environments.

Performance:

Open vSwitch is fast. Damn fast. Some performance tests have shown it to be faster than the native Linux bridge. Open vSwitch uses flow-caching (when running in software), so even under complex configurations the common case should be blazingly fast. Open vSwitch also has highly optimized tunneling implementations.

Compatibility:

Open vSwitch is primarily developed and deployed on Linux (though there are ports to other OSes, particularly those used in embedded environments). It is commonly used with both Xen and KVM (there are production environments using both). Further, it has been integrated into a number of cloud management systems including Xen Cloud Platform, OpenQRM, and OpenNebula (along with a bunch of proprietary CMSes). It’s currently being integrated into OpenStack; you can track the progress here.

Why Do I Care?

Mostly because with Open vSwitch, as with other distributed switch solutions, it’s possible to build really sophisticated networks with not-so-sophisticated hardware. For example, an L3 network from your neighborhood OEM or other low-cost hardware vendor (check out, for example, Pronto), plus Open vSwitch and a bit of programming, can equate to a cheap, bad-ass network for virtual deployments. But that, my friends, is a topic for another post.