Of Mice and Elephants

[This post has been written by Martin Casado and Justin Pettit with hugely useful input from Bruce Davie, Teemu Koponen, Brad Hedlund, Scott Lowe, and T. Sridhar]

Overview

This post introduces the topic of network optimization via large flow (elephant) detection and handling.  We decompose the problem into three parts: (i) why large (elephant) flows are an important consideration, (ii) smart things we can do with them in the network, and (iii) detecting elephant flows and signaling their presence.  For (i), we explain the basics of elephants and mice and why this matters for traffic optimization. For (ii), we present a number of approaches for handling elephant flows in the physical fabric, several of which we’re working on with hardware partners.  These include using separate queues for elephants and mice (small flows), using a dedicated network for elephants such as an optical fast path, doing intelligent routing for elephants within the physical network, and turning elephants into mice at the edge. For (iii), we show that elephant detection can be done relatively easily in the vSwitch.  In fact, Open vSwitch has supported per-flow tracking for years. We describe how easy it is to identify elephant flows at the vSwitch and, in turn, provide proper signaling to the physical network using standard mechanisms.  We also show that it’s quite possible to handle elephants using industry-standard hardware based on chips that exist today.

Finally, we argue that it is important that this interface remain standard and decoupled from the physical network because the accuracy of elephant detection can be greatly improved through edge semantics such as application awareness and a priori knowledge of the workloads being used.

The Problem with Elephants in a Field of Mice

Conventional wisdom (somewhat substantiated by research) is that the majority of flows within the datacenter are short (mice), yet the majority of packets belong to a few long-lived flows (elephants).  Mice are often associated with bursty, latency-sensitive apps whereas elephants tend to be large transfers in which throughput is far more important than latency.

Here’s why this is important.  Long-lived TCP flows tend to fill network buffers end-to-end, and this introduces non-trivial queuing delay for anything that shares those buffers.  In a network of elephants and mice, this means the latency-sensitive mice suffer. A second-order problem is that mice are generally very bursty, so adaptive routing techniques aren’t effective with them.  Therefore, routing in data centers often uses stateless, hash-based multipathing such as Equal-cost multi-path routing (ECMP).  Even for very bursty traffic, it has been shown that this approach is within a factor of two of optimal, independent of the traffic matrix.  However, using the same approach for a very small number of elephants can cause suboptimal network usage, such as hashing several elephants onto the same link while another link sits free.  This is a direct consequence of the law of small numbers and the size of the elephants.

Treating Elephants Differently than Mice 

Most proposals for dealing with this problem involve identifying the elephants, and handling them differently than the mice.  Here are a few approaches that are either used today, or have been proposed:

  1.  Throw mice and elephants into different queues.  This doesn’t solve the problem of hashing long-lived flows to the same link, but it does alleviate the queuing impact on mice by the elephants.  Fortunately, this can be done easily on standard hardware today with DSCP bits.
  2. Use different routing approaches for mice and elephants.  Even though mice are too bursty to do something smart with, elephants are by definition longer lived and are likely far less bursty.  Therefore, the physical fabric could adaptively route the elephants while still using standard hash-based multipathing for the mice.
  3. Turn elephants into mice.  The basic idea here is to split an elephant up into a bunch of mice (for example, by using more than one ephemeral source port for the flow) and let end-to-end mechanisms deal with possible re-ordering.  This approach has the nice property that the fabric remains simple and uses a single queuing and routing mechanism for all traffic.  Also, SACK in modern TCP stacks handles reordering much better than traditional stacks did.  One way to implement this in an overlay network is to modify the ephemeral port of the outer header to create the entropy needed by the multipathing hardware (a minimal sketch follows this list).
  4. Send elephants along a separate physical network.  This is an extreme case of 2.  One method of implementing this is to have two spines in a leaf/spine architecture and have the top-of-rack switch direct each flow to the appropriate spine.  Often an optical switch is proposed for the elephant spine.  One way to accomplish this is to make a policy-based routing decision using a DSCP value that by convention denotes “elephant”.
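To make approach 3 a bit more concrete, here is a minimal Python sketch of how an overlay edge might vary the outer tunnel source port for flows it has marked as elephants. The constants, chunk size, and port range are our own illustrative assumptions, not part of any product or standard.

    import hashlib

    # Illustrative only: derive the outer (tunnel) source port from the inner
    # five-tuple so that ECMP hashing in the fabric spreads flows across paths.
    # For flows the edge has marked as elephants, a "chunk" counter is mixed in,
    # so successive chunks of the same elephant hash onto different paths --
    # effectively turning one elephant into several mice. End-to-end mechanisms
    # (e.g. SACK) are assumed to absorb any resulting reordering.

    EPHEMERAL_BASE = 49152
    EPHEMERAL_RANGE = 16384            # ports 49152-65535
    CHUNK_BYTES = 256 * 1024           # re-roll the path every 256 KB (assumed)

    def outer_src_port(five_tuple, bytes_sent=0, is_elephant=False):
        """Pick the outer source port for an encapsulated packet."""
        chunk = (bytes_sent // CHUNK_BYTES) if is_elephant else 0
        key = repr(five_tuple).encode() + chunk.to_bytes(8, "big")
        digest = hashlib.sha1(key).digest()
        return EPHEMERAL_BASE + int.from_bytes(digest[:2], "big") % EPHEMERAL_RANGE

    flow = ("10.0.0.1", "10.0.0.2", 6, 33412, 5001)
    print(outer_src_port(flow))                                      # stable for a mouse
    print(outer_src_port(flow, bytes_sent=10**7, is_elephant=True))  # varies per chunk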

Elephant Detection

At this point it should be clear that handling elephants requires detecting elephants.  It should also be clear that we’ve danced around the question of what exactly characterizes an elephant.  Working backwards from the problem of introducing queuing delays on smaller, latency-sensitive flows, it’s fair to say that an elephant is a flow with high throughput for a sustained period.

Often elephants can be determined a priori, without actually trying to infer them from network effects.  In a number of the networks we work with, the elephants are related to cloning, backup, or VM migrations, all of which can be inferred from the edge or are known to the operators.  vSphere, for example, knows when a flow belongs to a migration.  And in Google’s published work on using OpenFlow, the flows handled by their traffic engineering engine were identified beforehand (reference here).

Dynamic detection is a bit trickier.  Doing it from within the network is hard due to the difficulty of flow tracking in high-density switching ASICs.  A number of sampling methods have been proposed, such as sampling the buffers or using sFlow.  However, the accuracy of such approaches isn’t clear, due to the limitations of sampling at high speeds.

On the other hand, for virtualized environments (a primary concern of ours, given that the authors work at VMware), it is relatively simple to do flow tracking within the vSwitch.  Open vSwitch, for example, has supported per-flow granularity for several releases now, with each flow record containing the bytes and packets sent.  Given a specified threshold, it is trivial for the vSwitch to mark certain flows as elephants.
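As a rough illustration of how simple the vSwitch-side check can be, here is a toy Python sketch of threshold-based elephant detection over per-flow byte counters. The thresholds and the flow-record shape are assumptions made for the example; this is not the actual Open vSwitch datapath interface.

    import time

    # Toy flow-stats tracker: given per-flow byte counters of the kind a vSwitch
    # maintains, flag a flow as an elephant once it has both moved more than
    # BYTE_THRESHOLD bytes and sustained a rate above RATE_THRESHOLD. Both
    # thresholds are illustrative and would be tuned per deployment.

    BYTE_THRESHOLD = 10 * 1024 * 1024      # 10 MB total
    RATE_THRESHOLD = 1 * 1024 * 1024       # ~1 MB/s sustained

    class ElephantDetector:
        def __init__(self):
            self.last = {}                 # flow -> (byte_count, timestamp)
            self.elephants = set()

        def update(self, flow, byte_count):
            now = time.time()
            prev_bytes, prev_ts = self.last.get(flow, (0, now))
            rate = (byte_count - prev_bytes) / max(now - prev_ts, 1e-3)
            self.last[flow] = (byte_count, now)
            if byte_count > BYTE_THRESHOLD and rate > RATE_THRESHOLD:
                self.elephants.add(flow)   # at this point, signal the fabric
            return flow in self.elephants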

The More Vantage Points, the Better

It’s important to remember that there is no reason to limit elephant detection to a single approach.  If you know that a flow is large a priori, great.  If you can detect elephants in the network by sampling buffers, great.  If you can use the vSwitch to do per-packet flow tracking without requiring any sampling heuristics, great.  In the end, if multiple methods identify it as an elephant, it’s still an elephant.

For this reason, we feel it is very important that the identification of elephants be decoupled from the physical hardware and signaled over a standard interface.  The user, the policy engine, the application, the hypervisor, a third-party network monitoring system, and the physical network should all be able to identify elephants.

Fortunately, this can be done relatively simply using standard interfaces.  For example, to affect per-packet handling of elephants, marking the DSCP bits is sufficient, and the physical infrastructure can be configured to respond appropriately.
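For example, here is a sketch of what that signaling could look like using Open vSwitch’s standard OpenFlow tooling: install a rule that rewrites the DSCP bits for a flow the edge has classified as an elephant. The bridge name, priority, and DSCP codepoint below are placeholders we chose for illustration.

    import subprocess

    # Illustrative: once a flow is classified as an elephant, add a higher-
    # priority OpenFlow rule that rewrites its DSCP bits, so the physical
    # fabric can queue or route it differently. Error handling is omitted.

    ELEPHANT_TOS = 0x20   # DSCP 8 shifted into the ToS byte -- arbitrary choice

    def mark_elephant(bridge, src_ip, dst_ip, src_port, dst_port):
        match = (f"tcp,nw_src={src_ip},nw_dst={dst_ip},"
                 f"tp_src={src_port},tp_dst={dst_port}")
        rule = f"priority=100,{match},actions=mod_nw_tos:{ELEPHANT_TOS},NORMAL"
        subprocess.run(["ovs-ofctl", "add-flow", bridge, rule], check=True)

    # mark_elephant("br-int", "10.0.0.1", "10.0.0.2", 33412, 5001)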

Another approach we’re exploring takes a more global view.  The idea is for each vSwitch to expose its elephants along with throughput metrics and duration.  With that information, an SDN controller for the virtual edge can identify the heaviest hitters network-wide and then signal them to the physical network for special handling.  Currently, we’re looking at exposing this information within an OVSDB column.
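A hypothetical sketch of the controller side of this idea: each vSwitch reports its detected elephants with throughput and duration, and the controller picks the network-wide heaviest hitters to signal to the fabric. The report format and scoring below are invented for illustration; this is not an NSX or OVSDB API.

    import heapq

    # Toy network-wide aggregation of per-vSwitch elephant reports.
    # Each report: {"vswitch": ..., "flow": ..., "bps": ..., "duration": ...}

    def heaviest_hitters(reports, k=10):
        # Score by sustained throughput, capping the duration contribution so a
        # single very old flow does not dominate. The scoring is arbitrary.
        return heapq.nlargest(k, reports,
                              key=lambda r: r["bps"] * min(r["duration"], 60.0))

    reports = [
        {"vswitch": "hv-01", "flow": ("10.0.0.1", "10.0.0.9"), "bps": 9.0e8, "duration": 120.0},
        {"vswitch": "hv-07", "flow": ("10.0.1.4", "10.0.2.2"), "bps": 3.0e7, "duration": 5.0},
    ]
    for r in heaviest_hitters(reports, k=1):
        print("signal to fabric:", r["vswitch"], r["flow"])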

Are Elephants Obfuscated by the Overlay?

No.  For modern overlays, flow-level information and QoS markings are available in the outer header and are directly visible to the underlay physical fabric.  Elephant identification can exploit this characteristic.

Going Forward

This is a very exciting area for us.  We believe there is a lot of room to bring edge understanding of workloads, and the ability of software at the edge to do sophisticated trend analysis, to bear on the problem of elephant detection and handling.  It’s early days yet, but our initial forays, both with customers and hardware partners, have been very encouraging.

More to come.


Network Virtualization Gets Physical

Network virtualization, as others have noted, is now well past the hype stage and in serious production deployments. One factor that has facilitated the adoption of network virtualization is the ease with which it can be incrementally deployed. In a typical data center, the necessary infrastructure is already in place. Servers are interconnected by a physical network that already meets the basic requirements for network virtualization: providing IP connectivity between the physical servers. And the servers are themselves virtualized, providing the ideal insertion point for network virtualization: the vswitch (virtual switch). Because the vswitch is the first hop in the data path for every packet that enters or leaves a VM, it’s the natural place to implement the data plane for network virtualization. This is the approach taken by VMware (and by Nicira before we were part of VMware) to enable network virtualization, and it forms the basis for our current deployments.

In typical data centers, however, not every machine is virtualized. “Bare metal” servers — that is, unvirtualized, or physical machines — are a fact of life in most real data centers. Sometimes they are present because they run software that is not easily virtualized, or because of performance concerns (e.g. highly latency-sensitive applications), or just because there are users of the data center who haven’t felt the need to virtualize. How do we accommodate these bare-metal workloads in virtualized networks if there is no vswitch sitting inside the machine?

Our solution to this issue was to develop gateway capabilities that allow physical devices to be connected to virtual networks. One class of gateway that we’ve been using for a while is a software appliance. It runs on standard x86 hardware and contains an instance of Open vSwitch. Under the control of the NSX controller, it maps physical ports, and VLANs on those ports, to logical networks, so that any physical device can participate in a given logical network, communicating with the VMs that are also connected to that logical network. This is illustrated below.

NSX Gateway

An NSX gateway connects physical devices to virtual networks

As an aside, these gateways also address another common use case: traffic that enters and leaves the data center, or “north-south” traffic. The basic functionality is similar enough: the gateway maps traffic from a physical port (in this case, a port connected to a WAN router rather than a server) to logical networks, and vice versa.

Software gateways are a great solution for moderate amounts of physical-to-virtual traffic, but there are inevitably some scenarios where the volume of traffic is too high for a single x86-based appliance, or even a handful of them. Say you had a rack (or more) full of bare-metal database servers and you wanted to connect them to logical networks containing VMs running application and web tiers for multi-tier applications. Ideally you’d like a high-density and high-throughput device that could bridge the traffic to and from the physical servers into the logical networks. This is where hardware gateways enter the picture.

Leveraging VXLAN-capable Switches

Fortunately, there is an emerging class of hardware switch that is readily adaptable to this gateway use case. Switches from several vendors are now becoming available with the ability to terminate VXLAN tunnels. (We’ll call these switches VTEPs — VXLAN Tunnel End Points or, more precisely, hardware VTEPs.) VXLAN tunnel termination addresses the data plane aspects of mapping traffic from the physical world to the virtual. However, there is also a need for a control plane mechanism by which the NSX controller can tell the VTEP everything it needs to know to connect its physical ports to virtual networks. Broadly speaking, this means:

  • providing the VTEP with information about the VXLAN tunnels that instantiate a particular logical network (such as the Virtual Network Identifier and destination IP addresses of the tunnels);
  • providing mappings between the MAC addresses of VMs and specific VXLAN tunnels (so the VTEP knows how to forward packets to a given VM);
  • instructing the VTEP as to which physical ports should be connected to which logical networks.

In return, the VTEP needs to tell the NSX controller what it knows about the physical world — specifically, the physical MAC addresses of devices that it can reach on its physical ports.

There may be other information to be exchanged between the controller and the VTEP to offer more capabilities, but this covers the basics. This information exchange can be viewed as the synchronization of two copies of a database, one of which resides in the controller and one of which is in the VTEP. The NSX controller already implements a database access protocol, OVSDB, for the purposes of configuring and monitoring Open vSwitch instances. We decided to leverage this existing protocol for control of third party VTEPs as well. We designed a new database schema to convey the information outlined above; the OVSDB protocol and the database code are unchanged. That choice has proven very helpful to our hardware partners, as they have been able to leverage the open source implementation of the OVSDB server and client libraries.
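To make the shape of that exchange concrete, here is a much-simplified Python model of the shared state, with the controller writing one half and the VTEP writing the other. The table and field names are simplified stand-ins of our own, not the actual hardware VTEP OVSDB schema.

    # Simplified model of the controller<->VTEP information exchange described
    # above, treated as one shared database that both sides write to.

    class VtepDatabase:
        def __init__(self):
            self.logical_switches = {}  # logical switch name -> VNI
            self.remote_macs = {}       # (logical switch, VM MAC) -> remote tunnel IP
            self.port_bindings = {}     # (physical port, VLAN) -> logical switch
            self.local_macs = {}        # (logical switch, physical MAC) -> physical port

    db = VtepDatabase()

    # Written by the controller:
    db.logical_switches["ls-web"] = 5001                            # tunnel key (VNI)
    db.remote_macs[("ls-web", "00:11:22:33:44:55")] = "10.1.1.7"    # VM MAC -> hypervisor VTEP
    db.port_bindings[("eth7", 100)] = "ls-web"                      # port/VLAN -> logical network

    # Written by the hardware VTEP as it learns MACs on its physical ports:
    db.local_macs[("ls-web", "aa:bb:cc:dd:ee:01")] = "eth7"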

The upshot of this work is that we can now build virtual networks that connect relatively large numbers of physical ports to virtual ports, using essentially the same operational model for any type of port, virtual or physical. The NSX controller exposes a northbound API by which physical ports can be attached to logical switches. Virtual ports of VMs are attached in much the same way to build logical networks that span the physical and virtual worlds. The figure below illustrates the approach.

Hardware VTEP

A hardware VTEP controlled by an NSX controller

It’s worth noting that this approach has no need for IP multicast in the physical network, and limits the use of flooding within the overlay network. This contrasts with some early VXLAN implementations (and the original VXLAN Internet Draft, which didn’t entirely decouple the data plane from the control plane). The reason we are able to avoid flooding in many cases is that the NSX controller knows the location of all the VMs that it has attached to logical networks — this information is provided by the vswitches to the controller. And the controller shares its knowledge with the hardware VTEPs via OVSDB. Hence, any traffic destined for a VM can be placed on the right VXLAN tunnel from the outset.

In the virtual-to-physical direction, it’s only necessary to flood a packet if there is more than one hardware VTEP. (If there is only one hardware VTEP, we can assume that any unknown destination must be a physical device attached to the VTEP, since we know where all the VMs are). In this case, we use the NSX Service Node to replicate the packet to all the hardware VTEPs (but not to any of the vswitches). Furthermore, once a given hardware VTEP learns about a physical MAC address on one of its ports, it writes that information to the database, so there will be no need to flood subsequent packets. The net result is that the amount of flooded traffic is quite limited.
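Here is a minimal sketch of that forwarding decision, with invented names and addresses, just to make the logic explicit: known VM MACs go straight onto the right tunnel, and unknown destinations are replicated only toward hardware VTEPs, and only when there is more than one of them.

    # Toy virtual-to-physical forwarding decision following the logic above.

    def forward(dst_mac, vm_locations, hw_vteps, service_node):
        if dst_mac in vm_locations:
            # The controller already told us where every VM lives.
            return [("tunnel", vm_locations[dst_mac])]
        if len(hw_vteps) == 1:
            # Unknown destination must be a physical device behind the lone VTEP.
            return [("tunnel", hw_vteps[0])]
        # Otherwise, replicate via the service node to all hardware VTEPs only.
        return [("replicate-via", service_node, list(hw_vteps))]

    print(forward("aa:bb:cc:00:00:01",
                  vm_locations={"aa:bb:cc:00:00:02": "10.1.1.7"},
                  hw_vteps=["10.2.2.1", "10.2.2.2"],
                  service_node="10.3.3.1"))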

For a more detailed discussion of the role of VXLAN in the broader landscape of network virtualization, take a look at our earlier post on the topic.

Hardware or Software?

Of course, there are trade-offs between hardware and software gateways. In the NSX software gateway, there is quite a rich set of features that can be enabled (port security, port mirroring, QoS features, etc.) and the set will continue to grow over time. Similarly, the feature set that the hardware VTEPs support, and which NSX can control, will evolve over time. One of the challenges here is that hardware cycles are relatively long. On top of that, we’d like to provide a consistent model across many different hardware platforms, but there are inevitably differences across the various VTEP implementations. For example, we’re currently working with a number of different top-of-rack switch vendors. Some partners use their own custom ASICs for forwarding, while others use merchant silicon from the leading switch silicon vendors. Hence, there is quite a bit of diversity in the features that the hardware can support.

Our expectation is that software gateways will provide the greatest flexibility, feature-richness, and speed of evolution. Hardware VTEPs will be the preferred choice for deployments requiring greater total throughput and density.

A final note: one of the important benefits of network virtualization is that it decouples networking functions from the physical networking hardware. At first glance, it might seem that the use of hardware VTEPs is a step backwards, since we’re depending on specific hardware capabilities to interconnect the physical and virtual worlds. But by developing an approach that works across a wide variety of hardware devices and vendors, we’ve actually managed to achieve a good level of decoupling between the functionality that NSX provides and the hardware that underpins it. And as long as there are significant numbers of physical workloads that need to be managed in the data center, it will be attractive to have a range of options for connecting those workloads to virtual networks.


Visibility, Debugging and Network Virtualization (Part 1)

[This post was written by Martin Casado and Amar Padmanahban, with helpful input from Scott Lowe, Bruce Davie, and T. Sridhar]

This is the first in a multi-part discussion on visibility and debugging in networks that provide network virtualization, and specifically in the case where virtualization is implemented using edge overlays.

In this post, we’re primarily going to cover some background, including current challenges to visibility and debugging in virtual data centers, and how the abstractions provided by virtual networking provide a foundation for addressing them.

The macro point is that much of the difficulty in visibility and troubleshooting in today’s environments is due to the lack of consistent abstractions that both provide an aggregate view of distributed state and hide unnecessary complexity. And that network virtualization not only provides virtual abstractions that can be used to directly address many of the most pressing issues, but also provides a global view that can greatly aid in troubleshooting and debugging the physical network as well.

A Messy State of Affairs

While it’s common to blame server virtualization for complicating network visibility and troubleshooting, this isn’t entirely accurate. It is quite possible to build a static virtual datacenter and, assuming the vSwitch provides sufficient visibility and control (which vSwitches have for years), the properties are very similar to those of physical networks. Even if VM mobility is allowed, simple distributed switching will keep counters and ACLs consistent.

A more defensible position is that server virtualization encourages behavior that greatly complicates visibility and debugging of networks. This is primarily seen as server virtualization gives way to full datacenter virtualization and, as a result, various forms of network virtualization are creeping in. However, this is often implemented as a collection of disparate (and distributed) mechanisms, without exposing simplified, unified abstractions for monitoring and debugging. And the result of introducing a new layer of indirection without the proper abstractions is, as one would expect, chaos. Our point here is not that network virtualization creates this chaos – as we’ll show below, the reverse can be true, provided one pays attention to the creation of useful abstractions as part of the network virtualization solution.

Let’s consider some of the common visibility issues that can arise. Network virtualization is generally implemented with a tag (for segmentation) or tunneling (introducing a new address space), and this can hide traffic, confuse analysis of end-to-end reachability, and cause double counting (or undercounting) of bytes or even packets. Further, the edge understanding of the tag may change over time, and any network traces collected would therefore become stale unless also updated. Often, logically grouped VMs, like those of a single application or tenant, are scattered throughout the datacenter (not necessarily on the same VLAN), and there isn’t any network-visible identifier that signifies the grouping. For example, it can be hard to say something like “mirror all traffic associated with tenant A” or “how many bytes has tenant A sent”. Similarly, ACLs and other state affecting reachability are distributed across multiple locations (source, destination, vswitches, pswitches, etc.) and can be difficult to analyze in aggregate. Overlapping address spaces, and dynamically assigned IP addresses, preclude any simplistic IP-based monitoring schemes. And of course, dynamic provisioning, random VM placement, and VM mobility can all make matters worse.

Yes, there are solutions to many of these issues, but in aggregate, they can present a real hurdle to smooth operations, billing and troubleshooting in public and private data centers. Fortunately, it doesn’t have to be this way.

Life Becomes Easy When the Abstractions are Right

So much of computer science falls into place when the right abstractions are used. Servers provide a good example of this. Compute virtualization has been around in pieces since the introduction of the operating system. Memory, I/O, and the instruction set have long been virtualized and provide the basis of modern multi-process systems. However, until the popularization of the virtual machine abstraction, these virtualization primitives did not greatly impact the operation of servers themselves. This is because there was no inclusive abstraction that represented a full server (a basic unit of operations in an IT shop). With virtual machines, all state associated with a workload is represented holistically, allowing us to create, destroy, save, introspect, track, clone, modify, and limit it. Visibility and monitoring in multi-user environments arguably became easier as well. Independent of which applications and operating systems are installed, it’s possible to know exactly how much memory, I/O and CPU a virtual machine is consuming, and that consumption can generally be attributed back to a user.

So it is with network virtualization – the virtual network abstraction can provide many of the same benefits as the virtual machine abstraction. However, it also provides an additional benefit that isn’t so commonly enjoyed with server virtualization: network virtualization provides an aggregated view of distributed state. With manual distributed state management being one of the most pressing operational challenges in today’s data centers, this is a significant win.

To illustrate this, we’ll provide a quick primer on network virtualization and then go through an example of visibility and debugging in a network virtualization environment.

Network Virtualization as it Pertains to Visibility and Monitoring

Network virtualization, like server virtualization, exposes a virtual network that looks like a physical network but has the operational model of a virtual machine. Virtual networks (if implemented completely) support L2-L7 network functions, complex virtual topologies, counters, and management interfaces. The particular implementation of network virtualization we’ll be discussing is edge overlays, in which the mechanism used to introduce the address space for the virtual domain is an L2-over-L3 tunnel mesh terminated at the edge (likely the vswitch). However, the point of this particular post is not to focus on how network virtualization is implemented, but rather on how decoupling the logical view from the physical transport affects visibility and troubleshooting.

A virtual network (in most modern implementations, at least) is a logically centralized entity. Consequently, it can be monitored and managed much like a single physical switch. Rx/Tx counters can be monitored to determine usage. ACL counters can be read to determine if something is being dropped due to policy configuration. Mirroring of a virtual switch can siphon off traffic from an entire virtual network independent of where the VMs are or what physical network is being used in the datacenter. And of course, all of this is kept consistent independent of VM mobility or even changes to the physical network.
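As a small illustration of this “one big switch” view, here is a toy Python aggregation of per-logical-port counters, reported by many hypervisors, into per-logical-switch totals. The report fields are our own invention, not an existing API.

    from collections import defaultdict

    # Toy aggregation: each hypervisor reports rx/tx byte counters per logical
    # port; summing by logical switch answers questions like "how many bytes
    # has tenant A sent?" regardless of where its VMs happen to be running.

    def logical_switch_counters(reports):
        totals = defaultdict(lambda: {"rx_bytes": 0, "tx_bytes": 0})
        for r in reports:
            totals[r["lswitch"]]["rx_bytes"] += r["rx_bytes"]
            totals[r["lswitch"]]["tx_bytes"] += r["tx_bytes"]
        return dict(totals)

    reports = [
        {"lswitch": "tenant-A", "lport": "web-1", "rx_bytes": 1200, "tx_bytes": 900},
        {"lswitch": "tenant-A", "lport": "db-1",  "rx_bytes": 400,  "tx_bytes": 4400},
    ]
    print(logical_switch_counters(reports))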

The introduction of a logically centralized virtual network abstraction addresses many of the problems found in today’s virtualized data centers. The virtualization of counters can be used for billing and accounting without worrying about VM movements, the hiding or double counting of traffic, or the distribution of VMs and network services across the datacenter. The virtualization of security configuration (e.g. ACLs) and their counters turns a messy distributed state problem into a familiar central rule set. In fact, in the next post, we’ll describe how we use this aggregate view to perform header space analysis to answer sophisticated reachability questions over state that would traditionally be littered throughout the datacenter. The virtualization of management interfaces natively provides accurate, multi-tenant support of existing tool chains (e.g. NetFlow, SNMP, sFlow, etc.), and also resolves the problem of state gathering when VMs are dispersed across a datacenter.

Impact On Workflow

However, as with virtual machines, while some level of sanity has been restored to the environment, the number of entities to monitor has increased. Instead of having to divine what is going on in a single, distributed, dynamic, complex network, there are now multiple, much simpler (and relatively static) networks that must be monitored. These networks are (a) the physical network, which now only needs to be concerned with packet transport (and thus has become vastly simpler), and (b) the various logical networks implemented on top of it.

In essence, visibility and troubleshooting must now take the new logical layer into account. Fortunately, because virtualization doesn’t change the basic abstractions, existing tools can be used. However, as with the introduction of any virtual layer, there will be times when the mapping of physical resources to virtual ones becomes essential.

We’ll use troubleshooting as an example. Let’s assume that VM A can’t talk to VM B. The steps one takes to determine what is going on are as follows:

  1. Existing tools are pointed at the affected virtual network, and rx/tx counters are inspected along with any ACLs and forwarding rules. If something in the virtual abstraction is dropping the packets (like an ACL), we know what the problem is, and we’re done.
  2. If it looks like nothing in the virtual space is dropping the packet, it becomes a physical network troubleshooting problem. The virtual network can now reveal the relevant physical network and paths to monitor. In fact, this process can often be fully automated (as we’ll describe in the next post). In the system we work on, for example, it is often possible to detect which physical links are dropping packets (or experiencing some amount of loss) solely from the edge. (A sketch of this two-step check follows the list.)
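A minimal sketch of that two-step check, with entirely made-up object names and stubbed-out data, just to show the shape of the workflow:

    # Toy two-step troubleshooting of "VM A can't talk to VM B": check the
    # virtual abstraction first, then the physical paths it maps onto.

    class VirtualNetwork:
        def __init__(self, acl_drops, paths):
            self.acl_drops = acl_drops      # (src, dst) -> dropping rule, if any
            self.paths = paths              # (src, dst) -> list of physical links

        def first_dropping_rule(self, a, b):
            return self.acl_drops.get((a, b))

        def physical_paths(self, a, b):
            return self.paths.get((a, b), [])

    def diagnose(vnet, vm_a, vm_b, link_loss):
        rule = vnet.first_dropping_rule(vm_a, vm_b)
        if rule is not None:
            return f"dropped in the virtual network by {rule}"     # step 1
        for link in vnet.physical_paths(vm_a, vm_b):               # step 2
            if link_loss.get(link, 0.0) > 0.01:
                return f"packet loss on physical link {link}"
        return "no fault found at either layer"

    vnet = VirtualNetwork(acl_drops={},
                          paths={("A", "B"): ["leaf1-spine2", "spine2-leaf4"]})
    print(diagnose(vnet, "A", "B", link_loss={"spine2-leaf4": 0.05}))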

A number of network visibility, management, and root cause detection tools are already undergoing the changes needed to make this a one-step process from the operator’s point of view. However, it is important to understand what’s going on under the covers.

Wrapping Up for Now

This post was really aimed at teeing up the topic of visibility and debugging in a virtual network environment. In the next post, we’ll go through a specific example of an edge overlay network virtualization solution and how it provides visibility into the virtual networks as well as advanced troubleshooting of the physical network.  In future posts, we’ll also cover tool chains that are already being adapted to take advantage of the visibility and troubleshooting gains possible with network virtualization.


Scale, SDN, and Network Virtualization

[This post was put together by Teemu Koponen, Andrew Lambeth, Rajiv Ramanathan, and Martin Casado]

Scale has been an active (and often contentious) topic in the discourse around SDN (and by SDN we refer to the traditional definition) since long before the term was coined. Criticism of the work that led to SDN argued that moving the control plane away from full distribution would lead to scalability challenges. Later arguments reasoned that SDN results in *more* scalable network designs because there is no longer the need to flood the entire network state in order to create a global view at each switch. Finally, there is the common concern that calls into question the scalability of using traditional SDN (a la OpenFlow) to control physical switches, due to forwarding table limits.

However, while there has been a lot of talk, there have been relatively few real-world examples to back up the rhetoric. Most arguments appeal to reason, argue (sometimes convincingly) from first principles, or point to related but ultimately different systems.

The goal of this post is to add to the discourse by presenting some scaling data, taken over a two-year period, from a production network virtualization solution that uses an SDN approach. Up front, we don’t want to overstate the significance of this post as it only covers a tiny sliver of the design space. However, it does provide insight into a real system, and that’s always an interesting centerpiece around which to hold a conversation.

Of course, under the broadest of terms, an SDN approach can have the same scaling properties as traditional networking. For example, there is no reason that controllers can’t run traditional routing protocols between them. However, a more useful line of inquiry concerns the scaling properties of a system built using an SDN approach that actually benefits from the architecture, and how those properties differ from the traditional approach. We briefly touch on both of these topics in the discussion below.

The System

The system we’ll be describing underlies the network virtualization platform described here. The core system has been in development for 4-5 years, has been in production for over 2 years, and has been deployed in many different environments.

A Scale Graph

By scale, we’re simply referring to the number of elements (nodes, links, end points, etc.) that a system can handle without negatively impacting runtime behavior (e.g. during a change in the topology, a controller failure, or a software upgrade). In the context of network virtualization, the elements under control that we focus on are virtual switches, physical ports, and virtual ports. Below is a graph of the scale numbers for virtual ports and hypervisors under control that we’ve validated over the last two years for one particular use case.

Scale Graph

The Y axis to the left is the number of logical ports (ports on logical switches), the Y axis on the right is the number of hypervisors (and therefore virtual switches) under control. We assume that the average number of logical ports per logical switch is constant (in this case 4), although varying that is clearly another interesting metric worth tracking. Of course, these results are in no way exhaustive, as they only reflect one use case that we commonly see in the field. Different configurations will likely have different numbers.

Some additional information about the graph:

  • For comparison, the physical analog of this would be 100,000 servers (end hosts), 5,000 ToR switches, 25,000 VLANs and all the fabric ports that connect these ToR switches together.
  • The gains in scale from Jan 2011 to Jan 2013 were all achieved by improving the scaling properties of a single node. That is, rather than adding resources in the form of additional controller nodes, the engineering team continued to optimize the existing implementation (data structures, algorithms, language-specific overhead, etc.). However, the controllers were being run as a cluster during that time, so they were still incurring the full overhead of consensus and data replication.
  • The gains shown for the last two datapoints were only from distribution (adding resources), without any changes to the core scaling properties of a single node. In this case, moving from 3 to 4 and finally 5 nodes.

Discussion

Raw scaling numbers are rarely interesting, as they vary with the use case and the underlying server hardware running the controllers. What we do find interesting, though, is the relative increase in scale over time: it grows significantly both as the implementation is tuned and improved and as more nodes are added to the cluster.

It’s also interesting to note what the scaling bottlenecks are. While most of the discussion around SDN has focused on fundamental limits of the architecture, we have not found this to be a significant contributor either way. That is, at this point we’ve not run into any architectural scaling limitations; rather, what we’ve seen are implementation shortcomings (e.g. inefficient code, inefficient scheduling, bugs) and difficulty in verifying very large networks. In fact, we believe there is significant architectural headroom to continue scaling at a similar pace.

SDN vs. Traditional Protocols

One benefit of SDN that we’ve not seen widely discussed is its ability to enable rapid evolution of solutions to address network scaling issues, especially in comparison to slow-moving standards bodies and multi-year ASIC design/development cycles. This has allowed us to continually modify our system to improve scale while still providing strong consistency guarantees, which are very important for our particular application space.

It’s easy to point out examples in traditional networking where this would be beneficial but isn’t practical in short time periods. For example, consider traditional link-state routing. Generally, the topology is assumed to be unknown; for every link change event, the topology database is flooded throughout the network. However, in most environments the topology is fixed or slow-changing and easily discoverable via some other mechanism. In such environments, the static topology can be distributed to all nodes, and then during link change events only the link change data needs to be passed around, rather than megabytes of link-state database. Changing this behavior would likely require a change to the RFC. Changes to the RFC, though, require common agreement amongst all parties, and traditionally result in years of work by a very political standardization process.
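A toy contrast of the two distribution models, purely for illustration (the topology and event format below are made up):

    # With a known, mostly static topology, only link-change deltas need to be
    # distributed, instead of re-flooding the full link-state database.

    static_topology = {                 # distributed once, out of band
        ("s1", "s2"): 10, ("s2", "s3"): 10, ("s1", "s3"): 40,
    }

    def full_flood(topology):
        """Traditional model: every event re-floods the whole database."""
        return list(topology.items())

    def delta_update(link, up):
        """Evolved model: send only the changed link and its new state."""
        return {"link": link, "up": up}

    print(len(full_flood(static_topology)), "entries re-flooded vs. one delta:",
          delta_update(("s2", "s3"), up=False))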

For our system, however, as our understanding of the problem grows we’re able to evolve not only the algorithms and data structures used, but also the distribution model (reflected in the last two points in the graph) and the amount of shared information.

Of course, the tradeoff for this flexibility is that the protocol used between the controllers is no longer standard. Indeed, the cluster looks much more like a tightly coupled distributed software system than a loose collection of nodes. This is why standard interfaces around SDN applications are so important. For network virtualization this would be the northbound side (e.g. Quantum), the southbound side (e.g. ovsdb-conf), and federation between controller clusters.

Parting Thoughts

This is only a snapshot of a very complex topic. The point of this post is not to highlight the scale of a particular system—clearly a moving target—but rather to provide some insight into the scaling properties of a real application built using an SDN approach.


Network Virtualization: Delivering the Promise of SDN

Two weeks ago I gave a short presentation at the Open Networking Summit. With only 15 minutes allocated per speaker, I wasn’t sure I’d be able to make much of an impact. However, there has been a lot of reaction to the talk – much of it positive – so I’m posting the slides here and including them below. A video of the presentation is also available in the ONS video archive (free registration required).

[Embedded slides: Ons 2013-nv, from Bruce Davie]

What Should Networks Do For Applications?

[This post was written by JR Rivers, Bruce Davie, and Martin Casado]

One of the important characteristics of network virtualization is the decoupling of network services from the underlying physical network. That decoupling is fundamental to the definition of network virtualization: it’s the delivery of network services independent of a physical network that makes those services virtual. Furthermore, many of the benefits of virtualization – such as the ability to move network services along with the workloads that need those services, without touching hardware – follow directly from this decoupling.

In spite of all the benefits that flow from decoupling virtual networks from the underlying physical network, we occasionally hear the concern that something has been lost by not having more direct interaction with the physical network. Indeed, we’ve come across a common intuition that applications would somehow be better off if they could directly control what the physical network is doing. The goal of this post is to explain why we disagree with this view.

It’s worth noting that this idea of getting networks to do something special for certain applications is hardly a novel idea. Consider the history of Voice over IP as an example. It wasn’t that long ago when using Ethernet for phone calls was a research project. Advances in the capacity of both the end points as well as the underlying physical network changed all of that and today VOIP is broadly utilized by consumers and enterprises around the world. Let’s break down the architecture that enabled VOIP.

A call starts with end-points (VOIP phones and computers) interacting with a controller that provisions the connection between them. In this case, provisioning involves authenticating end-points, finding other end-points, and ringing the other end. This process creates a logical connection between the end-points that overlays the physical network(s) that connect them. From there, communication occurs directly between the end-points. The breakthroughs that allowed Voice Over IP were a) ubiquitous end-points with the capacity to encode voice and communicate via IP and b) physical networks with enough capacity to connect the end-points while still carrying their normal workload.

Now, does VOIP need anything special from the network itself? Back in the 1990s, many people believed that to enable VOIP it would be necessary to signal into the network to request bandwidth for each call. Both ATM signalling and RSVP (the Resource Reservation Protocol) were proposed to address this problem. But by the time VOIP really started to gain traction, network bandwidth was becoming so abundant that these explicit communication methods between the endpoints and the network proved unnecessary. Some simple marking of VOIP packets, to ensure they didn’t encounter long queues on bottleneck links, was all that was needed in the QoS department. Intelligent behavior at the end-points (such as adaptive bit-rate codecs) made the solution even more robust. Today, of course, you can make a VOIP call between continents without any knowledge of the underlying network.

These same principles have been applied to more interactive use cases such as web-based video conferencing, interactive gaming, tweeting, you name it. The majority of the ways that people interact electronically are based on two fundamental premises: a logical connection between two or more end-points and a high capacity IP network fabric.

Returning to the context of network virtualization, IP fabrics allow system architects to build highly scalable physical networks; the summarization properties of IP and its routing protocols allow the connection of thousands of endpoints without imposing the knowledge of each one on the core of the network. This both reduces the complexity (and cost) of the networking elements, and improves their ability to heal in the event that something goes wrong. IP networks readily support large sets of equal cost paths between end-points, allowing administrators to simultaneously add capacity and redundancy. Path selection can be based on a variety of techniques such as statistical selection (hashing of headers), Valiant Load Balancing, and automated identification of “elephant” flows.
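For reference, “statistical selection (hashing of headers)” amounts to something like the following Python sketch, where the path choice is a deterministic function of the flow’s five-tuple; the path names are placeholders of our own.

    import hashlib

    # Minimal ECMP-style path selection: hash the five-tuple so every packet of
    # a flow stays on one path while different flows spread across the fabric.

    def ecmp_pick(five_tuple, paths):
        digest = hashlib.md5(repr(five_tuple).encode()).digest()
        return paths[int.from_bytes(digest[:4], "big") % len(paths)]

    paths = ["via-spine-1", "via-spine-2", "via-spine-3", "via-spine-4"]
    print(ecmp_pick(("10.0.0.1", "10.0.0.2", 6, 33412, 443), paths))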

Is anything lost if applications don’t interact directly with the network forwarding elements? In theory, perhaps, an application might be able to get a path better suited to its precise bandwidth needs if it could talk to the network. In practice, a well-provisioned IP network with rich multipath capabilities is robust, effective, and simple. Indeed, it’s been proven that multipath load-balancing can get very close to optimal utilization, even when the traffic matrix is unknown (which is the normal case). So it’s hard to argue that the additional complexity of providing explicit communication mechanisms for applications to signal their needs to the physical network is worth the cost. In fact, we’ll argue in a future post that trying to carefully engineer traffic is counter-productive in data centers because the traffic patterns are so unpredictable. Combine this with the benefits of decoupling the network services from the physical fabric, and it’s clear that a virtualization overlay on top of a well-provisioned IP network is a great fit for the modern data center.


Tunneling for Network Virtualization

[This post was written by Bruce Davie and Martin Casado.]

With the growth of interest in network virtualization, there has been a tendency to focus on the encapsulations that are required to tunnel packets across the physical infrastructure, sometimes neglecting the fact that tunneling is just one (small) part of an overall architecture for network virtualization. Since this post is going to do just that – talk about tunnel encapsulations – we want to reiterate the point that a complete network virtualization solution is about much more than a tunnel encapsulation. It entails (at least) a control plane, a management plane, and a set of new abstractions for networking, all of which aim to change the operational model of networks from the traditional, physical model. We’ve written about these aspects of network virtualization before (e.g., here).

In this post, however, we do want to talk about tunneling encapsulations, for reasons that will probably be readily apparent. There is more than one viable encapsulation in the marketplace now, and that will be the case for some time to come. Does it make any difference which one is used? In our opinion, it does, but it’s not a simple beauty contest in which one encaps will be declared the winner. We will explore some of the tradeoffs in this post.

There are three main encapsulation formats that have been proposed for network virtualization: VXLAN, NVGRE, and STT. We’ll focus on VXLAN and STT here. Not only are they the two that VMware supports (now that Nicira is part of VMware) but they also represent two quite distinct points in the design space, each of which has its merits.

One of the salient advantages of VXLAN is that it’s gained traction with a solid number of vendors in a relatively short period. There were demonstrations of several vendors’ implementations at the recent VMworld event. It fills an important market need, by providing a straightforward way to encapsulate Layer 2 payloads such that the logical semantics of a LAN can be provided among virtual machines without concern for the limitations of physical layer 2 networks. For example, a VXLAN can provide logical L2 semantics among machines spread across a large data center network, without requiring the physical network to provide arbitrarily large L2 segments.

At the risk of stating the obvious, the fact that VXLAN has been implemented by multiple vendors makes it an ideal choice for multi-vendor deployments. But we should be clear about what “multi-vendor” means in this case. Network virtualization entails tunneling packets through the data center routers and switches, and those devices only forward based on the outer header of the tunnel – a plain old IP (or MAC) header. So the entities that need to terminate tunnels for network virtualization are the ones that we are concerned about here.

In many virtualized data center deployments, most of the traffic flows from VM to VM (“east-west” traffic) in which case the tunnels are terminated in vswitches. It is very rare for those vswitches to be from different vendors, so in this case, one might not be so concerned about multi-vendor support for the tunnel encaps. Other issues, such as efficiency and ability to evolve quickly might be more important, as we’ll discuss below.

Of course, there are plenty of cases where traffic doesn’t just flow east-west. It might need to go out of the data center to the Internet (or some other WAN), i.e. “north-south”. It might also need to be sent to some sort of appliance such as a load balancer, firewall, intrusion detection system, etc. And there are also plenty of cases where a tunnel does need to be terminated on a switch or router, such as to connect non-virtualized workloads to the virtualized network. In all of these cases, we’re clearly likely to run into multi-vendor situations for tunnel termination. Hence the need for a common, stable, and straightforward approach to tunneling among all those devices.

Now, getting back to server-server traffic, why wouldn’t you just use VXLAN? One clear reason is efficiency, as we’ve discussed here. Since tunneling between hypervisors is required for network virtualization, it’s essential that tunneling not impose too high an overhead in terms of CPU load and network throughput. STT was designed with those goals in mind and performs very well on those dimensions using today’s commodity NICs. Given the general lack of multi-vendor issues when tunneling between hypervisors, STT’s significant performance advantage makes it a better fit in this scenario.

The performance advantage of STT may be viewed as somewhat temporary – it’s a result of STT’s ability to leverage TCP segmentation offload (TSO) in today’s NICs. Given the rise in importance of tunneling, and the momentum behind VXLAN, it is reasonable to expect that a new generation of NICs will emerge that allow other tunnel encapsulations to be used without disabling TSO. When that happens, performance differences between STT and VXLAN should (mostly) disappear, given appropriate software to leverage the new NICs.

Another factor that comes into play when tunneling traffic from server to server is that we may want to change the semantics of the encapsulation from time to time as new features and capabilities make their way into the network virtualization platform. Indeed, one of the overall advantages of network virtualization is the ease with which the capabilities of the network can be upgraded over time, since they are all implemented in software that is completely independent of the underlying hardware. To make the most of this potential for new feature deployment, it’s helpful to have a tunnel encaps with fields that can be modified as required by new capabilities. An encaps that typically operates between the vswitches of a single vendor (like STT) can meet this goal, while an encaps designed to facilitate multi-vendor scenarios (like VXLAN) needs to have the meaning of every header field pretty well nailed down.

So, where does that leave us? In essence, with two good solutions for tunneling, each of which meets a subset of the total needs of the market, but which can be used side-by-side with no ill effect. Consequently, we believe that VXLAN will continue to be a good solution for the multi-vendor environments that often occur in data center deployments, while STT will, for at least a couple of years, be the best approach for hypervisor-to-hypervisor tunnels. A complete network virtualization solution will need to use both encapsulations. There’s nothing wrong with that – building tunnels of the correct encapsulation type can be handled by the controller, without the need for user involvement. And, of course, we need to remember that a complete solution is about much more than just the bits on the wire.

