What Should Networks Do For Applications?

[This post was written by JR Rivers, Bruce Davie, and Martin Casado]

One of the important characteristics of network virtualization is the decoupling of network services from the underlying physical network. That decoupling is fundamental to the definition of network virtualization: it’s the delivery of network services independent of a physical network that makes those services virtual. Furthermore, many of the benefits of virtualization – such as the ability to move network services along with the workloads that need those services, without touching hardware – follow directly from this decoupling.

In spite of all the benefits that flow from decoupling virtual networks from the underlying physical network, we occasionally hear the concern that something has been lost by not having more direct interaction with the physical network. Indeed, we’ve come across a common intuition that applications would somehow be better off if they could directly control what the physical network is doing. The goal of this post is to explain why we disagree with this view.

It’s worth noting that this idea of getting networks to do something special for certain applications is hardly novel. Consider the history of Voice over IP as an example. It wasn’t that long ago that using Ethernet for phone calls was a research project. Advances in the capacity of both the end-points and the underlying physical network changed all of that, and today VOIP is broadly used by consumers and enterprises around the world. Let’s break down the architecture that enabled VOIP.

A call starts with end-points (VOIP phones and computers) interacting with a controller that provisions the connection between them. In this case, provisioning involves authenticating end-points, finding other end-points, and ringing the other end. This process creates a logical connection between the end-points that overlays the physical network(s) that connect them. From there, communication occurs directly between the end-points. The breakthroughs that allowed Voice Over IP were a) ubiquitous end-points with the capacity to encode voice and communicate via IP and b) physical networks with enough capacity to connect the end-points while still carrying their normal workload.
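
For concreteness, here is a toy sketch (in Python, and deliberately not any real VOIP signalling protocol) of the control/data split just described: a controller handles authentication and rendezvous, and the voice itself then flows directly between the end-points over IP. All names and the credential check are made up for illustration.

```python
# A toy signalling controller in the spirit of the flow described above.
# Class, method, and credential names are illustrative only.

class CallController:
    def __init__(self):
        self.registry = {}  # user -> (ip, port) of a registered end-point

    def register(self, user, address, credentials):
        # Authenticate the end-point before admitting it to the directory.
        if credentials != "demo-secret":            # placeholder check only
            raise PermissionError("end-point failed authentication")
        self.registry[user] = address

    def place_call(self, caller, callee):
        # Find the other end-point and "ring" it by handing each side the
        # other's address. After this exchange, the voice packets flow
        # directly between the end-points over the IP network; the
        # controller is no longer in the data path.
        if caller not in self.registry or callee not in self.registry:
            raise LookupError("both end-points must be registered")
        return {"caller": self.registry[caller],
                "callee": self.registry[callee]}


controller = CallController()
controller.register("alice", ("198.51.100.10", 5060), "demo-secret")
controller.register("bob", ("203.0.113.25", 5060), "demo-secret")
print(controller.place_call("alice", "bob"))
```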

Now, does VOIP need anything special from the network itself? Back in the 1990s, many people believed that to enable VOIP it would be necessary to signal into the network to request bandwidth for each call. Both ATM signalling and RSVP (the Resource Reservation Protocol) were proposed to address this problem. But by the time VOIP really started to gain traction, network bandwidth was becoming so abundant that these explicit communication methods between the endpoints and the network proved unnecessary. Some simple marking of VOIP packets to ensure that they didn’t encounter long queues on bottleneck links was all that was needed in the QoS department. Intelligent behavior at the end-points (such as adaptive bit-rate codecs) made the solution even more robust. Today, of course, you can make a VOIP call between continents without any knowledge of the underlying network.
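
For concreteness, the “simple marking” above amounts to setting a Differentiated Services code point on the voice packets. Here is a minimal Python sketch of a sender tagging a UDP socket with the Expedited Forwarding code point on Linux; the destination address and port are placeholders, and a real VOIP stack does this internally.

```python
import socket

# DSCP occupies the upper six bits of the IP TOS byte, so the Expedited
# Forwarding code point (46) becomes a TOS value of 46 << 2 = 0xB8.
DSCP_EF_TOS = 46 << 2

# Mark a UDP socket so that switches and routers configured for priority
# queuing keep these packets out of long queues on bottleneck links.
# IP_TOS works on Linux; other platforms expose the equivalent differently.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF_TOS)
sock.sendto(b"voice frame", ("192.0.2.10", 5004))  # placeholder destination
```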

These same principles have been applied to more interactive use cases such as web-based video conferencing, interactive gaming, tweeting, you name it. The majority of the ways that people interact electronically are based on two fundamental premises: a logical connection between two or more end-points and a high capacity IP network fabric.

Returning to the context of network virtualization, IP fabrics allow system architects to build highly scalable physical networks; the summarization properties of IP and its routing protocols allow the connection of thousands of endpoints without imposing the knowledge of each one on the core of the network. This both reduces the complexity (and cost) of the networking elements, and improves their ability to heal in the event that something goes wrong. IP networks readily support large sets of equal cost paths between end-points, allowing administrators to simultaneously add capacity and redundancy. Path selection can be based on a variety of techniques such as statistical selection (hashing of headers), Valiant Load Balancing, and automated identification of “elephant” flows.
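
To illustrate the “statistical selection (hashing of headers)” mentioned above, here is a rough Python sketch of hash-based selection over equal-cost paths. The function is a toy, not any particular switch’s hashing algorithm, but it shows why a large population of flows spreads almost evenly across the fabric without any knowledge of the traffic matrix.

```python
import hashlib
import random
from collections import Counter

def ecmp_path(src_ip, dst_ip, src_port, dst_port, proto, num_paths):
    """Pick one of num_paths equal-cost next hops by hashing the flow's
    5-tuple: every packet of a flow takes the same path (no reordering),
    while different flows spread statistically across the fabric."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return int.from_bytes(hashlib.sha1(key).digest()[:4], "big") % num_paths

# Toy check: 100,000 random flows hashed across 8 equal-cost paths land
# within a few percent of perfectly even utilization.
paths = Counter(
    ecmp_path(f"10.0.{random.randrange(256)}.{random.randrange(256)}",
              "10.1.0.1", random.randrange(1024, 65535), 80, 6, 8)
    for _ in range(100_000)
)
print(sorted(paths.values()))
```

That statistical evenness, achieved with no per-flow signalling at all, is the intuition behind the near-optimal utilization point below.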

Is anything lost if applications don’t interact directly with the network forwarding elements? In theory, perhaps, an application might be able to get a path that is better suited to its precise bandwidth needs if it could talk to the network. In practice, a well-provisioned IP network with rich multipath capabilities is robust, effective, and simple. Indeed, it’s been proven that multipath load-balancing can get very close to optimal utilization, even when the traffic matrix is unknown (which is the normal case). So it’s hard to argue that the additional complexity of providing explicit communication mechanisms for applications to signal their needs to the physical network is worth the cost. In fact, we’ll argue in a future post that trying to carefully engineer traffic is counter-productive in data centers because the traffic patterns are so unpredictable. Combine this with the benefits of decoupling the network services from the physical fabric, and it’s clear that a virtualization overlay on top of a well-provisioned IP network is a great fit for the modern data center.


21 Comments on “What Should Networks Do For Applications?”

  1. Peter Phaal says:

    The paper makes good arguments for keeping the physical network simple and limiting the control that applications have over the physical network. How about flipping the question around and asking, “What should applications do for networks?”

    Data center traffic matrices tend to be sparse and highly structured: for example, traffic within the group of virtual machines belonging to a tenant is much greater than traffic between different tenants. A location-aware scheduler can reduce network loads, improve application performance, and increase scalability by placing nodes that communicate together topologically close to each other. For example, the Hadoop scheduler assigns compute tasks to nodes that are close to the storage they are going to operate on, and orchestrates storage replication in order to minimise non-local transfers.
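
    As a rough sketch of the kind of locality-aware placement I have in mind (all names and data structures made up):

    ```python
    def place_task(replica_nodes, free_slots, rack_of):
        """replica_nodes: nodes holding the input data;
        free_slots: dict node -> available task slots;
        rack_of: dict node -> rack id."""
        # 1. Node-local: run on a node that already holds a replica.
        for node in replica_nodes:
            if free_slots.get(node, 0) > 0:
                return node
        # 2. Rack-local: run in the same rack as some replica.
        replica_racks = {rack_of[n] for n in replica_nodes}
        for node, slots in free_slots.items():
            if slots > 0 and rack_of[node] in replica_racks:
                return node
        # 3. Otherwise any node with capacity; traffic crosses the fabric.
        return next((n for n, s in free_slots.items() if s > 0), None)

    print(place_task(
        replica_nodes=["n3", "n7"],
        free_slots={"n1": 2, "n7": 0, "n9": 1},
        rack_of={"n1": "rackA", "n3": "rackB", "n7": "rackB", "n9": "rackB"},
    ))  # -> "n9", rack-local to the replicas on n3/n7
    ```

    A scheduler can only make choices like this if it can see where the replicas and the racks actually are.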

    The problem with current network virtualization architectures is that they hide network locality and performance information from schedulers, denying them the opportunity to make intelligent workload placement decisions.

    http://blog.sflow.com/2013/03/pragmatic-software-defined-networking.html

  2. I believe that the network and application should work in a complementary fashion, not a mutually exclusive one. Regardless of your opinion on that topic, the network does need to provide instrumentation to allow for capacity planning and rapid problem resolution / isolation. Being able to get application visibility and point-in-time monitoring of both hardware forwarding state and link utilization is critical when looking into anomalies that can manifest at the application layer. Having this flexibility and instrumentation built into the network infrastructure, while not technically a requirement, has decided benefits…

    • Andrew Lambeth says:

      Absolutely agree that troubleshooting and telemetry are critical. One thing I think is going to be really helpful in doing a better job with these than in the past is that network virtualization is all about creating explicit mappings between what the end user or application cares about (the logical topology) and what’s actually been provisioned and configured to sling the bits around (the physical topology). Since we are establishing and maintaining all these explicit mappings in the network virtualization software layer, we have a much better way to ask the right questions of the physical topology. The physical fabrics that are being developed now in response to the move to network virtualization will have the ability to maintain stats, faults, and other info in a context that is relevant to the logical topology. Then we can build rich drill-down query interfaces that start at a level that makes sense to the end user (or, more likely, the operator investigating an issue reported by a specific end user).
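
      To make that a bit more concrete, here is a purely hypothetical sketch of the kind of mapping and drill-down query I have in mind (the data structures and names are invented for illustration):

      ```python
      # The virtualization layer records where each logical port is realized,
      # so an operator can start from a tenant's logical port and pull the
      # counters of the physical attachment point it maps onto.

      logical_to_physical = {
          # (tenant, logical port) -> (hypervisor, physical NIC, tunnel endpoint)
          ("tenant-a", "lport-7"): ("hv-042", "eth2", "vtep-10.0.3.42"),
      }

      physical_stats = {
          ("hv-042", "eth2"): {"tx_drops": 1284, "rx_errors": 0, "util_pct": 71},
      }

      def drill_down(tenant, lport):
          hv, nic, vtep = logical_to_physical[(tenant, lport)]
          return {"attachment": (hv, nic, vtep), "stats": physical_stats[(hv, nic)]}

      print(drill_down("tenant-a", "lport-7"))
      ```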

  3. Reuben says:

    Loose coupling vis-a-vis ‘decoupling’?

  4. I agree with Mark that the application and the network are not mutually exclusive. Even in the VoIP example quoted there is coupling between the application and the network (the IP end points). Network virtualization will remove the direct coupling, but it still has dependencies at the overlay layer to factor in changes in the physical network.

    Secondly, the optimization metric used by the physical network (constrained/non-constrained shortest path) might not be applicable for every use case. Some applications might be fine taking the slowest path at certain points in time. For these use cases, if the network can provide an abstracted way to signal certain types of traffic for the applications, then we have an optimal abstraction between the network and the applications.

    In the end, if an application is suffering from poor throughput, there needs to be a way to correlate application, overlay, and physical network state.

  5. Paul Gleichauf says:

    Over provisioned networks that can operate without regard for constraints in bandwidth, latency, and scale have thus far proven to be the simplest idealized solution. When they are physically realizable, they may take a while to arrive, and when they do it is most likely in greenfield deployments. Network technologies typically are driven by and lag behind changing edge device capabilities because they are supporting infrastructure, not drivers.

    The hard problem is coping with the transitional period when the network has not yet evolved into the next cycle and is still highly constrained in some combination of the aforementioned three dimensions. That’s when one falls back upon application-aware behaviors, overlays, and specializations that may break traffic-type-agnostic transport.

    When there are fundamental constraints, such as the speed of light or Shannon limited channel capacity, then traffic aware behaviors are necessarily programmed into the network. In general constraints may require network-based applications, as compared to edge-based applications, to monitor and engineer traffic behaviors. The IP protocol has sufficed to date. It’s interesting to speculate when that might no longer be the case. Proving that a virtualized network can afford to be partially or completely ignorant of physical characteristics beyond those implicitly contained in today’s IP protocol would seem to require some assumptions and attendant tradeoffs.

    One benefit of virtualized over provisioned networks, perhaps implied in your second to last paragraph, is the ability to isolate distinct configurations that share the same physical infrastructure. This can enable the trial of a new configuration before shifting higher priority traffic to it, and potentially dynamically adjusting resource allocations in the process. It is a very nice way to increase network stability. It is also a bit hard to tell whether to call that kind of dynamic configuration change a decoupling or a coupling of the virtual and the physical by your definition.

  6. Pascal says:

    I disagree. If you have ever deployed Unified Communications (UC) for Real-Time Communications (RTC) you would understand that UC has a great dependency on end-to-end QoS and the Traffic Engineering (TE) of the network. It is very common that even if an enterprise configures QoS and TE perfectly, configuration drift (i.e., change) comes into play that leaves most UC support people chasing calls from users complaining of poor quality. In reality most UC poor-quality calls result from the network not being provisioned correctly for QoS and TE. Please don’t get me wrong: most modern UC codecs do have adaptive bit-rate capabilities and handle a lot of congestion conditions in the network, but they can only do so much and do not recover well from back-to-back packet loss greater than 3-4 packets.

    It gets even better in that with UC it is very typical these days for users to be on smart phones, laptops, and tablets, which allow the user to be mobile via Wi-Fi. This double whammy means you cannot secure voice calls with VLANs anymore: protecting the EF queues with special VLANs goes out the door when a UC client is spitting out voice, video, and data packets from the same interface. Couple this with Wi-Fi being a half-duplex, shared medium, and QoS becomes a must. However, most Wi-Fi deployments do not configure QoS correctly either: on the UC client, on the server, or somewhere in the physical infrastructure, the QoS markings get stripped off or remarked.

    The bottom line is we need to reduce the complexity of operating networks for mission-critical applications like UC, and SDN is the perfect technology wave to address this issue. UC applications talking to networks is the perfect scenario for removing complexity and reducing operations cost. Isn’t that the main benefit of SDN?

  7. Jim Fenton says:

    Your statement, “Back in the 1990s, people believed…” implies that you think they were wrong. But the environment in the 1990s was different: bandwidth wasn’t nearly as plentiful, so technologies like RSVP and ATM signaling may have been more appropriate than they seem now, especially from the perspective of the data center. Unfortunately, many of these technologies were seen as counter to Network Neutrality or to the business interests of access providers who would rather sell more capacity (when they’re able to sell that capacity, that is).

    Even today, I feel like we have developed ways to be more tolerant of compression, often at the expense of user experience. The poor experience with voice quality on a hosted VoIP service my company had at its previous location prompted us to get PRI trunks at our new location so we sound professional. I’m frequently annoyed by the speedups I hear on Skype as it recovers from congestion; they can be misinterpreted as the speaker rushing the conversation. Why can’t the VoIP experience be better fidelity than POTS connections? Services that do deliver high quality, like Cisco telepresence, require a lot of provisioning to make sure that they get the bandwidth they need.

    Adding more paths will have the effect of balancing flows, but that isn’t an option for the Last Mile, where many endpoints have only one path available and are frequently subject to congestion, either from neighbors or from someone in the house watching an HD movie. In the datacenter, by all means add capacity to avoid congestion entirely. Just don’t forget about the places in the network where that isn’t possible.

    • drbruced says:

      I think it’s worth clarifying that I wasn’t setting out to dismiss QoS altogether, but rather to point out that the aspirations of letting applications make requests from network devices haven’t tended to play out – and you mention some additional reasons that they have not (such as the desire of operators to sell capacity rather than signalled services – could that be because it’s easier for them to deliver capacity than to handle the complexity of signalling?)

      As for the last mile, it’s still hard to see how some sort of signalling to the network is going to do better than DSCP marking. By all means you may want your HD movie to take precedence over the file-sharing application that your teenager is running, but either DSCP marking, or adaptive behavior by the apps, or both, seems adequate to handle that.

      • Yiannis says:

        Bruce,

        As a home user, I’d rather have my preferences stored as controller state, instead of requiring each application and device to mark DSCP bits. Hopefully this would be applied both at my home’s router and at the ISP’s headend. This also works regardless of whether the traffic is inbound or outbound, so I don’t have to depend on the other end (or other ISPs along the path) to set and respect the DSCP marking.

  8. Martin Casado says:

    There are many good points in the comments. However, I think it’s worth pointing out that this discussion was primarily focused on the datacenter, not the campus, the wiring closet, etc. Clearly, as traffic becomes more regular due to aggregation and the bandwidth becomes heavily oversubscribed, as is the case outside the datacenter, TE is necessary. However, within the datacenter, with over-provisioning and aggressive multi-pathing, the distribution will be very close to optimal independent of the traffic matrix. We will attempt to show this more formally in a follow-on post.

  9. Paul Gleichauf says:

    At the heart of “What Should Networks Do For Applications” is an argument for the decoupling of network services from the physical network across which they are transported. The argument reads as if this decoupling should be a divorce, rather than a loosening: a separation. Let me make a case for the latter. Consider data center security. The question is what the range of security models needs to be for both risk and regulatory reasons and whether in these cases the protection mechanisms should couple the physical with the virtual based upon current best security practices. The contemporary answer likely depends on sliding scales of risks and requirements.

    First, consider a single-tenant data center with a uniform internal security model protected at the physical boundary to the external world, generally at routers. This is the single-cell “eggshell” model, where firewalls inspect traffic at the boundary and authentication of the services provided is typically done using HTTPS, securing public URLs (identifiers) and secrets (keys) stored, in the best case, in protected hardware. Within the data center the services are often left unprotected and unauthenticated because all the devices and services, virtual and physical, are presumed equally trusted and part of the same trust domain. Even this assumption can be questioned in many cases.

    Second, consider the case of a multi-tenant data center. Here we cannot assume that the protection domains are solely at the edge of the physical data center. We basically have to treat the tenants as being in their own sub-data centers, each protected from information leaking into the others. The data center now fractures into a multicellular organism, with protected identifiers between the cells.

    In the specific case of network devices, one concern is whether a device has been spoofed: for example, a switch that appears just like all the legitimate ones but is sharing traffic across tenants or even sending it out of the data center to external parties. In the case of hypervisors and VMs, the question is whether they have been authenticated to come from a legitimate source before being activated. One can make additional arguments for wanting to know the identity of the many virtual machines offering services ranging from networking to storage to compute and links, including location-based optimizations and failure tracking, and in each case one has to decide how important it is and how much effort one can afford to spend doing so securely.

    The best security practices have traditionally anchored the identifiers and keys to physical properties. Location-based provisioning is useful to be able to send a technician to find the malfunctioning hardware that might be afflicting the errant VM. Or we might want to lower latency by migrating a VM to another location. Cryptographically authenticated hypervisor installs and VM golden images can also be required to make sure information is only being processed by entities authorized to see it. Storage of keys in software is less secure than in hardware. Encrypted data at rest makes this even more complicated and necessary. Therefore isn’t it true that good (not even the best) security practices may require coupling the physical network to virtualized services, including network-based security ones?

    • Martin Casado says:

      Hi Paul. You bring up very valid points, and largely I agree with you. But to clarify the discussion a bit, the primary argument of the post is simply that an overprovisioned L3 fabric with multipathing is near optimal in terms of utilization, and we suggest that explicit communication between the application and the fabric is unlikely to provide any benefit.

      This shouldn’t be a surprising position to take. Many modern datacenters use exactly this approach, with an L3 leaf/spine fabric for the network, and security, isolation, billing, etc. implemented in the ADC, the web servers or the application. This doesn’t suggest that there can’t be a strong binding between logical entities, or logical and physical entities.

      • Paul Gleichauf says:

        Martin,

        I find overprovisioning and the use of multipath quite compelling and simplifying, so no argument there. When I read, and now re-read, the original article I interpreted the emphasis to be on the decoupling of applications from the physical network. That seemed a very idealistic perspective, one that even classical IP networks have trouble honoring without layer violation exceptions.

        • Paul Gleichauf says:

          It is interesting that this thread has avoided mentioning OpenFlow, which as I recall in its original form was about being able to craft experimental protocols that could enable applications to control a campus network. You know better than I if I have this right. Today OpenFlow seems to have pivoted to be a mechanism for data center management. Some of us resisted that notion at its inception because, among other reasons less relevant to this thread, there was no mechanism provided to balance competing demands from many simultaneous applications for limited resources. If things are sufficiently over provisioned, one can demand almost anything from the network. Yet OpenFlow would appear to have some challenges in responding to and monitoring a dynamically changing network, among other issues. You and your coauthors are right that the cool thing about IP over multiple paths in the data center is that it can achieve near optimal utilization (and it can do so without resorting to experimental protocols).

          • Martin Casado says:

            OpenFlow was more about the control plane than the applications. The original work came out of an earlier project which used a single, centralized reference monitor to perform admission and access controls (per flow) of every user and device on the network. With centralization (or limited distribution) you could show strong consistency, and it wasn’t clear how a distributed approach could support end to end policies with client and server mobility and service chaining on arbitrary topologies.

            I very much share your skepticism about having applications control the network, through OpenFlow or otherwise.

          • Paul Gleichauf says:

            Martin,
            I may have been either too subtle or too abstract in making my point about constraints and that may have led you to believe that I am skeptical about application “influence” of the network.

            As a side note, I probably should also have added power as an ever more frequent and stringent constraint on data center network designs. When there are constraints that make it challenging to have a sufficiently provisioned network (to use your term), then there can be benefits to allowing applications to signal their resource needs, which may be false (a security concern) or simply wrong, and which the network may need to balance. We already collectively know some of the complications of traffic engineering, but it is feasible at the cost of complexity. It appears that some significant fraction of major customers want to deliberately constrain their networks (in particular their power consumption) and still have high utilization. They also appear to require a graceful transition across hybrid networks while doing so. I am not going to argue that this is the best choice for anyone to make, but it is an interesting, soluble engineering problem/requirement that I cannot reject out of hand.

            Paul

        • Martin Casado says:

          Yeah, on a reread I see why you came to that conclusion. We could have been more clear on the specific focus.

  10. manfred says:

    Maybe you should read this NoJitter blog post about an HP and Microsoft demo of how applications and network control can dynamically alter network performance and operations:

    http://www.nojitter.com/post/240153039/hp-and-microsoft-demo-openflowlync-applicationsoptimized-network

  11. Lennie says:

    When I see people discuss overprovisioning or QoS I always think of this talk:

    https://ripe65.ripe.net/presentations/67-2012-09-25-qos.pdf

    https://ripe65.ripe.net/archives/video/3/

  12. manfred says:

    Lennie, thanks for the pointer… but it seems like you missed the point of Geoff Huston’s presentation at RIPE 65. He is arguing against QoS on the PUBLIC Internet, and there, for the most part, I concur: it doesn’t make a lot of sense.

    However, here is a quote from Geoff Huston (Jun 25, 2012): “I hope I made the point that I was looking at QoS in the public provider space, not the enterprise. In the enterprise world there can be a single domain of control and a single set of policies and even a single set of client priorities that allow things that are not possible in the public space. Sometimes this distinction is just too subtle and folk can be confused by what is possible in a private or enterprise environment and what is feasible in the public space.”

    QoS within the Enterprise is not about redistributing bandwidth, but rather about prioritizing real-time-sensitive traffic to reduce latency and mitigate packet loss (due to micro-bursts or instantaneous buffer congestion).

    There are a bunch of excuses for not doing QoS in the Enterprise and none of them are valid:

    http://www.nojitter.com/post/240147232/qos-in-the-lan-youre-kidding

    Worth noting is that QoS is commonly deployed in many/most Enterprise networks and works quite well. The main issue is that configuring QoS consistently is complex and difficult, which is something that we are actively working to solve using Software Defined Networking (SDN).

    Here is a video of an SDN demo for Lync:

