An Attempt to Motivate and Clarify Software-Defined Networking (SDN)

Scott’s keynote at Ericsson Research on SDN. I really encourage anyone who is interested in OpenFlow and/or SDN to view it. It is, in my opinion, the cleanest justification for SDN, and it appropriately articulates where OpenFlow fits in the broader context (… as a minor mechanism that is capable, but generally unimportant).


10 Comments on “An Attempt to Motivate and Clarify Software-Defined Networking (SDN)”

  1. Pranav says:

    But what problem is SDN solving? The claim seems to be that it will help networks evolve, without specifying the exact benefit.

    • SDN doesn’t solve problems. It is an architecture that lends itself well to creating higher-level abstractions, which someone can then use to solve problems. Do higher-level languages solve problems? Not unless applied. Do they make solving problems easier? Absolutely.

      • I definitely agree. SDN is not an end in itself, and it’s not a new idea. SDN boils networking problems down to software design problems. And that’s great! But we haven’t yet approached the important question: which problems are information services networks supposed to solve in their environment?

        I think that we are evolving from a “dumb” network model to an “integrated services” model similar to that of telecom networks, and the problems to be solved are now very different. For now we limit ourselves to ad-hoc compositions of the basic network services people already use: autoconfiguration (DHCP, etc.), access control, NAT, basic QoS, HTTP caching, CDN, etc. But we have no idea what services organizations will need in the near future once we have an open control plane. Until we know which problems we will need to solve for organizations, we won’t know which services, and thus which abstractions, will be needed in information services networks. That’s the primary question we need to answer. SDN is just a part of architecting the solution.

        IMHO, the more profound problem in networking today is that people addressing problems in information services networks completely ignore communication services networks. I’m surprised that Scott doesn’t mention NGN, TINA, IN, or Parlay in his talk, although such architectures and abstractions should be an inspiration for building new networks for information services, or can at least teach us important lessons. Especially since we are moving to a similar “integrated services” network model with similar high-level abstractions. Notably, the telecom world teaches us that we need to start from the business / enterprise viewpoint to identify where and how organizations need to interact to provide services, which has implications for the whole network architecture. I think the very first thing we networking people should do now is learn from what has been done in the telecom world over the last 20 years.

        • Vytautas Valancius says:

          The problems that SDN could solve are real.

          Just one example: most ISP networks run TE to balance traffic across the backbone. Today TE depends on edge routers consulting their IGP databases and computing a path with available capacity. The algorithm for this computation, its knobs, and its features are up to the router vendors (and there are only two such vendors) to implement and roll out.

          There is, of course, the possibility with current MPLS-TE products of dialing in the path manually at the edge router. But that is not easy to automate: the ISP has to write wrappers around the CLI. What’s worse, committing a config change to a Juniper router with 30,000 lines of configuration can take ages while the consistency checks run. In other words, the CLI does not scale.

          Here is an example of a knob that I know one ISP wanted: when an existing low-priority TE path is preempted by a higher-priority path, Juniper computes the alternate path with the original, not the latest, estimated capacity. The ISP, however, wanted their preempted paths to be recomputed with the latest capacities. I think they might still be waiting for that feature to roll out…
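
          With an open control plane, this kind of policy becomes a few lines of controller code instead of a vendor feature request. Below is a minimal sketch of the idea in Python; the topology encoding, the function names, and the LSP record are invented for illustration and are not any real controller’s API.

              import heapq

              def cspf(graph, src, dst, demand):
                  # Constrained shortest path: plain Dijkstra, but links whose
                  # *current* residual capacity cannot carry `demand` are pruned.
                  # Assumed encoding: graph[u][v] = (igp_metric, residual_capacity)
                  dist, prev = {src: 0}, {}
                  heap = [(0, src)]
                  while heap:
                      cost, node = heapq.heappop(heap)
                      if node == dst:
                          path = [dst]                     # walk predecessors back
                          while path[-1] != src:
                              path.append(prev[path[-1]])
                          return list(reversed(path))
                      if cost > dist.get(node, float("inf")):
                          continue                         # stale heap entry
                      for nbr, (metric, residual) in graph[node].items():
                          if residual < demand:            # link no longer fits
                              continue
                          if cost + metric < dist.get(nbr, float("inf")):
                              dist[nbr] = cost + metric
                              prev[nbr] = node
                              heapq.heappush(heap, (cost + metric, nbr))
                  return None                              # no feasible path today

              def on_preemption(graph, lsp):
                  # The knob the ISP wanted: re-signal a preempted LSP against
                  # the latest capacity estimates, not those from signaling time.
                  return cspf(graph, lsp["src"], lsp["dst"], lsp["demand"])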

  2. FullMesh says:

    Besides the fancy label, how is SDN different from centralized routing/intelligence? Haven’t we been here before — for example Newbridge/46020? I suppose the centralized systems of 10+ years ago used constructs similar to OpenFlow’s to program network paths. What were the reasons the market moved towards distributed routing/intelligence, and what is different now that we should go back to the centralized model?

    • I would say the difference is that SDN is about abstractions and building a general, horizontal platform.

      That said, I think your comments suggest two misconceptions:

      First, SDN-like approaches are used in products today. Some wireless products, all virtual distributed networking solutions, and some modern fabric offerings have a split architecture. However, the platforms are often single-purpose (like the older centralized routing solutions) and provide neither high-level abstractions nor reusable primitives. And there is certainly no horizontal ecosystem that would let me run my application on your controller using vendor X’s hardware.

      Second, there is nothing “centralized” about SDN. While you can build a single-controller solution, that certainly isn’t dictated by the architecture. The largest SDN networks I know of today run with distributed controllers.

      • FullMesh says:

        Regarding misconceptions –

        We already deploy our services using your provisioning system on vendor X’s hardware. Sound familiar? And with yet another proliferation of software, controllers, and hardware, I don’t think the OpenFlow ecosystem is going to make anything simpler or easier.

        Distributed controllers just break the centralized model into smaller chunks. At the other end of the spectrum we can say that each node has its own controller, accessible via a CLI, that my application can program (configure) to enable a desired behavior.

        Do the multi-controller SDN networks peer their controllers together to allow interconnecting controller domains? If so, what lessons have been [re]learned? My understanding is that the large non-university/lab SDN networks operate on an overlay model, not on end-to-end OpenFlow-based paths.

        • I don’t understand your initial point. Having clear layers with open, well-defined programmatic interfaces above and below (think x86 and POSIX) is just good system design. It decouples hardware from platform from apps, allowing each to evolve independently. Also, decoupling the distribution model from the physical buildout makes the system developer’s life much easier.
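
          To make the layering point concrete, here is a toy sketch (all names here are invented; this is not any real controller API): the application codes against an abstract forwarding interface, and a per-vendor driver implements it underneath, so either side can evolve independently.

              from abc import ABC, abstractmethod

              class ForwardingPlane(ABC):
                  # The open interface "below": apps never see vendor details.
                  @abstractmethod
                  def install_rule(self, match, actions):
                      ...

              class VendorXDriver(ForwardingPlane):
                  # A swappable southbound driver; replacing it leaves apps untouched.
                  def install_rule(self, match, actions):
                      print("vendor X: program entry", match, "->", actions)

              def isolate_tenant(net, vlan):
                  # The app expresses intent purely against the abstraction.
                  net.install_rule({"vlan": vlan}, ["drop"])

              isolate_tenant(VendorXDriver(), vlan=42)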

          If I were a large company, and I wanted to build my own networking technology, for whatever reason, I would prefer a standard hardware interface over a proprietary one.

          There are many traditional OpenFlow deployments in production in industry that do not use overlays. For example: http://m.itworldcanada.com/story.aspx?id=143759

  3. Nillo says:

    Thanks for sharing, Martin. Very educational.

    I am a nobody, but I believe it is easy to oversimplify by saying “and then building forwarding elements” … the fact is that “forwarding elements” aren’t easy to build, have size and capacity limitations which aren’t standard, and don’t always fail cleanly. I love the idea of having an abstraction that puts ACLs, VLANs, and PBR in a container, lets upper-layer software use it, and has a network hypervisor apply the container to the “forwarding elements” as needed … but I would love to see the flow hash compiled and pushed into hardware TCAMs, only to get “oh, it did not fit” at point X of the path, then re-route, then again … etc. Just to mention one point where I see this breaking.
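
    To make the failure mode I mean concrete, here is a toy model in Python (the numbers and names are made up): a hypervisor compiles a policy into TCAM entries, a switch mid-path reports that they did not fit, and the placement has to roll back and re-route.

        class TableFull(Exception):
            pass

        class Switch:
            def __init__(self, name, tcam_free):
                self.name, self.free = name, tcam_free
            def install(self, entries):
                if entries > self.free:
                    raise TableFull(self.name)    # "oh, it did not fit"
                self.free -= entries

        def place_path(candidate_paths, entries):
            for path in candidate_paths:
                installed = []
                try:
                    for sw in path:
                        sw.install(entries)
                        installed.append(sw)
                    return [sw.name for sw in path]   # every hop fit
                except TableFull as failed_at:
                    for sw in installed:      # roll back partial state,
                        sw.free += entries    # the part that gets ugly
                    print("did not fit at", failed_at, "- re-routing ...")
            return None                       # no path could hold the policy

        a, b, c = Switch("A", 100), Switch("B", 10), Switch("C", 100)
        print(place_path([[a, b], [a, c]], entries=50))   # fails at B, lands on A-C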

    All in all, I am still a fan of SDN and am following it with interest, but I think people should not overlook or oversimplify why we are where we are.

    I think it is ironic that one of the things that “stresses” networking so much that we are re-thinking it is virtualization. So many VMs … networks can’t scale, VM mobility … but then, why do we use VMs at all? Shouldn’t we be capable of building smarter cloud applications? Why do we need to boot the same OS 10,000 times inside containers for people who will run the same applications on top? Maybe we should be looking to simplify things elsewhere, not just in networks … Hypervisors are, to me, a (necessary) mistake.

    • Nillo, I very much appreciate your perspective. Network virtualization is really hard because, at the end of the day, you’re dealing with finite forwarding resources. And the one thing virtualization has traditionally compromised on is guaranteed performance, a compromise you can’t make in networking.

      I wouldn’t go so far as to say that hypervisors are a mistake. We will continue to build smarter and smarter applications in which the network “virtualization” is done by the load balancer or by the application itself. However, until that full transformation happens, we have decades’ worth of legacy software to deal with.

      I’m a big fan of virtualize first, and then change abstractions.

