Openflow is the answer! (now .. what was the question again?)

Wow, over the last couple of weeks there has been an escalation in the confusion around Openflow. The problem is on both sides of the metaphorical aisle. Those behind the hype engine are spinning out of control with fanciful tales about bending the laws of physics (“Openflow will keep the ice cream in your refrigerator cold during a power outage!”). And those opposed to OpenFlow (for whatever reason) are using the hype to build non-existent strawmen to attack (“Openflow cannot bend the laws of physics, therefore it isn’t useful”).

So for this post, I figured I’d go through some of the dubious claims and bogus strawmen and try to beat some sense into the discussion. Each of the claims I address below was pulled from some article/blog/interview I ran into over the last few weeks.

Claim: “Openflow provides a more powerful interface for managing the network than exists today”

Patently false. APIs expose functionality; they don’t create it. Openflow exposes a subset of what switching chipsets offer today. If you want more power, sign an NDA with Broadcom and write to their SDK directly.

What Openflow does attempt to do is provide a standard interface above the box. If you believe that SDN is a powerful concept (and I do), then you do need some interface that is sufficiently expressive and hopefully widely supported.

This is almost not worth saying, but clearly Openflow is more expressive than a CLI. CLIs are fine for manual configuration, but they totally blow for building automated systems. The shortcomings are blindingly obvious: traditional CLIs have no clear data schema or state semantics, and they change all the time because the designers are trying to solve an HCI problem, not an API versioning problem. However, I don’t think there is any real disagreement on this point.
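To make the contrast concrete, here is a minimal sketch. The CLI line and the rule format are made up for illustration (the match fields are loosely modeled on OpenFlow; this is not any particular controller’s API):

```python
import re

# Automating against a CLI means screen-scraping: the "schema" is
# whatever the vendor happened to print in this software release.
cli_output = "GigabitEthernet0/1 is up, line protocol is up"
m = re.match(r"(\S+) is (\S+), line protocol is (\S+)", cli_output)
port_state = {"port": m.group(1), "admin": m.group(2), "oper": m.group(3)}

# A structured interface has explicit fields with defined types and
# semantics, so a software version bump can't silently break the parser.
flow_rule = {
    "match": {"in_port": 1, "eth_type": 0x0800, "ipv4_dst": "10.0.0.5"},
    "actions": [{"type": "OUTPUT", "port": 2}],
    "priority": 100,
}
```

The regex works only until the CLI output format changes; the structured rule survives as long as the interface contract does.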

Claim: “Openflow will make hardware simpler”

I highly doubt this. Even with Openflow, a practical network deployment needs a bunch of stuff. It needs to do lookups over common headers (minimally L2/L3/L4), it may need hash-based multi-pathing, it may need port groups, it may need tunneling, it may need fine-grained link timers. If you grab a chip from Broadcom, I’m not exactly sure what you’d throw away if using Openflow.

What Openflow may discourage is stupid shit like creating new tags as an excuse for hardware stickiness (“you want new feature X? It’s implemented using the Suxen tag(tm) and only our new chip supports it.”). This is because an Openflow-like interface can effectively flatten layers. For example, I don’t have to use the MPLS label for MPLS per se. I can use it for network virtualization (for instance) or for identifying aggregates that correspond to the same security group. However, that doesn’t mean hardware is simpler. Just that the design isn’t redundant to fulfill a business need. (Post facto note: There are some great points regarding this issue in the comments. We’ll work to spin this out into a separate post.)

Or more to the point. There are an awful lot of fields and bits in a header. And an awful lot of lookup capacity in a switch. If you shuffle around what fields mean what, you can almost certainly do what you want without having to change how the hardware parses the packet.
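A hypothetical sketch of the idea, encoding a 20-bit tenant or security-group ID into the standard MPLS label field (the rule format is illustrative, not any real controller’s API):

```python
# The MPLS label is a 20-bit field with no inherent meaning; the control
# plane decides what a label denotes. Here we (hypothetically) repurpose
# it as a tenant / security-group identifier.
MPLS_ETHERTYPE = 0x8847

def tenant_to_label(tenant_id: int) -> int:
    if not 0 <= tenant_id < 2**20:
        raise ValueError("MPLS label field is only 20 bits wide")
    return tenant_id

# The hardware still parses a perfectly standard MPLS header; only the
# controller's interpretation of the label has changed.
rule = {
    "match": {"eth_type": MPLS_ETHERTYPE, "mpls_label": tenant_to_label(42)},
    "actions": [{"type": "POP_MPLS"}, {"type": "OUTPUT", "port": 7}],
}
```

No new framing format, no new parser silicon: the existing lookup stage does the work, and compatibility lives in the control path.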

Claim: “Openflow will commoditize the hardware layer”

This is total Keyser Söze-esque nonsense (yup, that’s the picture at the top of the post). To begin with, networking hardware is already on its way to horizontal integration. The reason that Arista is successful has nothing to do with Openflow (they strictly don’t support it and never have), and it has nothing to do with SDN; it’s because Ken Duda and Andy Bechtolsheim are total bad-asses and have built a great product. Oh, and because merchant silicon has come of age, meaning you don’t need an ASIC team to be successful in the market.

The difference between OpenFlow and what Arista supports is that with Openflow you choose to build to an industry standard interface, and not a proprietary one. Whether or not you care is, of course, totally up to you.

So Openflow does provide another layer of horizontal integration, and once the ecosystem develops, that is, in my opinion, a very good thing. But the ecosystem is still embryonic, so it will take some time before the benefits can be realized.

I think the power of merchant silicon and the rise of the “commodity” fabric are a far greater threat to the crusty scale-up network model. Oddly, Openflow has become a relatively significant distraction. That said, as this area matures, creating a horizontal layer “above the box” will grow in significance.

Claim: “Openflow does not map efficiently to existing hardware”

True! Older versions of Openflow did not map well to existing silicon. This is a *major* focus of the current design effort: to make Openflow more flexible. As with any design effort, the trade-off is between having a future-proof roadmap and having a practical design that can deliver tangible benefits now. So we’re trying to thread the needle practically but with some foresight. It will take a little patience to see how this evolves.

Claim: “Openflow reduces complexity”

This is a meaningless statement. Openflow is like USB, it doesn’t “do” anything new. A case can be made for SDN to reduce complexity, but it does this by constraining the distribution model and providing a development platform that doesn’t suck. No magic there.

Claim: “Openflow will obviate the need for traditional distributed routing protocols”

I hear this a lot, but I just don’t buy it. There are certainly those within the Openflow community who disagree with me, so please take this for what it’s worth (very little …).

In my opinion, traditional distributed protocols are very good at what they do: scaling, routing packets in complex graphs, converging, etc. If I were to build a fabric, I’d sure as hell do it with a distributed routing protocol (again, not everyone agrees on this point). What traditional protocols suck at is distributing all the other state in the network. What state is that? If you look at the state in a switching chip, a lot of it isn’t traditionally populated through distributed protocols. For example, the ACL tables, the tunnels, many types of tags (e.g. VLAN), etc.

Going forward, it may be the case that distributed routing protocols get pulled into the controller. Why? Because controllers have much higher CPU density and therefore can run the computation for multiple switches. A multicore server will kick the crap out of the standard switch management CPU. As I described in a previous post, this is no different than the evolution from distance vector to link state; it’s just taking it a step further. However, the protocol certainly is still distributed.
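To illustrate why CPU density matters: the core of a link-state computation is just shortest-path-first, and a controller can run it once per switch over a global topology. A toy sketch with plain Dijkstra and a hypothetical four-switch topology:

```python
import heapq

def spf(graph, src):
    """Plain Dijkstra: shortest distance and predecessor from src."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

# Hypothetical 4-switch topology with link costs.
topo = {
    "s1": {"s2": 1, "s3": 4},
    "s2": {"s1": 1, "s3": 1, "s4": 5},
    "s3": {"s1": 4, "s2": 1, "s4": 1},
    "s4": {"s2": 5, "s3": 1},
}

# A multicore controller can run this loop for every switch and push the
# resulting next hops down, instead of each switch's management CPU
# computing its own SPF over flooded link-state advertisements.
tables = {sw: spf(topo, sw) for sw in topo}
```

Same math as a link-state IGP; the only thing that moved is where the computation runs.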

Claim: “Openflow cannot scale because of X”

I’ve addressed this at length in a previous post.  Still, I’m going to have to call Doug Gourlay out.  And I quote …

“I honestly don’t think it’s going to work,” Arista’s Gourlay said. “The flow setup rates on Stanford’s largest production OpenFlow network are 500 flows per second. We’ve had to deal with networks with a million flows being set up in seconds. I’m not sure it’s going to scale up to that.”

Other than being a basic logical fallacy (Stanford’s largest production Openflow network has absolutely nothing to do with the scaling properties of the Openflow or SDN architecture), there appears to be an implicit assumption that flow setups are in some way a limiting resource. This clearly isn’t the case if flow setups don’t leave the datapath, which is a valid (and popular) deployment model for Openflow. Doug’s a great guy, at a great company, but this is a careless statement, and it doesn’t help further the dialog.
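The distinction is between reactive setup (the first packet of each flow punts to the controller) and proactive setup, where coarse rules are pre-installed and packets never leave the datapath. A toy sketch, with a hypothetical helper and an illustrative rule format:

```python
# Proactive model: the controller installs wildcarded rules at
# configuration time. Individual flows then match in hardware and never
# touch the controller, so "flow setups per second" never appears on the
# packet path.
def proactive_table(subnet_ports):
    """Build one wildcard rule per destination subnet (hypothetical helper)."""
    return [
        {"match": {"eth_type": 0x0800, "ipv4_dst": subnet},
         "actions": [{"type": "OUTPUT", "port": port}],
         "priority": 100}
        for subnet, port in subnet_ports.items()
    ]

# Two rules here cover arbitrarily many individual TCP/UDP flows.
table = proactive_table({"10.0.1.0/24": 1, "10.0.2.0/24": 2})
```

Under this deployment model the controller’s flow-setup rate bounds how fast policy changes propagate, not how fast packets forward.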

Claim: “Openflow helps virtualize the network”

Again, this is almost a meaningless statement. A compiler helps virtualize the network if it is being used to write code to that effect. The fact is, the pioneers of network virtualization, such as VMware, Amazon, and Cisco, don’t use Openflow.

So yes, you can use Openflow for virtualization (that’s a help, I guess … ). And open standards are a good thing. But no, you certainly don’t need to.

OK, that’s enough for now.

To be very clear, I’m a huge fan of Openflow. And I’m a huge fan of SDN.  Yet, neither is a panacea, and neither is a system or even a solution. One is an effort to provide a standardized interface for building awesome systems (Openflow). The other is a philosophical model for how to build those systems (SDN). It will be up to the systems themselves to validate the approach.


12 Comments on “Openflow is the answer! (now .. what was the question again?)”

  1. This is absolutely utterly fantastic. Should become a mandatory reading before anyone is allowed to write or utter “OpenFlow” in public.

    On a side note, it’s nice to see we’re 110% in sync (we might both be totally wrong, but I’m an eternal optimist ;).

  2. Juan Lage says:

    Thanks for another great post. I appreciate you being very pragmatic and trying to be objective (being a huge fan of OpenFlow and SDN :-)

    Putting forward that I work for Cisco (12 years with the company already …)

    I see a contradiction in your arguments. On the one hand, you highlight that OpenFlow can prevent companies (Cisco) from building new ASICs which leverage new tagging mechanisms for performing new networking functions. Instead, the OpenFlow approach would allow you to repurpose an existing field supported by existing hardware (i.e. use the MPLS label for other purposes). Possible.

    But then on the other hand, you state that “with Openflow you choose to build to an industry standard interface”. Well, the interface may (in time) become standard, but if you re-purpose the use of a network field, you become proprietary by definition.

    I suppose yes, with an OpenFlow network you could decide to re-purpose, say, the MPLS label field for deploying some sort of service ID for virtualization. But in so doing you also make your network not interoperable with any other. You need to write whatever gateway features you need to interact with other networks yourself. Can you do that? Yes. But then why is it stupid shit when Cisco (or other vendors) do it?

    The reason new tags are applied is that new functions are required, and re-purposing existing fields isn’t always an option. In any case, re-purposing existing fields arbitrarily breaks interoperability.

    • OK, this one I care a lot about. So I want to make absolutely sure I’m thinking clearly about it, and that what I’m saying is factual. I believe that there is an incredible amount of lookup capacity and decision space in existing ASICs. And I believe that it is through the control path (i.e. the ability to populate the field), not the framing format, that compatibility is maintained. Remember, you have to think of me as a system designer (e.g. the internal networking team of a cloud), and not an operator.

      Comments are not the correct medium to hash through this. How about we do the following: we collaborate on a future blog post in which we work through specific examples. I’ll describe how you can achieve the same functionality with existing pipelines, and how having a proprietary control protocol strictly limits compatibility and openness. You can then pick out where my thinking is misguided, or just incorrect. We won’t post until both of us are satisfied that what we have written down is both correct and fair. OK?

      • You’re both partially wrong. OK?

        Details: Martin is absolutely right in saying most of the things that get re-invented today could be implemented with existing frame formats. I wrote about the use of MPLS in DC/virtualization environments (and there’s also MAC-in-MAC). Cisco’s decision to invent Qbh seems somewhat arbitrary … but you know that a new generation of engineers always has to invent a shiny new toy of their own. One couldn’t possibly consider using 10+ year-old technology (MPLS) to solve today’s problems; that would be oh-so-totally wrong.

        However, where I get nervous is the “we’ll use existing frame formats to mean whatever” part. Somehow it reminds me of Microsoft’s “embrace & extend” approach to standards. You could do that with MPLS, because it was explicitly designed to be extensible (labels have no inherent meaning; they are whatever the control protocol says they are), but not with other formats (like Q-in-Q or MAC-in-MAC). You want to reuse those, you’d be well advised to do so by going through proper procedures (yeah, I know, some of SDN proponents hate standard bodies).

        With the “let’s forget the standards bodies and old protocols, all we need is OpenFlow (USB)” approach, I can’t help but see the quagmire we had so many times in the past (zillions of OS, e-mail standards, SQL languages …). Of course every OS, e-mail, database or OpenFlow controller vendor will tell you it doesn’t matter because they control all servers/messages/data/network, but we’ve learned our lessons in three out of four mentioned areas already. Do we really have to start from scratch in the fourth one?

      • Juan Lage says:

        I am trying not to think of this in any canned way, so not thinking of it from the mind of the network operator nor the system designer, but both if possible ;-)

        Agree that this topic requires more than comments on a blog and your approach sounds great to me. Not sure I can keep up with you on such a work, but happy to give it a try! :-)

  3. James Liao says:

    It is kind of interesting that the biggest supporters of OpenFlow (and SDN) have to consistently pull the fans back to reality. Nice blog.

    • Yeah, it’s not clear to me whether it is the fans or the trade press. I can imagine the press finding a shiny new widget, focusing on the sensational (rather than the realistic), and then responding to their own zeal. Honestly, I never quite understand the source of this stuff.

      • Rob Sherwood says:

        Interesting data point; a (technically inclined) colleague sent me a link to your blog post with the message that I needed to go here and “defend OpenFlow”. Reading what you say, I have to agree that you’re, as usual, right on the money and that there is some dangerous hype that needs to be deflated. IMHO, there are plenty of actual good things about OpenFlow/SDN to go around … so much so that people don’t have to go conflating them with world peace, cute puppies, or other forms of panacea.

        • Thanks for the comment Rob. You, as much as anyone (err, actually more than most), know that great systems can be built with Openflow. So I very much agree with you, there is no reason to morph it into something it is not. Let’s focus on building great technology, and be very honest that the tools we’re using are just that. Tools. There is a reason that “miracle products” are generally only found in infomercials .. :)

  4. Saar says:

    You mean it won’t make coffee ??


