4 Types of Port Channels and When They’re Used

The other day I was catching up on recorded content from Cisco Live! and I saw mention of yet another implementation of port channels (this time called Enhanced Virtual Port Channels). I thought it would make a good blog entry to describe the differences between each type, where they are used, and which platforms each is supported on.

The Plain Old Port Channel

Back in the day, before all the different flavors of port channels came to be, there was the original port channel. This is also called an Etherchannel or a Link Aggregation Group (LAG) depending on what documentation or vendor you’re dealing with. This port channel uses Link Aggregation Control Protocol (LACP) or, in the Cisco world, Port Aggregation Protocol (PAgP) to signal the establishment of the channel between two devices. A port channel does a couple of things:

  1. Increases the available bandwidth between two devices.
  2. Creates one logical path out of multiple physical paths. Since Spanning Tree Protocol (STP) runs on the logical link and not the physical links, all of the physical links can be in a forwarding state.

Now, with respect to #1, it’s important to differentiate between bandwidth and throughput. If we have two switches connected together with (1) 1Gb link, our bandwidth is 1Gb and so is our maximum throughput. If we cable up more links so that we have (4) 1Gb links and put them into a port channel, our bandwidth is now 4Gb but our maximum throughput between any given pair of hosts is still 1Gb. Why is that?

When a switch is sending frames across a port channel, it has to use some sort of algorithm to determine which physical link to send each frame on. The algorithm aims to distribute traffic across all the links. Now, in order to ensure in-order frame delivery and not introduce jitter, it’s general practice to hash all frames from the same sender, or between the same sender/receiver pair, onto the same physical link.

Since all frames between a given sender and receiver will always travel on the same 1Gb link, the maximum throughput between that pair is only 1Gb.
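
Most Cisco switches let you choose which header fields feed that hashing decision. Here’s a minimal sketch on a Catalyst switch running IOS (the available methods and the defaults vary by platform, so check what your hardware supports):

    ! Hash on the source and destination IP addresses (global configuration)
    Switch(config)# port-channel load-balance src-dst-ip

    ! Verify which load-balancing method is in use
    Switch# show etherchannel load-balance

Methods that include both source and destination fields generally spread flows more evenly than hashing on the source address alone.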

Plain old port channels can be used between most Ethernet switches, and many server NIC drivers support LACP as well for providing a port channel between a server and its upstream switch. Note that with plain port channels, each physical link that makes up one end of the port channel must terminate on the same switch. You cannot, for example, have a port channel from a server to two upstream switches.

There is a type of port channel that does support this, called Multi-Chassis Etherchannel (MEC) or Multi-Chassis Link Aggregation Group (MLAG). When a technology such as Cisco’s Stackwise or Virtual Switching System (VSS) is employed in the network, it’s possible to form a MEC between a single device (like a server) and two distinct upstream switches, as long as the switches are participating in the same Stackwise stack or VSS.
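
For reference, a plain old port channel on a Catalyst switch only takes a few lines of configuration. This is just a sketch; the interface range and channel-group number are made up:

    ! Bundle four physical links into port channel 1 using LACP for signalling
    ! ("active" = LACP, "desirable" = PAgP, "on" = no signalling, static bundle)
    Switch(config)# interface range GigabitEthernet1/0/1 - 4
    Switch(config-if-range)# channel-group 1 mode active
    Switch(config-if-range)# exit

    ! The logical Port-channel interface is created automatically;
    ! configure it like any other switchport
    Switch(config)# interface Port-channel1
    Switch(config-if)# switchport mode trunk

In a Stackwise stack or VSS pair, a MEC is configured exactly the same way; the member interfaces just happen to live on different physical chassis.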

Virtual Port Channel

A Virtual Port Channel (vPC) is an enhancement to MEC that allows for the same sort of functionality — building a port channel across two switches — without the need for Stackwise or VSS.

Both Stackwise and VSS go far beyond just port channels: they actually combine the control and management planes of the member switches, effectively turning them into one logical entity. This doesn’t sound too bad when you consider you’ve gone from (2) touch points down to just (1), but there can be some increased risk at the same time. Most notably, you’ve now got a single instance of your control protocol processes (STP, OSPF, BGP, DTP, LACP, etc.) running on the (logical) switch. A software bug or protocol error will cause that protocol to hiccup across the whole stack/VSS. Next, since the management plane is also combined, there’s one common configuration file. A fat-fingered config line or a change control gone bad could have serious consequences since it will affect both physical switches in the stack/VSS.

Now having said all that, it really highlights the benefit of vPC: the control and management planes remain separate! It’s really the best of both worlds. You can build a cross-chassis port channel without taking on the risk of combining the control and management planes.

Note the special links connecting the two vPC member switches together. They are used to carry control, configuration, and keepalive data between the two. As shown, since a vPC uses LACP for signalling (it could also use no signalling and just be manually configured), a virtual port channel can connect to regular Ethernet switches as well as servers that support regular port channels.
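
To make the moving parts a bit more concrete, here is a minimal sketch of the vPC building blocks on NX-OS. The domain ID, keepalive addresses, and interface numbers below are all invented for illustration; check the configuration guide for your platform before borrowing any of it:

    feature vpc

    ! The vPC domain ties the two peer switches together
    vpc domain 10
      peer-keepalive destination 192.0.2.2 source 192.0.2.1 vrf management

    ! The peer-link carries control and synchronization traffic between the peers
    interface port-channel 1
      switchport
      switchport mode trunk
      vpc peer-link

    ! A member port channel facing a downstream switch or server
    interface port-channel 20
      switchport
      switchport mode trunk
      vpc 20

An equivalent configuration goes on the other peer (with the keepalive addresses reversed), and the downstream device sees nothing more exotic than an ordinary LACP port channel.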

vPC is currently available on (as of July, 2015):

  • Nexus 7700 and 7000
  • Nexus 5000, 5500, and 5600 series switches
  • Nexus 9500 and 9300 series switches running in standalone NX-OS mode
    • Check the software release notes on the Nexus 9k switches for any caveats or restrictions with respect to vPC. Don’t assume that vPC on the 9k has feature parity with vPC on the 7k or 5k.

There is a major topology restriction with vPC when using the Nexus 2000 series Fabric Extender (FEX) in conjunction with the 5x00: you cannot configure a dual-layer vPC as shown in the diagram below.

“Dual layer” is a reference to the vPC configured between the 5500 and the FEXes and then from the FEXes to the end device. With vPC, you have to do one or the other. That said, if you read the EvPC section below you’ll see that there is a way to run this topology successfully.

Virtual Port Channel Plus

Where vPC was an enhancement to MEC, vPC+ is, naturally, an enhancement to vPC.

vPC+ is used in a FabricPath domain where a virtual port channel is configured from fabric-member switches towards some non-fabric device such as a server or what’s termed a “classic” Ethernet (CE) switch. vPC+ allows a non-fabric device to connect to two fabric switches for redundancy and multi-pathing purposes and also allows traffic to be load balanced on egress of the fabric towards the non-fabric device.

The picture shows the physical and logical representations of a vPC+ connection. Physically, the CE switch is cabled to two independent FabricPath switches. Those two switches are in a vPC domain. The magic of vPC+ is that the member switches logically instantiate a virtual switch (SwitchID 4) which sits between them and the CE switch. This logical switch is advertised to the rest of the FabricPath network as being the edge switch where the CE switch is connected. This is the key to enabling load balancing of traffic egressing the fabric towards the CE switch. In the sample network above, in order to reach the CE switch, Switch 1 will address its FabricPath frames to Switch 4 for which it has (2) equal-cost paths. If the logical topology mirrored the physical topology, Switch 1 would have to address its frames to either Switch 2 or Switch 3 (but not both because the FabricPath forwarding table can only hold a single switch ID for a given destination) which would result in no load balancing.
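
Configuration-wise, vPC+ looks almost identical to vPC; the key addition is the emulated FabricPath switch ID defined under the vPC domain. A minimal sketch on NX-OS follows (the switch IDs, addresses, and interface numbers are invented for illustration):

    feature-set fabricpath
    feature vpc

    vpc domain 10
      ! The emulated switch (Switch 4 in the example above) advertised into the fabric
      fabricpath switch-id 4
      peer-keepalive destination 192.0.2.2 source 192.0.2.1 vrf management

    ! In vPC+ the peer-link is a FabricPath core port rather than a classic Ethernet trunk
    interface port-channel 1
      switchport mode fabricpath
      vpc peer-link

    ! The port channel facing the CE switch
    interface port-channel 20
      switchport
      switchport mode trunk
      vpc 20

Both peers must be configured with the same emulated switch ID; that shared ID is what the rest of the fabric uses as the destination for traffic headed towards the CE switch.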

Enhanced Virtual Port Channel

Another spin on vPC! Ok, this one addresses the limitation explained in the vPC section above: with plain vPC you cannot run a vPC both between the Nexus 5500s and the FEXes and between the FEXes and the end device at the same time. EvPC allows that exact topology. In addition, EvPC also maintains SAN A/B separation even though there is a full mesh of connectivity.

Between the Nexus 5500s and the FEXes, the FCoE storage traffic maintains that traditional “air gap” between the A and B sides and never crosses onto a common network element. This is accomplished through explicit configuration on the 5500s which tells the left FEX to only send FCoE traffic to the left 5500 and the right FEX to only send FCoE traffic to the right 5500. The regular Ethernet traffic can traverse any of the links anywhere in the topology. An example of that pinning configuration is sketched below.
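
If memory serves, that pinning is done with the fcoe keyword under each FEX definition on the 5500s. Treat the following as a rough sketch (the FEX IDs are made up) and confirm the details against the Nexus 5500 FCoE/Enhanced vPC configuration guide:

    ! On the left-hand 5500 only: FEX 101 (the left FEX) sends its FCoE traffic here
    fex 101
      fcoe

    ! On the right-hand 5500 only: FEX 102 (the right FEX) sends its FCoE traffic here
    fex 102
      fcoe

Regular Ethernet traffic is unaffected by this and continues to use all of the links in the dual-layer vPC.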

The reason EvPC is limited to the Nexus 5500 comes down to limitations in the older Nexus 5000 platform (i.e., the 5010 and 5020) that prevent it from supporting EvPC. The Nexus 5500 platform supports EvPC and does not require any specific FEX model to make it work.

Update July 23 2015: the Nexus 5600 also supports EvPC.


Disclaimer: The opinions and information expressed in this blog article are my own and not necessarily those of Cisco Systems.

36 thoughts on “4 Types of Port Channels and When They’re Used”

  1. I am the one who configured vPC in my network, and I have never seen a post that explains vPC in as much depth as this one. Cheers, a big thanks for this article.

  2. Can you go into a bit more detail surrounding item #1 on the traditional LACP LAG? Is there ever a scenario where a traditional port channel will be able to provide greater throughput than the speed of a single link while sending traffic between two hosts? Is there anything in the Cisco world that makes this possible?

    1. Hi Ben,

      I’m not aware of any Cisco platform that can do per-packet load balancing on LAGs and that’s the only scenario I can think of that would give you more throughput for a single flow.

      The only hardware I know of that can do this is some Brocade switches. It’s their own proprietary technology and came out of their strong background in storage networking. Take a look at section #4 on this posting: https://www.packetmischief.ca/2012/05/23/five-features-of-brocade-vcs/

  3. Hi,

    good stuff :)

    Something is not clear though. I once did a project with N5K and N2K. I ran vPC between the N5K and N2K, and also between the N2K and the end hosts.

    As far as I remember (it was more than 2 years ago), I could do this because of a new NX-OS release (before that it was not possible). I struggled a bit to get it running, but it worked. It was not FCoE, just Ethernet.

    Am I dreaming?

    Thanks everyone!!

    1. Hi Josu,

      You’re describing EvPC. When doing dual-layer vPC — regardless of whether you’re using FCoE — you have to use the EvPC feature.

  4. Thanks for the outstanding explanation! Just a quick question. With port-channels, our bandwidth remains the same as the link speed (4 1GB ports equate to 1GB bandwidth in a port-channel still), does that change with vPC? Since I’m terminating to two different switches, and I can share the load between the two switches, is it possible to see my bandwidth with vPC increase to 2GB on a vPC setup?

    1. Hey Ehsan,

      Same rules apply on vPC :) The switches use a hashing algorithm to determine which link of the bundle to put each packet onto. Packets belonging to the same flow will always go onto the same physical link.

    1. Hey Joe,

      I don’t see why not. It wouldn’t be a vPC though, it’d just be a regular etherchannel configured on the n5k with member ports on the two FEXes. Please let me know if I’m out to lunch and you’re seeing something else!

  5. In regards to etherchannel on two FEXs on the same 5K..

    At least on our 7K (6.2(12)), this is not allowed. When you attempt to add a port on, say, FEX 102 to the etherchannel (which already has FEX 101 members), you get:

    command failed: port not compatible [Members in multiple FEX]

  6. Can you do a regular LAG on ports that belong to different sups on the same router? Is there special configuration for that? Or is it the same as doing a LAG for ports that are part of the same sup (board)?


  7. Hi Mike, If I have two Layer 2 wan links (pseudowires) between data center A and data center B, could I use a cisco 4948 on both sides and create a port-channel between them?
    Are there many CONS of this implementation?

    1. Hi Lucy,

      I’ve never actually done what you’re asking so take my advice with that in mind. My top two thoughts are these:

      1. I would suggest you make sure that your circuit will pass LACP frames (LACPDUs) end to end. You say pseudowires so I’ll assume you mean something like EoMPLS. I’m reasonably sure that will pass LACP, but I would try it out myself first. And of course afterwards, enable LACP (in active mode) on the port channel.

      2. You ideally want the provider’s CPEs to drop their ports when the pw is broken. In other words, if end-to-end connectivity is lost along the pw, you want the provider to signal your switches that the circuit is no longer good. The ideal way for them to do that in this situation is to drop the port on their CPEs so that your switches are triggered into action and can remove that link from the port channel. The alternative is to rely on the LACP messaging, which will be much slower and will result in traffic black holing until the LACP dead timer is hit.
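
      As a footnote to #2: on platforms that support it, you can also shorten the LACP timeout itself by running fast-rate timers so that a dead member is detected in a few seconds instead of the default 90. A quick sketch (I haven’t verified support on the 4948 specifically, so check your software first):

      ! Ask the LACP partner to send LACPDUs every 1 second instead of every 30
      Switch(config)# interface GigabitEthernet1/1
      Switch(config-if)# lacp rate fast

      Even with fast timers, a physical link-down on the CPE port is still the quickest trigger for pulling a member out of the bundle, which is why point #2 matters.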

  8. I am not 100% dedicated to the networking side, but every time I need to freshen up my memory on this topic I come to this website. Saved it in my favorites.

    thanks a lot for putting it together and sharing!!

  9. Hi Joel,

    Thanks for this post !

    I need a clarification: if a member link fails, what will happen to the traffic already sent over that link? Is there any mechanism to notify the upper layer about the loss and ask it to resend? How will this link failure be handled for data traffic and control traffic?


    1. Hi Mohamed,

      Thanks for the feedback :) Your questions are excellent. Knowing the answers to these questions will help us understand why port channels are a really valuable tool when it comes to fast network convergence. It will also help us understand exactly what happens to traffic on the network when a member link fails (from a data plane and control plane perspective). I actually used to think that a member link failure had no impact on the data plane other than dropping the packet that’s on the wire at that point in time. I now know that’s not true. I also didn’t give much thought to the control plane in the past. That’s oftentimes a mistake :)

      I’d actually like to write up a whole post to answer your question. I hope you don’t mind waiting a little bit more for a detailed answer.

  10. Another follow on reply/update …

    I was hopeful that the 7.2.0.D1.1 NX-OS release was going to allow ports from different FEXes to be members of the same portgroup on our Nexus 7710. However, after upgrading, I get the same error:

    PTL-N77-01(config-if)# int eth106/1/1
    PTL-N77-01(config-if)# switchport access vlan 21
    PTL-N77-01(config-if)# channel-group 11 mode active
    PTL-N77-01(config-if)# int eth107/1/1
    PTL-N77-01(config-if)# switchport access vlan 21
    PTL-N77-01(config-if)# channel-group 11 mode active
    command failed: port not compatible [Members in multiple FEX]

    The release notes were vague in this area (inspiring hope) but it appears I interpreted them incorrectly… :(


  11. Hi Joe,
    We got 10 x c2960x-48fpd-l access switches and 2 x c3850-12xs-e core switches
    6 x GLC-T
    I want to stack the c2960 switches and then stack the 2 x c3850 switches
    What’s the best approach? Should I use cross-stack etherchannel between the c3850x and c2960x?

    1. Hi Emes,

      Yeah I always recommend cross chassis EC. I can’t think of any reason not to.

      A couple of observations though:

      – You can stack 8 high when using 2960Xs so you will need at least two stacks for your 10 switches
      – You don’t have enough SFPs to properly manage your oversubscription.

      You should aim for an oversub ratio of between 10:1 and 20:1. That means you should have a minimum of 2 uplinks for every 48 ports in the stack, and possibly more based on usage. You don’t have enough to meet that need.

      Another related issue is that you can probably only bundle 8 interfaces into an EC on the cat2k and cat3k (I’m pretty sure on this number but haven’t validated it). So in a stack of 8 high with 2x ports on each member, that’s of course 16 ports which means you’d need 2 ECs. And the issue there is that you’ve now got 2x logical uplinks which means Spanning Tree now comes into play. Complicated, but there it is :-) And having STP in the mix really undoes a lot of the benefits of stacking and cross-chassis EC.

      In order to avoid this, you might want to build smaller stacks (4 high), use 8 interfaces for uplinks in a single EC and have a decent oversub ratio and eliminate STP.

      Hope this helps!
