There's a new Nexus in the family, the Nexus 6000. Here are the highlights.

Nexus 6001 vs. Nexus 6004 at a glance (where only one value is listed, it applies to both switches):

  • Size: 1 RU (6001) / 4 RU (6004)
  • Ports: 48 x 10G + 4 x 40G (6001) / 48 x 40G fixed + 48 x 40G expansion (6004)
  • Interface type: SFP+ and QSFP+ (6001); QSFP+ only (6004)
  • Performance: line-rate Layer 2 and Layer 3
  • Latency: 1μs port to port
  • Scalability: 128K MAC + 128K ARP/ND (flexible config), 32K route table, 1024-way ECMP, 31 SPAN sessions
  • Features: L2/L3, vPC, FabricPath/TRILL, Adapter FEX, VM-FEX
  • Storage: FCoE
  • Visibility: Sampled NetFlow, buffer monitoring, latency monitoring, microburst monitoring, SPAN on drop/high latency

The biggest difference is the introduction of high-density 40G. And this isn't 4 x 10G bundled together; it's true 40G. Not only can the 6004 push true 40G flows, it can also do 40G FCoE. That's a lot of bandwidth to have at the edge of the network.

The other feature that sticks out for me is the integrated, line-rate Layer 3 forwarding. The Nexus 5500 depends on an additional module to perform the L3 duties. This module connects to the switch fabric over a 16 x 10G port channel, limiting the L3 performance of the box to 160Gbps. The Nexus 6000 will be a good fit for those environments that need > 160Gbps of L3 close to the edge of the network.
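
A quick back-of-the-envelope sketch makes that comparison concrete. The only inputs are the figures already quoted above; nothing here comes from a datasheet:

```python
# Back-of-the-envelope math for the L3 comparison above.
# The Nexus 5500's Layer 3 module hangs off a 16 x 10G internal port channel,
# so routed traffic tops out at that aggregate regardless of front-panel capacity.

l3_module_links = 16        # internal port-channel members to the L3 module
link_speed_gbps = 10        # each member is a 10G link

n5500_l3_ceiling = l3_module_links * link_speed_gbps
print(f"Nexus 5500 L3 ceiling: {n5500_l3_ceiling} Gbps")      # 160 Gbps

# The Nexus 6001, by contrast, routes at line rate on every front-panel port.
n6001_front_panel = 48 * 10 + 4 * 40
print(f"Nexus 6001 line-rate L3: {n6001_front_panel} Gbps")   # 640 Gbps
```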

Lastly, the port density on the 6004 is pretty nice. Up to 96 x 40G ports in 4RU. Additionally, ports can be broken out into 4 x 10G for very high density 10G connectivity (up to 384 x 10G in 4RU). You can even mix and match for 10G southbound and 40G northbound if you want.
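
The breakout arithmetic is easy to sanity-check. Here's a small sketch using only the numbers above (the 8-port northbound split at the end is purely an example, not a recommended design):

```python
# Port-count arithmetic for the Nexus 6004 described above.
qsfp_ports = 96            # 48 fixed + 48 via expansion
breakout_per_qsfp = 4      # each 40G QSFP+ port can break out into 4 x 10G

max_40g_ports = qsfp_ports
max_10g_ports = qsfp_ports * breakout_per_qsfp
print(f"All 40G:          {max_40g_ports} x 40G in 4RU")    # 96 x 40G
print(f"All broken out:   {max_10g_ports} x 10G in 4RU")    # 384 x 10G

# Mix and match: e.g. keep 8 ports at 40G northbound, break out the rest southbound.
northbound_40g = 8
southbound_10g = (qsfp_ports - northbound_40g) * breakout_per_qsfp
print(f"Mixed: {northbound_40g} x 40G up, {southbound_10g} x 10G down")
```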

Here's my personal opinion on where each of these switches will fit nicely into the network and what features make them suited for that position.

Nexus 6001

  • End of row aggregation for Nexus 2000 Fabric Extenders
    • Supports connectivity to FEXes just like Nexus 5500
    • vPC support
    • 48 x 10G ports in 1RU for that downstream connectivity
    • 4 x 40G northbound for an oversubscription ratio of 3:1 (and that's using only 4 fiber runs and 4 ports on the upstream switch; the arithmetic is sketched just after this list)
  • Leaf switch in a spine/leaf topology
    • FabricPath/TRILL support makes this a nice leaf node
    • 10G connectivity for bringing edge devices onto the fabric
    • 4 x 40G ECMP into the spine layer (or 16 x 10G ECMP)
  • For very small environments, a nice collapsed core/aggregation switch
    • With the line-rate Layer 2 and Layer 3 functionality, this switch would be a good collapsed core/aggregation box for a small data center
    • Plenty of 10G ports for high speed downstream connectivity
    • vPC for active/active uplinks from edge switches
    • 4 x 40G (or 16 x 10G) uplinks ready for expanding the environment into true core + aggregation
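
To make the 3:1 figure from the first bullet concrete, here's the arithmetic as a tiny Python sketch (the oversubscription() helper is just an illustrative name, not anything from a Cisco tool):

```python
# Oversubscription math for the Nexus 6001 as FEX aggregation / leaf.
def oversubscription(downstream_gbps: float, upstream_gbps: float) -> float:
    """Ratio of southbound bandwidth to northbound bandwidth."""
    return downstream_gbps / upstream_gbps

downstream = 48 * 10      # 48 x 10G edge-facing ports
uplinks_40g = 4 * 40      # 4 x 40G northbound
uplinks_10g = 16 * 10     # or 16 x 10G northbound via breakout

print(f"With 4 x 40G uplinks:  {oversubscription(downstream, uplinks_40g):.0f}:1")  # 3:1
print(f"With 16 x 10G uplinks: {oversubscription(downstream, uplinks_10g):.0f}:1")  # 3:1
```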

Nexus 6004

  • End of row aggregation for Nexus 2000 Fabric Extenders
    • Like the 6001 and 5500, supports FEX connectivity and vPC
    • Very high density 10G for connectivity down the row
    • 40G uplinks to the aggregation/core/spine
  • Spine or leaf switch in a Clos topology
    • FabricPath and TRILL support
    • 1024-way ECMP for very fine-grained load balancing of flows across fabric connections (a toy sketch of hash-based ECMP follows this list)
  • For small to medium data centers, the 6004 would make an awesome collapsed core/aggregation box
    • Very dense 10G for downstream connectivity; 40G if needed
    • Future-proofed to 40G all the way around
    • Line-rate Layer 2 and Layer 3 makes it a great Layer 2/Layer 3 boundary
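
Since the 1024-way ECMP point is really about per-flow hashing, here's a toy Python sketch of the idea. To be clear, the hash function below is purely illustrative and not what the Nexus hardware implements; it just shows how hashing a flow's 5-tuple onto N equal-cost paths keeps each flow on a single path while spreading the population of flows across all of them.

```python
# Toy illustration of hash-based ECMP path selection. This is NOT the hash the
# Nexus silicon uses; it only shows the general idea: each flow's 5-tuple hashes
# consistently to one of N equal-cost next hops, so packets within a flow stay
# in order while the aggregate load spreads across every path.
import hashlib
from collections import Counter

def ecmp_path(flow_tuple: tuple, path_count: int) -> int:
    """Pick a path index for a flow by hashing its 5-tuple."""
    digest = hashlib.sha256(repr(flow_tuple).encode()).digest()
    return int.from_bytes(digest[:8], "big") % path_count

paths = 1024  # the ECMP width quoted for the Nexus 6000
flows = [("10.0.0.1", "10.1.0.1", 6, 1024 + i, 443) for i in range(10000)]
load = Counter(ecmp_path(flow, paths) for flow in flows)

print(f"{len(load)} of {paths} paths in use; busiest path carries {max(load.values())} flows")
```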

And of course both switches are great candidates for a unified fabric with their support for storage networking in the form of Fibre Channel over Ethernet.

I'm pretty pumped about this switch. I think it's going to see a lot of interest and that there will be a lot of people eager to deploy it.


Disclaimer: The opinions and information expressed in this blog article are my own and not necessarily those of Cisco Systems.