L3 vPC Support on Nexus 5k

So... I'm a little embarrassed to admit this, but I only very recently found out that there are significant differences in how Virtual Port Channels (vPC) behave on the Nexus 5k vs the Nexus 7k when it comes to forming routing adjacencies over the vPC.

Take the title literally!

I've read the vPC Best Practice whitepaper and have often referred others to it and also referred back to it myself from time to time. What I failed to realize is that I should've been taking the title of this paper more literally: it is 100% specific to the Nexus 7k. The behaviors the paper describes, particularly around the data plane loop prevention protections for packets crossing the vPC peer-link, are specific to the n7k and do not necessarily carry over to the n5k.

Read more →

Avoiding an ISSUe on the Nexus 5000

The idea for this post came from someone I was working with recently. Thanks Fan (and Carson, and Shree) :-)

In Service Software Upgrade (ISSU) is a method of upgrading software on a switch without interrupting the flow of traffic through the switch. The conditions for successfully completing an ISSU are usually pretty strict, and if you don't comply, the hitless upgrade can suddenly become impacting.

The conditions for ISSU on the Nexus 5000 are pretty well documented (cisco.com link); however, there are a couple of bits of knowledge that are not. This post is a reminder of the ISSU conditions you need to comply with and a call-out to the bits of information that aren't so well documented.

Read more →

Speaking Notes: The Data Center Network Evolution

I will be presenting at the Cisco Connect Canada tour in Edmonton and Calgary on November 3rd and 5th, respectively. My presentation is about that three letter acronym that everyone loves to hate: SDN :-)

I will talk about SDN in general terms and describe what it really means; what we're really doing in the network when we say that it's "software defined". No unicorns or fairy tales here, just engineering.

Next I'll talk about three areas where Cisco is introducing programmability into its data center solutions:

  • Application Centric Infrastructure
  • Virtual Topology System
  • Open NX-OS

Below are the notes I made for myself while researching these topics and preparing for the presentation. At the bottom of this post is a Q&A section with some frequently asked questions.

Read more →

Five Functional Facts about VXLAN

It seems appropriate to write an FFF post about Virtual Extensible LAN (VXLAN) now since VXLAN is the new hotness in the data center these days. With VMware's NSX using VXLAN (among other overlays) as a core part of its overall solution and the recent announcement of Cisco's Application Centric Infrastructure (ACI) and the accompanying Nexus 9000 switch, both of which leverage VXLAN for delivering a network fabric, it seems inevitable that network engineers will have to use and understand VXLAN in the not too distant future.

As usual, this post is not meant to be an introduction to the technology; I assume you have at least a passing familiarity with VXLAN. Instead, I will jump right into 5 operational/technical/functional aspects of the protocol.

For more information on VXLAN, check out the draft at the IETF.
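To make the encapsulation itself a little more concrete, here's a minimal Python sketch (my own illustration, not something from the post) that packs the 8-byte VXLAN header: a flags byte with the I bit set, a 24-bit VXLAN Network Identifier (VNI), and reserved fields. On the wire this header sits inside a UDP datagram (IANA-assigned port 4789) in front of the original Ethernet frame.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

def build_vxlan_header(vni: int) -> bytes:
    """Pack the 8-byte VXLAN header: flags(8) | reserved(24) | VNI(24) | reserved(8)."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08           # 'I' bit set: the VNI field is valid
    word1 = flags << 24    # upper byte = flags, lower 24 bits reserved (zero)
    word2 = vni << 8       # upper 24 bits = VNI, lowest byte reserved (zero)
    return struct.pack("!II", word1, word2)

# A frame destined for segment 5000 gets this header prepended, and the
# VTEP then wraps the result in UDP/IP toward the remote VTEP.
print(build_vxlan_header(5000).hex())  # -> 0800000000138800
```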

Read more →

Five Functional Facts about OTV

Following on from my previous "triple-F" article (Five Functional Facts about FabricPath), I thought I would apply the same concept to the topic of Overlay Transport Virtualization (OTV). This post will not describe much of the foundational concepts of OTV, but will dive right into how it actually functions in practice. A reasonable introduction to OTV can be found in my series on Data Center Interconnects.

So without any more preamble, here are five functional facts about OTV.

Read more →

DCI with LISP for Cold Migrations

Let's step back for a minute. So far in this series of blog posts on DCI, I've been focusing on extending the Layer 2 domain between data centers with the goal of supporting hot migrations (i.e., moving a virtual machine between sites while it's online and servicing users).

Is that the only objective with DCI?

Read more →

Cisco onePK: Now I Get It

I had an opportunity recently to sit in a Cisco onePK lab and it opened my eyes to exactly what Cisco is doing with onePK, why it's going to be so important as Software Defined Networking (SDN) continues to gain traction, and why onePK is different from what anyone else is doing in the industry.

onePK is a key element within Cisco's announced Open Network Environment SDN strategy. onePK is an easy-to-use toolkit for development, automation, rapid service creation and more. It enables you to access the valuable data inside your network via easy-to-use APIs.

Source: www.cisco.com/go/onepk

Since having my own eyes opened, I've been pondering how to explain my newfound understanding in a way that others will grasp, in particular business decision makers (BDMs) and technical decision makers (TDMs). I'm really, really struggling to come up with a good analogy for BDMs. I'm still working on that one. Surprisingly, I'm also struggling to come up with a sound analogy that will work with the majority of TDMs that I know. Maybe I shouldn't be so surprised at that, since all the TDMs I deal with are on the infrastructure side of things (networks, storage, compute, platform) and really don't deal with software. There's a gap there that I somehow need to bridge. I'm still pondering how to successfully do that.

However, there is a slice of the TDM population that I believe I can reach right now. These folks, like myself, have software and network experience. Maybe through open source projects, previous careers, or just mucking about with LAMP stacks in their own lab/home network, they understand programming semantics, APIs, and extending the functionality of third-party software.

I'm going to use a popular open source software package to draw some parallels with what Cisco onePK will soon allow organizations to do to their networks.

Read more →

An Introduction to the Nexus 7700

We're halfway through 2013 and we have our second new member of the Nexus family of switches for the year: the Nexus 7700. Here are the highlights:

  • Modular, chassis-based system
    • 18 slot (16 IO modules) and 10 slot (8 IO modules)
  • True front-to-back airflow
  • New fabric modules
    • (6) fabric modules (maximum) per chassis
    • 220G per slot per fabric module
    • 1.32Tbps per IO module slot
  • Supports F2E and newly announced F3 IO modules
Read more →

DCI: Using FabricPath for Interconnecting Data Centers

Here's a topic that comes up more and more now that FabricPath is getting more exposure and people are getting more familiar with the technology: Can FabricPath be used to interconnect data centers?

For a primer on FabricPath, see my previous article Five Functional Facts about FabricPath.

FabricPath has some characteristics that make it appealing for DCI. Namely, it extends Layer 2 domains while maintaining Layer 3 (i.e., routing) semantics. End host MAC addresses are learned via a control plane, FP frames contain a Time To Live (TTL) field that purges looping packets from the network, and there is no such thing as a blocked link: all links are forwarding and Equal Cost Multi-Pathing (ECMP) is used within the fabric. In addition, since FabricPath does not mandate a particular physical network topology, it can be used in spine/leaf architectures within the data center or for point-to-point connections between data centers.
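The TTL point is worth dwelling on since it's one of the big behavioral differences from classic Ethernet, where a forwarding loop lets frames circulate indefinitely. Here's a toy Python sketch of the idea (purely illustrative, not FabricPath's actual forwarding logic): a frame caught in a transient loop has its TTL decremented at every hop and is purged once the TTL reaches zero.

```python
from dataclasses import dataclass

@dataclass
class FPFrame:
    dst_switch_id: int
    ttl: int  # set by the ingress edge switch, decremented at every hop

def forward_one_hop(frame: FPFrame) -> bool:
    """Return True if the frame keeps forwarding, False if it gets purged."""
    frame.ttl -= 1
    if frame.ttl <= 0:
        return False  # dropped instead of looping forever
    return True

# Simulate a frame stuck in a loop between a pair of switches.
frame = FPFrame(dst_switch_id=42, ttl=32)
hops = 0
while True:
    hops += 1
    if not forward_one_hop(frame):
        break
print(f"Frame was purged on hop {hops}")  # bounded, unlike classic Ethernet flooding
```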

Sounds great. Now what are the caveats?

Read more →

Nexus 7000 IO Module SKU Cheat Sheet

Wow, the title of this post is a mouthful.

Similar to my previous post on the Nexus 2000 (Nexus 2000 Model Number Cheat Sheet), this post will explain what the letters and numbers mean in the Nexus 7000 IO module part numbers. This will allow you to quickly identify the characteristics of the card just by looking at the part number, which in turn should help you out as you're building BOMs and picking the right card for the job.

Update July 2, 2013: Updated to reflect release of the Nexus 7700 and F3 modules.

Read more →

DCI Series: Overlay Transport Virtualization

This is the third article in my series on Data Center Interconnection (DCI). In the first (Why is there a "Wrong Way" to Interconnect Data Centers?) I wrote about the risks associated with DCI when the method chosen is to stretch Layer 2 domains between the data centers. In the second article (DCI: Why is Stretched Layer 2 Needed?) I wrote about why the need exists for stretching Layer 2 domains between sites and also touched on why it's such a common element in many DCI strategies.
Read more →

DCI: The Need for Stretched Layer 2

In the previous article in this DCI series (Why is there a "Wrong Way" to Interconnect Datacenters?) I explained the business case for having multiple data centers and then closed by warning that extending Layer 2 domains was a path to disaster and undermined the resiliency of having two data centers.

Why then is stretching Layer 2 a) needed and b) a go-to maneuver for DCI?

Let's look at it from two points of view: technology and business.

Read more →

Why is there a "Wrong Way" to Interconnect Datacenters?

There's certainly a lot of focus on data center interconnection (DCI) right now. And understandably so since there are many trends in the industry that are making IT organizations look at data center redundancy. Among these trends are:

  1. The business is saying to IT that they require their IT services to be available at all times. In effect, the business is saying that they want to be shielded from technology issues, maintenance windows, and unplanned downtime because the IT services they consume (not all of them, mind you, but certainly some of them) are so critical to running the business that they cannot be without them (or, at least, cannot be without them for whatever period of time it would take IT to recover the service).
  2. The technical ability to move workloads between sites thanks to the near ubiquity of features like vMotion and Live Migration. The ability to pick up an application and swing it over to another location makes item #1 above far less daunting to IT shops and lowers the barrier to adoption.

In this post I'm going to talk about how IT can address item #1 above, the business need, in a manner that does not introduce hidden risk into the environment. This is a common conversation that a lot of IT organizations are having right now but unfortunately the easiest and most obvious outcome from those conversations is not always the one with the least risk.

In the second post of this DCI mini series, I'll talk more about item #2 since that's the one that drives a lot of the technical requirements that have to be met when delivering the overall solution to address #1.

Read more →

An Introduction to the Nexus 6000

There's a new Nexus in the family, the Nexus 6000. Here are the highlights.

Nexus 6001 vs Nexus 6004:

  • Size: 1 RU (6001); 4 RU (6004)
  • Ports: 48 x 10G + 4 x 40G (6001); 48 x 40G fixed + 48 x 40G expansion (6004)
  • Interface type: SFP+ / QSFP+ (6001); QSFP+ (6004)
  • Performance: Line rate Layer 2 and Layer 3
  • Latency: 1μs port to port
  • Scalability: 128K MAC + 128K ARP/ND (flexible config), 32K route table, 1024-way ECMP, 31 SPAN sessions
  • Features: L2/L3, vPC, FabricPath/TRILL, Adapter FEX, VM-FEX
  • Storage: FCoE
  • Visibility: Sampled NetFlow, buffer monitoring, latency monitoring, microburst monitoring, SPAN on drop/high latency
Read more →

Address Learning and the TRILL/FabricPath Control Plane

Do you ever find yourself in a conversation with someone where you attempt to explain a concept in detail and you realize that you don't know that concept at the level of detail that you thought you did? That happened to me recently. I thought I had a better handle on TRILL and FabricPath than I really did. Since I retain things far better when I write them down, I'm going to blog the differences between TRILL and FabricPath when it comes to address learning and what role the control plane plays in building the network topology.

Read more →

Cisco UCS Manager 2.1 Highlights

  • Service Profile Renaming: Yes, finally, you can rename service profiles. No more struggling to name your profiles perfectly the first time. When a profile is renamed, all the unique attributes, including the MACs, WWNs, UUID, etc, are preserved. This can be done when the server is live and online without any impact.
  • VM-FEX for Microsoft Hyper-V and KVM: In addition to vSphere, VM-FEX (which I've written about here) is now available when using the Hyper-V or KVM hypervisors on UCS.
Read more →

Nexus 2000 Model Number Cheat Sheet

A colleague of mine pointed something out the other day: the numbers and letters that make up the Nexus 2000 (FEX) model actually have meaning! No, I haven't been living under a rock. I think it's pretty clear that with a model number like "2248TP-E" the "22" indicates this is the 2200 series FEX and the "48" indicates it's got 48 ports. But what about the letters that follow the numbers?
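Just to illustrate the part that's already decoded above, here's a throwaway Python sketch (the function and field names are my own, purely hypothetical) that pulls the series and port count out of a model string and leaves the trailing letters alone; what those letters mean is exactly what the post goes on to explain.

```python
import re

def parse_fex_model(model: str) -> dict:
    """Split a Nexus 2000 model string like '2248TP-E' into series, ports, and suffix."""
    m = re.match(r"(\d{2})(\d{2})([A-Z]+(?:-[A-Z]+)?)$", model)
    if not m:
        raise ValueError(f"Unrecognized model string: {model}")
    series, ports, suffix = m.groups()
    return {
        "series": f"{series}00",  # '22' -> 2200 series FEX
        "ports": int(ports),      # '48' -> 48 host ports
        "suffix": suffix,         # 'TP-E' -> the letters the full post explains
    }

print(parse_fex_model("2248TP-E"))
# {'series': '2200', 'ports': 48, 'suffix': 'TP-E'}
```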

Read more →

What the fex is a FEX anyways?

This is a quick, high level rundown of Cisco's various fabric extender technologies and where each fits into the data center.

Read more →

Doing Etherchannel Over 3, 5, 6, and 7 Link Bundles

As a follow-up to my previous article on Port Channels titled "4 Types of Port Channels and When They're Used", I wanted to talk a bit about the long-standing rule that says you should always create your Etherchannel (EC) bundles with a number of links that works out to a power of two (i.e., 2, 4, or 8 links). That rule is less applicable today than it used to be.
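The old rule comes from the way classic platforms hash flows into a fixed set of eight buckets and then map those buckets onto the member links: eight buckets divide evenly over 2, 4, or 8 links, but with 3, 5, 6, or 7 links some members end up carrying more buckets, and therefore more traffic, than others. Here's a quick Python sketch of that arithmetic (a simplified, generic model, not any particular platform's hash algorithm):

```python
from collections import Counter

HASH_BUCKETS = 8  # classic EtherChannel platforms hash flows into 8 buckets

def bucket_distribution(num_links: int) -> list:
    """How many of the 8 hash buckets land on each member link."""
    counts = Counter(bucket % num_links for bucket in range(HASH_BUCKETS))
    return sorted(counts.values(), reverse=True)

for links in (2, 3, 4, 5, 6, 7, 8):
    dist = bucket_distribution(links)
    print(f"{links} links -> buckets per link {dist}, "
          f"busiest link carries {max(dist) / HASH_BUCKETS:.0%} of the buckets")

# 2, 4, and 8 links split the buckets evenly; 3, 5, 6, and 7 leave some
# links carrying roughly twice as many buckets as others, which is where
# the power-of-two guidance came from.
```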

Read more →

4 Types of Port Channels and When They're Used

The other day I was catching up on recorded content from Cisco Live! and I saw mention of yet another implementation of port channels (this time called Enhanced Virtual Port Channels). I thought it would make a good blog entry to describe the differences of each, where they are used, and what platforms each is supported on.

Read more →

Five Features of Brocade VCS

Virtual Cluster Switching (VCS) is Brocade's brand of data center Ethernet switching. VCS allows for the creation of a network fabric that's capable of converging storage and data traffic via standards-based data center bridging. It also solves the "Spanning Tree Protocol (STP) problem" by implementing a standards-based TRILL data plane paired with Brocade's own control plane in the form of Fabric Shortest Path First (FSPF). Together, this data and control plane enable the "routing" of MAC addresses through the fabric, negate the need for STP, enable the use of all cabled links, and prevent traffic loops. VCS is currently only available on the VDX line of switches from Brocade.

In this post I'm going to outline five aspects of VCS that I found particularly interesting or unique. This is a companion article to an earlier one titled Five Functional Facts about FabricPath where I broke down five features of Cisco's fabric technology.

Read more →

Five Functional Facts about FabricPath

FabricPath is Cisco's proprietary, TRILL-based technology for encapsulating Ethernet frames across a routed network. Its goal is to combine the best aspects of a Layer 2 network with the best aspects of a Layer 3 network.

  • Layer 2 plug and play characteristics
  • Layer 2 adjacency between devices
  • Layer 3 routing and path selection
  • Layer 3 scalability
  • Layer 3 fast convergence
  • Layer 3 Time To Live field to drop looping packets
  • Layer 3 failure domain isolation

An article on FabricPath could go into a lot of detail and be many pages long but I'm going to concentrate on five facts that I found particularly interesting as I've learned more about FabricPath.

Read more →