Cisco UCS Platform: Would A Blade Server Chassis By Any Other Name Smell As Sweet? #datacenter

By | Cisco, Networking, Storage, UCS, VMware

When a familiar brand such as Nike or McDonald’s releases a new product, an amazing thing occurs that has absolutely nothing to do with what they are selling. These companies have such a strong brand relationship with the consumer that the product is immediately assigned value. Prospective customers are open to at least trying it, while loyal customers have an automatic reaction to purchase (Apple, anyone?).

This is due in no small measure to the top-tier marketing departments these companies employ, which deliver results time after time. I would speculate that nine out of ten folks would immediately have a high level of confidence in a new product after seeing only a couple of well-lit pictures, accompanied by the product’s jingle or tagline, all crammed into the thirty seconds allotted by the standard network commercial slot.

You know what industry that tenth guy is in?
Information Technology.

We fellow members of this exclusive group are adrift in a sea of product-knowledge doubt. We want to touch it. We want not only to see it work, but to see it work in our shop. It seems the moment we hear about a new product or idea, we immediately figure out why it WON’T work.

This is obviously a survival mechanism. In the IT world, we have all been burned by the silver-tongued, snake-oil-swindling technology salesman. We put our reputation on the line for a piece of hardware or software that just didn’t deliver, which immediately meant that our users felt WE didn’t deliver. Never again…

Recently, Cisco released its new Unified Computing System (UCS). While Cisco has certainly made its fair share of missteps in the past, its track record still holds up better than most. However, I was not prepared for the level of doubt from the teams I have met with right out of the gate. What really caught me by surprise was the amount of urban legend surrounding the product. Most folks simply consider it a new blade chassis. Technically true, but one could also say a fire truck is just another truck. Yet not all the speculation was bad. I have also heard good things that aren’t true, such as the ability to spread a single VM across multiple blades.

Trucks: One carries a Dalmatian along. The other only has room for one.

What intrigues me most about the platform isn’t a single feature of the solution; it’s how it all comes together. IP KVMs are common, but in the UCS that capability is included. Being able to lay down a server hardware “profile” across generations of blades makes expansion more straightforward and reduces opportunities for error. The network in the chassis itself looks like a line card in the Nexus switches that sit on top, simplifying management. The list just keeps on going.

Maybe that’s the issue. We are expecting a single feature to set the new Cisco solution apart from the sea of server chassis platforms out there, but instead it is dozens of nuances that really deserve a deeper dive into Cisco’s UCS details.

So I finally had enough and asked Cisco to do a “dispel the rumors” seminar for a rather large group of customers who wanted to get to the bottom of this.

Not only will we view the slides, but our “test drive” of the UCS will center mainly on the management interface and the chassis itself. All questions will be answered and all myths exposed. Personally, I am most interested in receiving the feedback after the event to hear what folks were most surprised to learn; overall, though, I think the event will be great for everyone, including me. The more time I spend with the platform, the more my knowledge of its inner workings increases: I can see how it’s not all about the server blade, but instead about the integration points to the network, storage, and especially VMware.

Photo Credits via Flickr: robin_24, Squiggle and wonderbjerg

EMC VSI Plug-in To The Rescue! Saving You From Management via Pop Up Screens (#fromthefield)

By | Clariion, EMC, Networking, Virtualization, VMware

Most administrators have multiple monitors so that they can manage multiple applications in one general view. Unfortunately, what ends up happening is that your monitors start looking like a pop-up virus: a window for storage, a window for networking, a window for email, and a window for the Internet.

EMC and VMware have brought an end to juggling windows to manage the storage behind your virtual environment. If you haven’t heard already, EMC has released new VMware storage plug-ins. Now, I don’t know about you, but as a consultant and integrator I can tell you that mounting NFS shares to VMware is a bit of a process. If you’re not familiar with Celerra or virtual provisioning, adding NFS storage can be a hassle, no doubt.

1. Create the interfaces on the Control Station.
2. Create a file system.
3. Create the NFS export and add all hosts to the root and access lists.
4. Create a datastore.
5. Rescan each host individually until the storage appears on every host.
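To see why steps 4 and 5 get tedious, here is a minimal sketch of the per-host loop an administrator is effectively performing by hand. The host objects and helper names are hypothetical stand-ins, not a real vSphere API (with pyVmomi, the mount roughly corresponds to `host.configManager.datastoreSystem.CreateNasDatastore()`):

```python
# Hypothetical sketch: mount the same NFS export as a datastore on every
# host in a cluster, one host at a time. `hosts` is a list of stand-in
# objects, not a real vSphere API type.
def mount_nfs_datastore_everywhere(hosts, nfs_server, remote_path, ds_name):
    """Create the NFS datastore on each host so all of them see it."""
    mounted_on = []
    for host in hosts:
        # One call per host -- miss a host and it won't see the datastore.
        host.create_nas_datastore(nfs_server, remote_path, ds_name)
        mounted_on.append(host.name)
    return mounted_on
```

Multiply that loop by every new export and every new host, and the appeal of a wizard that does it for you is obvious.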


The EMC VSI unified storage plug-in allows you to provision NFS storage from your Celerra right from the Virtual Center client. The only thing that needs to be completed ahead of time is the Data Mover interfaces. Once you configure the interfaces, you’ll be able to provision NFS storage from your Virtual Center client. When you are ready to provision storage, download and install the plug-in and NaviCLI from your Powerlink account, open your Virtual Center client, right-click your host, select EMC -> Provision Storage, and the wizard will take care of the rest. When the wizard asks for an array, select either Celerra or Clariion (if you select Celerra, you will need to enter the root password). The great thing about the plug-in is that it gives VMware administrators the ability to provision storage from the VC interface.

The EMC VSI pool management plug-in allows you to manage your block-level storage from your VC client as well. We all know the biggest pain is having to rescan each host over and over again just so they each see the storage. Congratulations! The VSI pool management tool allows you to both provision storage and scan all HBAs in the cluster, all with a single click. With EMC Storage Viewer, locating your LUNs and volumes is just as easy. Once installed, Storage Viewer gives you a full view into your storage environment right from your VC client.
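What that single click replaces is, in effect, this loop. Again, this is a hypothetical sketch with stand-in objects; in pyVmomi the rescan call is `host.configManager.storageSystem.RescanAllHba()`:

```python
# Hypothetical sketch: rescan every host's HBAs and report which LUNs
# each host sees afterwards. `hosts` are stand-in objects, not a real API.
def rescan_cluster(hosts):
    """Rescan all HBAs on every host so newly presented LUNs show up."""
    visible = {}
    for host in hosts:
        host.rescan_all_hba()          # the slow, repetitive part
        visible[host.name] = sorted(host.visible_luns())
    return visible
```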

In summary, these plug-ins will increase your productivity and give some room back to your monitors. If you don’t have a Powerlink account, sign up for one; it’s easy to do, and Powerlink has more information on how to manage VMware and EMC products.

Hope you have enjoyed my experience from the field!

Photo Credit: PSD

Update on FCoE: The Current State of Real World Deployments

By | Networking, Storage

FCoE has been out in the marketplace now for approximately two years and I thought it’d be good to discuss what we’re seeing in the real world regarding deployment.


For those not familiar with Fibre Channel over Ethernet (FCoE), it is being hailed as a key new technology and a first step toward consolidating Fibre Channel storage networks and Ethernet data networks. This has several benefits, including simplified network management; elimination of redundant cabling, switches, and the like; and reduced power and cooling requirements. Performance over the Ethernet network is similar to a traditional Fibre Channel network because the 10Gb connection is “lossless”. Essentially, FCoE encapsulates FC frames in Ethernet frames and uses Ethernet links instead of Fibre Channel links. Underneath it all, it is still Fibre Channel, and storage management is done in much the same way as with traditional FC interfaces.
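To make the encapsulation concrete, here is a toy sketch in Python. The 0x8906 EtherType is the real IEEE-assigned value identifying FCoE traffic, but this sketch deliberately omits what a real implementation carries: the FCoE header (version, SOF delimiter), the EOF trailer, padding, and the lossless-Ethernet machinery underneath:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType for FCoE frames

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Toy sketch: wrap a raw Fibre Channel frame in an Ethernet header.

    Real FCoE also inserts an FCoE header before the FC frame and an
    EOF trailer after it; both are omitted here for clarity.
    """
    if len(dst_mac) != 6 or len(src_mac) != 6:
        raise ValueError("MAC addresses must be 6 bytes")
    # Standard Ethernet II framing: destination MAC, source MAC, EtherType,
    # then the encapsulated FC frame as the payload.
    return dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE) + fc_frame
```

The point of the sketch is simply that the FC frame rides along untouched as the Ethernet payload, which is why storage management stays so familiar.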


Across the IDS customer base in the Midwest, adoption is still relatively low. I would attribute this to the fact that many customers in the Midwest have found that traditional 1GbE iSCSI bandwidth will suffice for their environment. They never had a need to implement Fibre Channel; hence, there is little need to move to an FCoE environment. The most common FCoE switch is the Nexus 5000. Although some customers may not implement FCoE, we are seeing significant adoption of the Nexus line, with the 5000 often being used as a straight 10GbE switch. Even for medium-sized businesses that haven’t seen a need to adopt 10GbE, the drive to virtualize more will require greater aggregate network bandwidth at the ESX server, making 10GbE a legitimate play. In that case, the customer can simply continue to run iSCSI or NFS over the 10GbE connection without implementing FCoE.

NFS and iSCSI are great, but there’s no getting away from the fact that they depend on TCP retransmission mechanics. This is a problem in larger environments, which is why Fibre Channel has remained a very viable technology. The higher you go in the network protocol stack, the longer the latencies involved in various operations. Recovery from a state change or loss of connection can take seconds, and often many tens of seconds; EMC, NetApp, and VMware recommend that timeouts for NFS and iSCSI datastores be set to at least 60 seconds. FCoE, by contrast, expects most transmission-loss handling to be done at the Ethernet layer, with lossless congestion handling and legacy CRC mechanisms for line errors. This means link-state sensitivity is in the millisecond or even microsecond range. That is an important difference, and it is ultimately why iSCSI didn’t displace Fibre Channel in larger environments.

Until recently, storage arrays did not support native FCoE connectivity. NetApp was first to market with FCoE support, though there were some caveats, and the technology was “Gen 1”, which most folks prefer to avoid in production environments. Native FCoE attach also did not support multi-hop environments. FCoE has now been ratified as a standard, some of the minor “gotchas” have been taken care of with firmware updates, and EMC has released UltraFlex modules for the CX/NS line that let you natively attach your array to an FCoE-enabled switch. These capabilities will most certainly accelerate the deployment of FCoE.

At the host level, early versions of the Converged Network Adapter (CNA) were actually two separate chipsets on a single PCI card: a duct-tape-and-baling-wire way to get host support for FCoE to market quickly. Now, Gen 2 CNAs based on a single chipset are hitting the market. FCoE on the motherboard is also coming in the not-too-distant future, and these developments will further accelerate adoption of FCoE.


The best use case for FCoE is still customers who are building a completely new data center or refreshing their entire data center network. I would go so far as to say it is a no-brainer to deploy 10GbE infrastructure in these situations. For customers with bandwidth needs exceeding 60MB/sec, it will most certainly make sense to leverage FCoE functionality. With a 10GbE infrastructure already in place, the uplift to implement FCoE should be relatively minimal. One important caveat to consider before implementing a converged infrastructure is to have organizational discussions about management responsibility for the switch infrastructure. This particularly applies to environments where the network team is separate from the storage team. Policies and procedures will have to be put in place for one group to manage the device, or for ACLs and a rights-delegation structure that allow the LAN team to manage LAN traffic and the storage team to manage SAN traffic over the same wire.

The above option is a great use case, but it still involves a fair number of pieces and parts, despite being streamlined compared to an environment where LAN and SAN are completely separate. Another use case for implementing FCoE today that is incredibly simple and streamlined is to make it part of a server refresh. The Cisco UCS B-Series blade chassis offers some impressive advantages over other blade options, and FCoE is built right in. This allows the management and cabling setup of the Cisco UCS to be much cleaner than other blade chassis options. With FCoE already part of the UCS chassis right out of the box, relatively few infrastructure changes are required in the environment, management is handled from the same GUI as the blade chassis, and there is no need to do any cabling other than perhaps adding an FC uplink to an existing FC SAN environment, if one exists.

Note: Further reading on FCoE

(Image credit: karamchedu from Flickr)

7.5 Reasons Why I Like the Nexus 5000 Series Switches

By | Cisco, Networking

1) vPC – Virtual Port Channels

A port-channel-based connectivity option that allows downstream switches and hosts to connect to a pair of Nexus 5000 vPC peer switches as if they were one switch. This allows the host or switch to use two or more links at full capacity.
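As a sketch of what this looks like on the switch (NX-OS syntax from memory; the domain ID, keepalive address, and interface numbers are made up, and the vPC peer-link configuration is omitted):

```
feature vpc
vpc domain 10
  peer-keepalive destination 10.0.0.2
!
interface port-channel 20
  switchport mode trunk
  vpc 20
!
interface Ethernet1/1
  description uplink to downstream switch
  channel-group 20 mode active
```

The same vPC number is configured on both peer switches, which is what lets the downstream device treat the pair as one logical switch.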

2) Copper SFP+ Twinax cable

A low-power, cost-effective option for connecting servers and FEX modules to the Nexus 5000. Twinax cables are available in 1-, 3-, 5-, and now 7- and 10-meter lengths.

3) Nexus 2000 fabric extenders – FEX

These are like “remote line cards” that attach to and are managed by the Nexus 5000. 24- and 48-port 1 Gbps FEXs are available, as are 32-port 1/10 Gbps FEXs. They can be connected to the Nexus 5000 with SFP+ copper Twinax cable or, for longer runs, SFP+ optics.

4) Expansion Modules

Each Nexus switch has one or two open expansion slots. These can accommodate a variety of modules, including additional 10 Gig ports, 4 and 8 Gbps native Fibre Channel ports, and even a mix of both.

5) The new Nexus 5548

Up to 48 10 Gig ports and 960 Gbps of throughput in a 1U chassis!

6) Unified Fabric

LAN and SAN on the same Layer 2 Ethernet. This allows full SAN/LAN redundancy with just two cables per server. Great for ESX servers, which can otherwise require many network and Fibre Channel cables.

7) NX-OS

Cisco’s highly resilient & modular operating system, which is based on Cisco’s rock-solid MDS 9000 SAN-OS.

And for that extra .5 reason why I like the 5000 series switch, drumroll please…

7.5) It’s Silver!