
Opening New Doors with Cisco UCS Zoning of Fabric Interconnects


With the release of UCS Manager 2.1 in Q4 of 2012, full zoning configuration is now supported on the Fabric Interconnects when SAN storage is attached directly to them through ports configured as FC storage ports. Why is this interesting?

With full zoning support, the Fabric Interconnects can now function as a Fibre Channel switch, eliminating the need for a separate, costly Fibre Channel fabric. This opens the door for smaller environments to consider Fibre Channel within the Cisco UCS platform.

Configuring the Fabric Interconnects for zoning is relatively simple.

Here it is outlined at a high level:

  1. Put the Fabric Interconnects into FC Switch Mode (reboot required)
  2. Configure Unified Ports, setting ports as Storage Ports (reboot required)
  3. Create your VSANs (one per Fabric Interconnect, making sure to Enable FC Zoning) and assign them to the Storage Ports
  4. Create your Storage Connection Policies
  5. Add your FC Target (typically your SAN WWPN)
  6. Create your SAN Connection Policies
  7. Add your vHBA Initiator Groups (assigning your vHBA templates to your Storage Connection Policies)
  8. Associate your newly created SAN Connection Policies to the appropriate Service Profile Template

Once your blades are booted, the vHBAs will log into your SAN and you will be able to perform the necessary steps on the SAN to present LUNs to the blades.
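For shops that prefer to script this against UCS Manager rather than click through the GUI, the VSAN creation in step 3 can also be done programmatically. Below is a minimal sketch using Cisco's ucsmsdk Python SDK; the UCSM address, credentials, VSAN name/ID, and FCoE VLAN are placeholder assumptions, and the exact FabricVsan attribute names are based on my reading of the UCSM object model, not on anything in this post.

```python
# Minimal sketch: create a zoning-enabled VSAN on Fabric Interconnect A with the
# Cisco UCS Manager Python SDK (ucsmsdk). The address, credentials, and VSAN
# values below are placeholder assumptions, not values from this article.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.fabric.FabricVsan import FabricVsan

handle = UcsHandle("ucsm.example.local", "admin", "password")  # hypothetical UCSM address
handle.login()

# Step 3 above: one VSAN per Fabric Interconnect, with FC zoning enabled.
vsan_a = FabricVsan(
    parent_mo_or_dn="fabric/san/A",  # SAN cloud, fabric A
    name="VSAN100-A",
    id="100",
    fcoe_vlan="100",
    zoning_state="enabled",          # the "Enable FC Zoning" checkbox
)
handle.add_mo(vsan_a, modify_present=True)
handle.commit()

handle.logout()
```

The same build-object, add, and commit pattern applies if you go on to script the storage connection and SAN connectivity policies from the later steps.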

Once the entire configuration in the steps above is in place, UCS zoning is a very nice, fast, and capable technology. At a large scale, Cisco UCS zoning is not a replacement for a dedicated Fibre Channel switching fabric. However, in a smaller environment where the Cisco UCS only needs to interconnect with the SAN storage, it could be a great fit.

 

Photo by fulanoinc


Cabling: Keeping It Simple



When I was shadowing and learning the Cisco Unified Computing System (UCS), one thing my mentor kept commenting on was how clean the cabling was and how much of it was eliminated. I typically implement the UCS chassis as part of a VMware deployment, and cabling is a top priority.

Before working with the UCS chassis, a typical VMware deployment for me was a cluster of 3 to 5 physical hosts.

Now, let's take a step back and do a quick checklist of what the physical cabling might look like.

 

5 physical hosts, each with the following:

• 2 power cords
• 4 to 6 Ethernet cables (management, vMotion, VM network, and storage)
• 1 cable for DRAC, iLO, CIMC, or similar remote management

That adds up to as many as 9 cables per host, and some installations call for even more. Remember, that is only one host; we still need to cable 4 more hosts! Now your cable total is at 45.

45 CABLES! Are you kidding me?!?!

Take a look at a UCS chassis with 5 blades. Let's assume we are using a Cisco UCS 5108 chassis and two Cisco 6248 Fabric Interconnects, with 1 Gb Ethernet uplinks to the core switching and a direct connection to the storage device.

• 8 power cords (2 for each Fabric Interconnect and 4 for the UCS blade chassis)
• 8 Twinax server uplinks from the chassis to the Fabric Interconnects
• 8 × 1 Gb Ethernet uplinks (4 from each Fabric Interconnect) to the core
• 4 fiber links (2 from each Fabric Interconnect) to the storage device

That totals just 28 cables for the entire environment, compared with 45 for the physical servers. Plus, you still have three open slots on the UCS chassis to add 3 more blades, and you don't have to add any more cables.
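Just to make the comparison concrete, here is a small Python sketch that re-tallies the cable counts from the two scenarios above; every number in it is taken straight from the lists in this post.

```python
# Re-tally the cable counts from the two scenarios above.

# Traditional rack deployment: per-host cabling (the higher figure from the list above)
per_host = {
    "power": 2,
    "ethernet": 6,      # management, vMotion, VM network, storage
    "remote_mgmt": 1,   # DRAC, iLO, CIMC, or similar
}
hosts = 5
rack_total = hosts * sum(per_host.values())

# UCS: one 5108 chassis plus two 6248 Fabric Interconnects, cabled once for the environment
ucs = {
    "power": 8,                   # 2 per Fabric Interconnect + 4 for the chassis
    "twinax_server_uplinks": 8,   # chassis to Fabric Interconnects
    "ethernet_to_core": 8,        # 4 per Fabric Interconnect
    "fiber_to_storage": 4,        # 2 per Fabric Interconnect
}
ucs_total = sum(ucs.values())

print(f"Rack servers: {rack_total} cables")  # 45
print(f"UCS chassis:  {ucs_total} cables")   # 28
```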

Another beauty of the Cisco UCS chassis is that if you need to add more network adapters to your hosts, no additional cabling is required. Just a few simple changes on your blades and you're done.

Photo by Lacrymosa

UCS Undressed: Inside A Cisco UCS C Series Installation


Over the past few weeks I have had the opportunity to install the Cisco UCS C-Series rack servers for the first time. Throughout the installation I was very impressed by how easy it was to install memory and extra PCIe cards compared with my previous experiences. It was basically one screw! The space and simplicity made it exponentially easier, since maneuvering around wires and plastic was no longer necessary. The model I installed was a UCS C210 M2, which is 2U in height.

I ran into one problem right away, or at least I thought I did! To set the scene, I had to install a quad-port 1 Gb NIC and a dual-port 4 Gb Fibre Channel card in each server. From past experience I knew I had to figure out which of the five PCIe slots could support the throughput. After looking it up on the spec sheet I found that all five can: they support PCIe 2.0 x8. This made my installation a lot easier and will inform future decisions about upgrades.

After racking the servers it was time to configure the CIMC. Like other remote management cards, the process was simply to get into setup (by pressing F2), enter an IP address, and go. The CIMC, or Cisco Integrated Management Controller, is a full-featured remote management solution that includes:

• KVM
• power control
• system alerting
• virtual media
• BIOS updating
• … a plethora of other applications

It is equivalent to HP Integrated Lights-Out (iLO) and the Dell Remote Access Card (DRAC), and it comes standard on the C-Series. One aspect of the CIMC that caught my attention instantly was that virtual media stays connected even after a power cycle, so there is no pausing the system to reconnect an ISO after a restart.
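If you later want to script against the CIMC instead of using the web interface, Cisco also publishes an IMC Python SDK (imcsdk). The sketch below is only a rough illustration under that assumption; the address, credentials, and the specific attributes read from the rack-unit object are my assumptions, not something covered in this post.

```python
# Rough sketch using Cisco's IMC Python SDK (imcsdk); the address, credentials,
# and attribute names here are assumptions for illustration only.
from imcsdk.imchandle import ImcHandle

handle = ImcHandle("cimc.example.local", "admin", "password")  # hypothetical CIMC address
handle.login()

# A C-Series rack server is modeled as the managed object "sys/rack-unit-1".
rack_unit = handle.query_dn("sys/rack-unit-1")
print("Model:", rack_unit.model)
print("Serial:", rack_unit.serial)
print("Power state:", rack_unit.oper_power)

handle.logout()
```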

I am more than impressed with the ease and simplicity of installing and configuring the C-Series servers. Over the next few weeks I will be getting my hands on a full UCS chassis filled with B-Series blades, and I am looking forward to the same simplicity and ease from Cisco.

Cisco UCS: On Trial in the Court of Public Opinion (and the Verdict Is?)


There have been some rumors floating around regarding the viability of Cisco’s UCS in the market and whether Cisco will continue with it.

My take is that we are talking about a generation-1.5 product, and as anyone in technology knows, newcomers to a vertical always have to make a land grab, even a newcomer positioned like Cisco, a monster in the switching vertical. The question at hand is:

Does UCS differentiate itself as a product in the market?

I can honestly say that even the authors predicting the imminent end of UCS concede that it does. Once Cisco is confident in its market share, which in my opinion is growing at a frightening pace, we will see UCS prices (and Cisco margins) begin to rise. Frankly, I don't think Cisco field reps know how to sell this yet (again, yet). Once they get the hang of it, Dell is going to find itself in real trouble in the blade space.

Also, it is a mistake to think of UCS as merely a server platform: the true beauty of it is how it ties server and network physical management into a single construct, with the modularity and flexibility to adapt and upgrade as the standards in both spaces evolve (again, at a frightening pace). The competitors' management tools usually come at a cost and still do not offer the same abilities as the tools that Cisco is giving away for free. And finally, I would be interested to see how much Nexus has been sold à la carte versus inside the UCS. I would consider UCS a success for Cisco if only for the remarkable Nexus footprint it has generated, as Nexus is now positioned as the obvious future of Cisco.

I don’t think UCS is going anywhere but into more data centers. Granted, I am sold on the technology more than the business angle, but IT folks are generally willing to pay more for the best and most scalable tools out there. Oddly enough, today with UCS, they don’t have to.

Photo Credit: steakpinball

Cisco UCS Platform: Would A Blade Server Chassis By Any Other Name Smell As Sweet? #datacenter


When a familiar brand such as Nike or McDonald's releases a new product, an amazing thing occurs which has absolutely nothing to do with what they are selling. These companies have such a strong brand-name relationship with the consumer that the product is immediately assigned value. Prospective customers are open to at least trying it, while loyal customers have an automatic reaction to purchase (Apple, anyone?).

This is due in no small measure to the apex marketing departments that these companies employ, which deliver results time after time. I would speculate that nine out of ten folks would immediately have a high level of confidence in the new product after seeing only a couple of well-lit pictures, accompanied by the product's jingle or tagline, all crammed into the thirty seconds allotted to a normal network commercial slot.

You know what industry that tenth guy is in?
Information Technology.

We members of this exclusive group are adrift in a sea of doubt about product claims. We want to touch it. We want not only to see it work, but to see it work in our shop. It seems the moment we hear about a new product or idea, we immediately figure out why it WON'T work.

This is obviously a survival mechanism. In the IT world we have all been burned by the silver-tongued, snake-oil-peddling technology salesman. We have put our reputations on the line for a piece of hardware or software that just didn't deliver, which immediately means that our users feel that WE didn't deliver. Never again…

Recently, Cisco released its new Unified Computing System (UCS). While Cisco has certainly made its fair share of missteps in the past, its track record still holds up better than most. However, I was not prepared for the level of doubt from the teams I met with right out of the gate. What really caught me by surprise was the amount of urban legend surrounding the product. Most folks simply consider it a new blade chassis. Technically true, but one could also say a fire truck is just another truck. Yet not all the speculation was bad. I have also heard good things that aren't true, such as the ability to spread a single VM across multiple blades.

Trucks: one carries a Dalmatian along; the other only has room for one.

What intrigues me most about the platform isn't a single feature of the solution; it's how it all comes together. IP KVM is common, but in UCS it is included. Being able to lay down a server hardware "profile" across generations of blades makes expansion more straightforward and reduces opportunities for error. The networking in the chassis itself looks like a line card in the Nexus switches that sit on top, simplifying management. The list just keeps going.

Maybe that's the issue. We are expecting a single feature to set the new Cisco solution apart from the sea of server chassis platforms out there, but instead it is dozens of nuances that deserve a deeper dive into the details of Cisco's UCS.

So I finally had enough and asked Cisco to do a “dispel the rumors” seminar for a rather large group of customers who wanted to get to the bottom of this.

Not only will we view the slides, but our "test drive" of the UCS will center mainly on the management interface and the chassis itself. All questions will be answered and all myths exposed. Personally, I am most interested in receiving the feedback after the event to hear what folks were most surprised to learn; overall, though, I think the event will be great for everyone, including me. The more time I spend with the platform, the more my knowledge of its inner workings increases: I can see how it's not all about the server blade, but instead about the integration points to the network, storage, and especially VMware.

Photo Credits via Flickr: robin_24, Squiggle and wonderbjerg
