
Opening New Doors with Cisco UCS Zoning of Fabric Interconnects

By | Cisco, Storage, UCS | No Comments

With the release of UCS Manager 2.1 in Q4 of 2012, full zoning configurations are now supported on the Fabric Interconnects when a SAN is attached directly to the Fabric Interconnect via storage ports.  Why is this interesting?

With full zoning configuration supported, the Fabric Interconnects can now function as a Fibre Channel switch, eliminating the need for a separate, costly Fibre Channel fabric.  This opens the door for smaller environments to look at Fibre Channel within the Cisco UCS platform.

Configuring the Fabric Interconnects for zoning is relatively simple.

Here it is outlined at a high level:

  1. Put the Fabric Interconnects into FC Switch Mode (reboot required)
  2. Configure Unified Ports, setting ports as Storage Ports (reboot required)
  3. Create your VSANs (one per Fabric Interconnect, making sure to Enable FC Zoning) and assign them to the Storage Ports
  4. Create your Storage Connection Policies
  5. Add your FC Target (typically your SAN WWPN)
  6. Create your SAN Connection Policies
  7. Add your vHBA Initiator Groups (assigning your vHBA templates to your Storage Connection Policies)
  8. Associate your newly created SAN Connection Policies to the appropriate Service Profile Template

Once your blades are booted, the vHBAs will log into your SAN and you will be able to perform the necessary SAN steps to present LUNs to the blades.
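To make the flow concrete, here is a small Python sketch (not the UCSM API, just a conceptual model) of what the Storage Connection Policy plus vHBA Initiator Groups produce: one zone per initiator/target pair on the VSAN. The WWPNs below are made-up example values.

```python
# Illustrative model of UCS-derived zoning: each vHBA initiator in the
# group gets zoned to each FC target in the Storage Connection Policy.
def derive_zones(initiators, targets, vsan):
    """Pair every vHBA initiator with every FC target on the given VSAN."""
    return [
        {"vsan": vsan, "members": (hba, tgt)}
        for hba in initiators
        for tgt in targets
    ]

zones = derive_zones(
    initiators=["20:00:00:25:b5:00:0a:01"],   # vHBA WWPN (example value)
    targets=["50:06:01:60:3e:a0:12:34",       # SAN SP-A port (example value)
             "50:06:01:68:3e:a0:12:34"],      # SAN SP-B port (example value)
    vsan=100,
)
print(len(zones))  # 2 zones: one initiator x two targets
```

With one vHBA and two SAN target ports you end up with two single-initiator zones, which is exactly why the vHBA Initiator Group step matters.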

Once the configuration above is complete, UCS zoning is a fast, slick technology. At large scale, Cisco UCS zoning is not a replacement for a Fibre Channel switching fabric.  However, in a smaller environment where the Cisco UCS only needs to interconnect with the SAN storage, it can be a great fit.


Photo by fulanoinc


Cabling, Keeping it Simple

By | Cisco, UCS, VMware | No Comments


When I was shadowing and learning the Cisco Unified Computing System (UCS), one thing my mentor kept commenting on was how clean the cabling was and how much it was reduced. I typically implement the UCS Chassis with a VMware deployment, and cabling is a top priority.

Before working with the UCS Chassis a typical VMware deployment would be a 3 to 5 physical host cluster.

Now, let’s just take a step back and run through a quick checklist of what the physical components might look like.


5 Physical Hosts, each host has the following:

• 2 Power Cords
• 4 to 6 Ethernet cables (management, vMotion, VM network and storage)
• 1 for DRAC, iLO, CICM or similar remote management

That totals a possible 9 cables per host, and that is the minimum; some instances call for more. Remember, that is only 1 host. We still need to cable 4 more hosts! Now your cable total is at 45.

45 CABLES! Are you kidding me?!?!

Take a look at a UCS Chassis with 5 blades. Let’s assume we are using a Cisco UCS 5108 Chassis and 2 Cisco 6248 Fabric Interconnects with 1Gb Ethernet uplinks to the core switching and direct connections to the storage device.

• 8 Power Cords (2 for each Fabric Interconnect & 4 for the UCS Blade Chassis)
• 8 (Twinax) Server Uplinks from the Chassis to the Fabric Interconnects
• 8 x 1Gb Ethernet (4 from each Fabric Interconnect) to the core
• 4 x Fibre (2 from each Fabric Interconnect) to the storage device

That totals 28 cables for the entire environment, almost half of what the physical servers required. Plus, you still have three slots open on the UCS Chassis to add 3 more blades without adding a single cable.
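For fun, the cable math above works out as a quick Python sanity check (counts taken straight from the two lists):

```python
# Cable count comparison: five standalone rack hosts vs. one UCS setup.
rack_per_host = 2 + 6 + 1        # power + Ethernet + remote management (max case)
rack_total = rack_per_host * 5   # five physical hosts
ucs_total = 8 + 8 + 8 + 4        # power + Twinax uplinks + 1Gb Ethernet + Fibre
print(rack_total, ucs_total)     # 45 28
```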

Another beauty of the Cisco UCS Chassis is that if you need to add more network adapters to your hosts, no additional cabling is required. Just a few simple changes on your blades and you’re done.

Photo by Lacrymosa


Advice from the Expert, Best Practices in Utilizing Storage Pools

By | Backup, Cisco, Data Loss Prevention, EMC, How To, Log Management, Networking, Storage, VMware | No Comments

Storage Pools for the CX4 and VNX have been around a while now, but I still see a lot of people doing things that are against best practices. First, let’s start out talking about RAID Groups.

Traditionally, to present storage to a host you would create a RAID Group consisting of up to 16 disks; the most typically used RAID Group types were R1/0, R5, R6, and Hot Spare. After creating your RAID Group, you would create a LUN on it to present to a host.

Let’s say you have (50) 600GB 15K disks that you want to create RAID Groups on; you could create (10) R5 4+1 RAID Groups. If you wanted (10) 1TB LUNs for your hosts, you could create a 1TB LUN on each RAID Group. Each LUN then has the guaranteed performance of (5) 15K disks behind it, but at the same time, each LUN has at most the performance of those (5) 15K disks.
[framed_box bgColor=”#F0F0F0″ textColor=”undefined” rounded=”true”] What if your LUNs require even more performance?

1. Create metaLUNs to keep it easy and effective.

2. Make (10) 102.4GB LUNs on each RAID Group, totaling (100) 102.4GB LUNs for your (10) RAID Groups.

3. Select the meta head from a RAID Group and expand it by striping it with (9) of the other LUNs from other RAID Groups.

4. For each of the other LUNs to expand you would want to select the meta head from a different RAID Group and then expand with the LUNs from the remaining RAID Groups.

5. That would then provide each LUN with the ability to have the performance of (50) 15K drives shared between them.

6. Once you have your LUNs created, you also have the option of turning FAST Cache (if configured) on or off at the LUN level.

Depending on your performance requirement, things can quickly get complicated using traditional RAID Groups.

This is where CX4 and VNX Pools come into play.
[/framed_box] EMC took the typical RAID Group types – R1/0, R5, and R6 – and made it so you can use them in Storage Pools. The chart below shows the different options for the Storage Pools. The asterisks note that the 8+1 option for R5 and the 14+2 option for R6 are only available in the VNX OE 32 release.

Now on top of that you can have a Homogeneous Storage Pool – a Pool with only like drives, either all Flash, SAS, or NL-SAS (SATA on CX4) – or a Heterogeneous Storage Pool – a Storage Pool with more than one tier of storage.

If we take our example of having (50) 15K disks using R5 for RAID Groups and we apply them to Pools, we could just create (1) R5 4+1 Storage Pool with all (50) drives in it. This would then leave us with a Homogeneous Storage Pool, visualized below.

The chart to the right displays what happens underneath the Pool, as it creates the same structure as the traditional RAID Groups. We would end up with a Pool containing (10) R5 4+1 RAID Groups underneath that you wouldn’t see; you would only see the (1) Pool with the combined storage of the (50) drives. From there you would create your (10) 1TB LUNs on the Pool, and it will spread the LUNs across all of the RAID Groups underneath automatically. It does this by creating 1GB chunks and spreading them evenly across the hidden RAID Groups. You can also turn FAST Cache on or off at the Storage Pool level (if configured).
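The 1GB chunk spreading can be sketched in a few lines of Python. This is a simplified round-robin model of the behavior described above, not the array’s actual allocator, but it shows why every LUN ends up with all (50) spindles behind it:

```python
# Conceptual sketch: a Pool carves a LUN into 1GB chunks and spreads
# them across its hidden RAID Groups (simplified round-robin).
def allocate_chunks(lun_gb, raid_groups):
    placement = {rg: 0 for rg in raid_groups}
    for chunk in range(lun_gb):
        rg = raid_groups[chunk % len(raid_groups)]
        placement[rg] += 1
    return placement

rgs = [f"RG{i}" for i in range(10)]     # the (10) hidden R5 4+1 groups
layout = allocate_chunks(1024, rgs)     # one 1TB LUN = 1024 x 1GB chunks
print(min(layout.values()), max(layout.values()))  # 102 103
```

Every hidden RAID Group ends up holding a nearly equal slice of the LUN, so the LUN’s I/O is serviced by all the drives in the Pool.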

On top of that, the other advantage to using a Storage Pool is the ability to create a Heterogeneous Storage Pool, which allows you to have multiple tiers where the ‘hot’ data will move up to the faster drives and the ‘cold’ data will move down to the slower drives.

Another thing that can be done with a Storage Pool is creating thin LUNs. The only real advantage of thin LUNs is the ability to over-provision the Storage Pool. For example, if your Storage Pool has 10TB of space available, you could create 30TB worth of LUNs; your hosts would think they have 30TB available to them, when in reality you only have 10TB worth of disks.

The problem with this is that the hosts think they have more space than they really do, so as the Storage Pool starts to get full, there is the potential to run out of space and have hosts crash. They may not crash, but it’s safer to assume they will, or that data will become corrupt: when a host tries to write data because it thinks it has space, but really doesn’t, something bad will happen.
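The over-provisioning math is worth keeping an eye on; a quick Python sketch of the 10TB/30TB example above:

```python
# Sketch: spotting an over-provisioned Pool before it bites you.
def oversubscription(pool_tb, thin_lun_tbs):
    """Ratio of space promised to hosts vs. real disk behind the Pool."""
    provisioned = sum(thin_lun_tbs)
    return provisioned / pool_tb

ratio = oversubscription(pool_tb=10, thin_lun_tbs=[10, 10, 10])
print(ratio)  # 3.0: hosts believe they have 3x the real capacity
```

Anything over 1.0 means the hosts collectively believe a promise the Pool can’t keep, so alerting on Pool utilization becomes mandatory.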

In my experience, people typically want to use thin LUNs only for VMware, yet will also make the Virtual Machine disk thin as well. There is no real point in doing this: creating a thin VM on a thin LUN grants no additional space savings, just additional performance overhead, as there is a performance hit when using thin LUNs.

After this long intro to how Storage Pools work (and it was just a basic introduction; I left out quite a bit and could have gone into far more detail), we get to the part about what to do and what not to do.

Creating Storage Pools

Choose the correct RAID type for your tiers. At a high level, R1/0 is for write-intensive applications, R5 is for read-heavy workloads, and R6 is typically used on large NL-SAS or SATA drives and is highly recommended on those drive types due to their long rebuild times.

Use the number of drives in the preferred drive count options. This isn’t always required, as there are ways to manipulate how the RAID Groups underneath are created, but as a best practice use those drive counts.
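A quick way to check a drive count against the preferred layouts is to test it against the group sizes from the chart above. The multiples below are my reading of that chart (R5 4+1 or 8+1, R6 6+2 or 14+2, R1/0 4+4); verify them against your OE release before relying on them:

```python
# Sketch: does a drive count divide evenly into preferred RAID layouts?
# Group sizes assumed from the chart above -- check your OE release.
PREFERRED = {"R5": (5, 9), "R6": (8, 16), "R1/0": (8,)}

def fits_preferred(raid_type, drives):
    return any(drives % group == 0 for group in PREFERRED[raid_type])

print(fits_preferred("R5", 50), fits_preferred("R5", 12))  # True False
```

So (50) drives is a clean R5 4+1 Pool, while (12) drives would force the array into an uneven layout underneath.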

Keep in mind the size of your Storage Pool. If you have FAST Cache turned on for a very large Storage Pool but not a lot of FAST Cache, the FAST Cache may be used very ineffectively.

Also remember: the larger your Storage Pool, the more data you can lose in a disaster. The whole Pool is at risk if one of the RAID Groups underneath suffers a dual drive fault in R5, a triple drive fault in R6, or loses the right (2) disks in R1/0.

Expanding Storage Pools

Use the number of drives in the preferred drive count options. On a CX4, or a VNX pre VNX OE 32, the best practice is to expand by the same number of drives already in the tier you are expanding, as the data will not relocate within the same tier. On a VNX at OE 32 or later, you don’t need to double the size of the pool, as the Storage Pool can relocate data within the same tier of storage, not just up and down tiers.

Be sure to use the same drive speed and size for the tier you are expanding. For example, if you have a Storage Pool with 15K 600GB SAS drives, you don’t want to expand it with 10K 600GB SAS drives as they will be in the same tier and you won’t get consistent performance across that specific tier. This would go for creating Storage Pools as well.

Graphics by EMC


Being Successful, My Journey to Stay Sharp

By | Cisco, How To, VMware | No Comments

Learning is the one word that comes to mind when I think about being successful in the world of technology. In previous years I bought into the traditional method of learning by taking vendor training and follow-up exams. After failing an exam last year I began to understand that I had to develop a new methodology of learning. I wanted to pass IT exams on the first attempt and retain the required knowledge. I had to adapt my style of training.

Traditionally, companies understand that in order to keep their IT employees from leaving they have to offer incentives beyond money; most employees want to learn. The majority of companies I have worked for seem to follow a similar approach to training.

1. Determine the technical proficiency required
2. Train and learn the material deemed important by the vendor
A. Attend an authorized training course
B. Read books or PDFs related to the subject
3. Take the exam
A. Re-take the exam if needed


I found that learning had a profound ripple effect well beyond my personal advantages. The company I worked for benefited from vendor partnerships, as certain accreditations provided access to different markets and lead generation. When consulting with potential customers on the front end (sales) or the back end (implementation), the opportunity for additional business grows substantially. This happens because the customer feels confident that you are a subject matter expert. You become their trusted adviser.

When I was hired as a Post-Sales Engineer at Integrated Data Storage (IDS), I was informed about the training curriculum and introduced to the company’s learning methodology. The major issue I encountered with the learning cycle was how much there was to learn.

I recall the pain and aggravation of re-taking exams for EMC ISM, VMware VCP 4.1 and VMware VCA4-DT. Even though I spent considerable time studying the content, I was devastated when I failed those exams. It was my goal to pass exams on the first attempt; I was determined to diagnose the problem and fix it.

One year ago I reviewed my approach to studying and I quickly discovered that all habits resembled that of the traditional learning method. Take a course then take a test. This structure was not working for me, so I began to create my own roadmap for success. I created a list of tools and resources that became indispensable, such as books, PDFs, computer-based training (CBTs), home labs, specialized learning centers, vendor specific training, blogs, and knowledgebase articles. I was immersed in training and embraced my new learning methodology.

In February 2012 I put my new study methods to the test. The results were immediate and positive. By combining multiple study strategies, I took and passed the VCP5, VCP5-DT, NetApp NCDA and Citrix XenDesktop exams on my first attempt, achieving my goal through a restructured training curriculum.

While revamping my studying habits I found several training secrets which contributed to my success.

TrainSignal is a Chicago-based company with CBTs that I loaded on my tablet for offline viewing. Their instant online access interface is intuitive and easy to use, and they offer Transcender practice exams with select courses. The trainers at TrainSignal are some of the most respected, certified, talented and personable individuals in the industry. I was able to follow each of them on Twitter and ask questions through social media. The bonus for me was that TrainSignal offers a majority of their individual training courses for around $400.

Current Technologies Computer Learning Center (CTCLC) is a Portage, Indiana, learning center maintained by a team of certified instructors. CTCLC is authorized by vendors across many different technologies, which allows easy access to exams and certifications. By being devoted to this local learning center, I was able to get extra stick time with valuable classroom hardware. Another great benefit of CTCLC is their flexibility in rescheduling courses: when an emergency at work required my immediate attention, the staff at CTCLC was kind enough to help reschedule my courses.

Benchmark Learning is an authorized learning center that specializes in technologies for specific vendors. I used Benchmark Learning for my Citrix XenDesktop certification as I was very impressed with their style and outline. Benchmark Learning kept their training status up-to-date on Citrix’s website. They were very responsive and accommodating to my request for scheduling.

Vendors provided additional training, which helped me obtain additional time learning specific solutions and technologies. Aside from the three companies mentioned, vendors like Nutanix, VMware, Citrix and EMC provided in-depth knowledge through partner related training videos, PDFs and white papers.

home labs, training, lab, exam, tests

Home labs provided actual hands-on experience for my training. Combined with the theory-based knowledge learned in classes, CBT videos and online material, I was able to solidify my knowledge of specific solutions and technologies by having these items available at my house. After checking eBay and Craigslist, I found a VMware vSphere compatible server and began building my lab. My home lab now consists of several Dell servers, a free iSCSI SAN using Openfiler, a WYSE P20 zero client, an HP laptop as a thin client, an iPad, a Mac Mini and a handful of trial licenses for VMware, Microsoft, Citrix, VEEAM, Liquidware Labs, Trend Micro and Quantum.

2013 is here and my vision for this year is to rebuild my home lab with even more hardware. My goal is to provide real design examples built on VMware and Citrix technologies to continue to take my learning to the next level.

Tech For Dummies: Cisco MDS 9100 Series Zoning & EMC VNX Host Add A “How To” Guide

By | Cisco, EMC, How To | No Comments

Before we begin zoning, please make sure each HBA is cabled so that the host is connected to both switches. Now let’s get started …

Configuring and Enabling Ports with Cisco Device Manager:

Once your HBAs are connected we must first Enable and Configure the ports.

1. Open Cisco Device Manager to enable port:

[iframe src=”” width=”335″ height=”435″]

2. Type in the IP address, username and password of the first switch:

[iframe src=”” width=”335″ height=”335″]


3. Right-click the port you attached FC cable to and select enable:

[iframe src=”” width=”435″ height=”255″]

Cisco allows the usage of multiple VSANs (Virtual Storage Area Network). If you have created a VSAN other than VSAN 1 you must configure the port for the VSAN you created.

1. To do this, right-click the port you enabled and select “Configure”:

[iframe src=”” width=”335″ height=”335″]

2. When the following screen appears, click on Port VSAN and select your VSAN, then click “Apply”:

[iframe src=”” width=”635″ height=”335″]

3. Save your configuration by clicking on “Admin” and selecting “Save Configuration”; when the “Save Configuration” screen pops up, select “Yes”:

[iframe src=”” width=”635″ height=”435″]

[iframe src=”” width=”335″ height=”135″]

Once you have enabled and configured the ports, we can now zone your hosts’ HBAs to the SAN.

Login to Cisco Fabric Manager:

1. Let’s begin by opening Cisco Fabric Manager:

[iframe src=”” width=”235″ height=”435″]

2. Enter the FM server username and password (EMC default: admin; password), then click “Login”:

[iframe src=”” width=”335″ height=”335″]

3. Highlight the switch you intend to zone and select “Open”:

[iframe src=”” width=”635″ height=”335″]

4. Expand the switch and right-click the VSAN, then select “Edit Local Full Zone Database”:

[iframe src=”” width=”635″ height=”435″]

Creating An FC Alias:

In order to properly manage your zones and HBAs, it is important to create an “FC Alias” for the WWN of each HBA. The following screen will appear:

1. When it does, right-click “FC-Aliases” and select “Insert”; the next screen will appear. Type in the name of the host and HBA ID, for example: SQL_HBA0. Click the down arrow, select the WWN that corresponds to your server, and finally click “OK”:

[iframe src=”” width=”635″ height=”635″]

Creating Zones:

Now that we have created FC-Aliases, we can move forward creating zones. Zones control which HBAs can connect to which targets. Let’s begin creating zones by:

1. Right-clicking on “Zones”.
2. Select “Insert” from the drop down menu. A new screen will appear.
3. Type in the name of the “Zone”. For management purposes, use the following format: <name of FC-Alias host>_<name of FC-Alias target>. Example: SQL01_HBA0_VNX_SPA0.
4. Click “OK”:

[iframe src=”” width=”635″ height=”635″]

Note: These steps must be repeated to zone the host’s HBA to the second storage controller. In our case, VNX_SPB1.
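The naming convention above lends itself to a tiny helper. This is just an illustrative sketch (the alias names are the examples from this post, not anything the switch generates):

```python
# Build zone names per the convention: <host FC-Alias>_<target FC-Alias>,
# one zone per HBA/storage-processor pair.
def zone_names(host_alias, target_aliases):
    return [f"{host_alias}_{t}" for t in target_aliases]

print(zone_names("SQL01_HBA0", ["VNX_SPA0", "VNX_SPB1"]))
```

Keeping the names mechanical like this makes it obvious at a glance which host and which storage processor every zone connects.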

Adding Members to Zones:

Once the Zone names are created, insert the aliases into the Zones:

5. Right-click on the Zone you created.
6. Select “Insert”, and a new screen will appear.
7. Select “FC-Alias”, click on “…” box then select Host FC Alias.
8. Select the target FC Alias, click “OK”, and click “Add”:

[iframe src=”” width=”635″ height=”335″]

[iframe src=”” width=”635″ height=”335″]

Creating Storage Groups:

Now that we have zoned the HBAs to the array, we can allocate storage to your hosts. To do this we must create “Storage Groups”, which give the hosts connected to the array access to LUNs on it. Let’s begin by logging into the array and creating “Storage Groups”:

1. Login to Unisphere and select the array from the dashboard:

[iframe src=””  width=”335″ height=”335″]

2. Select “Storage Groups” under the Hosts tab:

[iframe src=”” width=”635″ height=”285″]

3. Click “Create” to create a new storage group:

[iframe src=”” width=”635″ height=”385″]

4. The following screen will appear; type in the name of the storage group. Typically you will want to use the application name or the host cluster name.

[iframe src=”” width=”435″ height=”235″]

5. The screen below will pop up, at this time click “Yes” to continue and add LUNs and Hosts to the Storage Group:

[iframe src=”” width=”435″ height=”235″]

6. The next screen will allow you to select either newly created LUNs or LUNs that already exist in other Storage Groups. Once you add the LUN or LUNs to the group, click on the Hosts tab to continue adding hosts:

[iframe src=”” width=”635″ height=”635″]

7. In the hosts tab, select the Hosts we previously zoned and click on the forward arrow. Once the host appears in the right pane, click OK:

[iframe src=”” width=”635″ height=”635″]

8. At this point a new screen will pop up, click YES to commit.

[iframe src=”” width=”435″ height=”285″]

Once you have completed these tasks successfully, your hosts will see new raw devices. From this point on, use your OS partitioning tool to create volumes.

Photo Credit: imagesbywestfall

Bringing Sexy Back! With Cisco, VMware and EMC Virtualization

By | Cisco, EMC, Virtualization, VMware | No Comments

Yeah I said it: “IDS just brought Sexy Back!”

A recent customer sought to finally step into the virtual limelight. This particular customer, whose vertical is in the medical industry, purchased four Cisco chassis and eleven B200 blades.  Alongside the Cisco servers they purchased an EMC VNX 5500 OE Unified array with two Cisco MDS 9148 FC switches.

Our plan was to migrate over one hundred Virtual Machines running on fifteen physical ESX hosts to the new Cisco/VMware 5.0 environment.

Once we successfully moved the VMs over, we began virtualizing the remaining physical hosts. The reality is that not all hosts could be moved so abruptly, so we are still in the process of converting them. However, just by moving the ESX hosts and ten physical servers, our client is already seeing tremendous drops in power usage, server management overhead and data center capacity.

Here is what we started with, otherwise known as the “before sexy”:

A picture is worth a thousand words, so let me just show you exactly what “sexy” looks like in their current data center:

The moral of the story is not to dive head first into centralized storage and virtualization, but to consider what it costs to manage multiple physical servers with applications that under-utilize your hardware. It is also good to keep in mind what it costs to keep those servers operational (power/cooling) and maintained. If you don’t know what these figures look like, or how to bring sexy back into your data center, just ask me, the resident Justin Timberlake over here at IDS.

Photo Credit: PinkMoose

How To: VMware High Availability for Blade Chassis

By | Cisco, Virtualization, VMware | No Comments

VMware High Availability (HA) is a great feature that allows guest Virtual Machines in a Cluster to survive a host failure. Some quick background: a Cluster is a group of hosts that work together harmoniously and operate as a single unit. A host is a physical machine running a hypervisor such as ESX.

So, what does HA do? If a host in the cluster fails, all of the guest machines on it fail with it. HA will power up those guests on another host in the cluster, which can reduce downtime significantly, especially if your datacenter is 30 minutes from your house at 2am. You can continue to sleep and address the host failure in the morning. Sounds great, so what’s the catch?

The catch is in how HA configures itself in the cluster. The first 5 hosts in a cluster are primary nodes and all the other hosts are secondary nodes. A primary node synchronizes the settings and status of all hosts in the cluster with the other primary nodes. A secondary node basically reports its status to the primary nodes. Secondary nodes can be promoted to primary nodes, but only under specific circumstances, such as putting a host into maintenance mode or disconnecting a node from the cluster. HA only needs one primary node to function. I don’t see a catch here…?

The catch comes into the use of a blade center. Suppose you have Chassis A and Chassis B:

We bought two blade chassis for redundancy: redundant power, switches, electricity, and cluster hosts spread across both. If one chassis fails, the other one has plenty of resources. Fully redundant! Maybe. If I were to add my first 5 hosts to my cluster from chassis A, then all of my primary nodes would be on chassis A. If chassis A fails, NO guests from the failed hosts will be powered up on chassis B. Why? All chassis B hosts are secondary nodes, and HA requires at least 1 primary! It’s 2am and now you’re half asleep, driving to the datacenter despite all the redundancy.

To avoid this issue, when adding hosts to a cluster, alternate between chassis.
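The add-order effect is easy to demonstrate. A minimal Python sketch of the behavior described above (first five hosts added become primaries; host names are made up):

```python
# Sketch of the HA primary-node catch: add order decides which chassis
# holds the primary nodes.
def primaries_per_chassis(add_order):
    primaries = add_order[:5]            # first 5 hosts added = primary nodes
    return {c: sum(1 for _, ch in primaries if ch == c) for c in ("A", "B")}

bad  = [(f"esx{i}", "A") for i in range(5)] + [(f"esx{i + 5}", "B") for i in range(5)]
good = [(f"esx{i}", "AB"[i % 2]) for i in range(10)]   # alternate chassis
print(primaries_per_chassis(bad), primaries_per_chassis(good))
```

Filling chassis A first leaves all five primaries on one chassis; alternating spreads them 3/2, so either chassis can fail and HA still has a primary.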

Sick Over Gateway Redundancy? Cisco’s Got A Solution For That …

By | Cisco, How To, Networking | No Comments

A testament to the ever-adapting pioneers that they are, Cisco developed the first gateway redundancy protocol: the Hot Standby Router Protocol (HSRP). HSRP allows the default gateway to fail over to another router, based on a priority that can rise or fall contingent upon interface tracking.

The Internet Engineering Task Force (IETF) created a standard that is almost identical: Virtual Router Redundancy Protocol (VRRP), as identified in RFC 2338. The only real differentiator is the terminology. If you have non-Cisco routers or are pairing between Cisco and another vendor then you are using VRRP.

Here is an example of the old days:

[iframe src=”” width=”535″ height=”525″]


Next in the long line of gateway redundancy protocols came HSRP, which allows for failover of the default gateway. The only way to load balance was by creating two different HSRP groups – multiple HSRP (MHSRP) – using different IP addresses for the default gateways. Hence you would have to configure Dynamic Host Configuration Protocol (DHCP) pools that give two separate gateway addresses for the SAME IP range. Sounds painful, right?

Let’s look at general HSRP operation. For example, you could have Router 1 and Router 2 running HSRP, both tracking their WAN links. Below is normal HSRP operation: the router on the left is actively forwarding traffic as the default gateway, and the one on the right is waiting for it to fail or lose its WAN link. Notice that the standby router is doing absolutely nothing, aside from looking pretty.

[iframe src=”” width=”605″ height=”450″]


Now, the WAN link fails and the other router takes over.

[iframe src=”” width=”605″ height=”440″]


When the link goes down, the other router takes over forwarding traffic. It is a time-tested strategy, but if you have two routers, why not utilize both?

Introducing another Cisco first: the Gateway Load Balancing Protocol (GLBP). GLBP introduces two router roles:

  1. The Active Virtual Gateway (AVG): responsible for giving out the virtual Media Access Control (MAC) addresses to the other routers as well as responding to clients’ Address Resolution Protocol (ARP) requests.
  2. The Active Virtual Forwarder (AVF): responsible for forwarding traffic sent to its assigned virtual MAC address.

The AVG generally gives out the MAC addresses in a round-robin fashion (though there are other choices). Some clients get the MAC for Router 1 and some get the MAC for Router 2, yet every client receives the same ONE gateway IP address.
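A quick Python sketch of that round-robin behavior (the virtual MACs and gateway IP below are illustrative values, not anything GLBP mandates):

```python
# Sketch of GLBP's default load balancing: the AVG answers ARP for the
# one virtual gateway IP with alternating virtual MACs (one per AVF).
from itertools import cycle

VIRTUAL_MACS = cycle(["0007.b400.0101", "0007.b400.0102"])  # AVF 1, AVF 2

def arp_reply(client):
    """Every client gets the same gateway IP but an alternating MAC."""
    return ("10.0.0.1", next(VIRTUAL_MACS))

replies = [arp_reply(f"pc{i}") for i in range(4)]
print([mac for _, mac in replies])
```

Half the clients end up forwarding through each router, with no split DHCP scopes required.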

[iframe src=”” width=”605″ height=”490″]


Normal Operation:

[iframe src=”” width=”625″ height=”525″]


Now, I’m sure you are wondering what happens on a link failure or router loss.

Since there are only two routers in this scenario, the surviving router would take over forwarding for the failed router’s virtual MAC address, making the failover absolutely seamless. The router on the right would lose its link and report that it is no longer able to forward traffic. OK, it might be a little more complicated than that, but you get the gist.

[iframe src=”” width=”625″ height=”525″]


GLBP is a great solution for load balancing, and it offers your users seamless failover of their default gateway upon the failure of a router.

Perhaps the IETF will make this a standard too!

Photo Credit:DominiqueGodbout

Don’t Get Hung Out To Dry With The HCL: There’s OneCommand Manager for VMware vCenter …

By | Cisco, How To, View, VMware, vSphere | No Comments

Is nothing sacred?

As the professionally paranoid, we know all too well that we cannot take anything for granted when deploying a new solution.

However, one list that has long gone un-scrutinized by the typical IT professional is the published VMware Hardware Compatibility List. A friend of mine in the IT space recently underwent the less-than-pleasant experience of having the beloved HCL fail him – resulting in the worst kind of IT issue: intermittent, complete outages of his VMware hosts. He was hung – no vMotion – the only course of action being to reboot the ESXi host and pray the VMs survive.

With weeks between host outages, the problem was almost impossible to pinpoint. Through detailed troubleshooting, the breadcrumbs eventually led to the 10G QLogic single port converged network adapter (CNA). You’ll be as surprised as my friend was to find that this particular card is well documented as “supported” on VMware’s HCL.

Yes! Betrayed by the HCL! Making matters worse is the fact that the card is also fully supported by HP in his new DL385 G7 servers, as well as by the Cisco Nexus switch into which it was plugged. While QLogic is a well-established player in the HBA/CNA space, their email-only support did not live up to the QLogic reputation. My friend and his entire team spent countless hours working on the issue with minimal to no support from QLogic.

Backed into a corner, they decided to take a chance on Emulex OCe11102-FX converged adapters, another formidable player in the market. Issues did arise again – but not stability issues: CIM functionality issues. Unlike their competition, Emulex stepped up to the plate and served up a home run. They took the time to recreate his issue in their lab and boiled it down to the installation order of the CIM software.

OneCommand Manager for VMware vCenter was then installed. Once the Emulex CIM was installed prior to the HP CIM, my friend finally achieved sustained stability and solid CIM functionality. Some lessons that were learned or reinforced by this experience:

  1. Make certain the hardware you are looking to invest in is on the VMware HCL.
  2. Google the specific hardware for reviews and/or comments on the VMware support forums.
  3. Research that the hardware vendor you select offers phone AND email support – not just email support.

Photo Credit: gemtek1

UCS Undressed: Inside A Cisco UCS C Series Installation

By | Cisco, How To, UCS | No Comments

Over the past few weeks I have had the opportunity to install the Cisco C Series of rack servers for the first time. Throughout the installation I was very impressed at the ease of installing memory and extra PCI cards, as compared with my previous experiences. It was basically one screw! The extra space and simplicity made it exponentially easier, as maneuvering around wires and plastic was no longer necessary. The model I installed was a UCS C210 M2, which is 2U in height.

I ran into one problem right away, or at least I thought I did! To set the background: I had to install a quad port 1 Gigabit NIC and a dual port 4 Gigabit Fibre Channel card in each server. From past experience I knew I had to figure out which of the five PCIe slots could support the throughput. After looking it up on the spec sheet I found that all five do – they support PCIe 2.0 x8. This made my installation a lot easier and informed future decisions regarding upgrades.

After racking the servers it was time to configure the CIMC. Like other remote management cards, the process was to press F2 to get into setup, type in the IP address, and go. The CIMC (Cisco Integrated Management Controller) is a full-featured remote management solution which includes:

> power control
> system alerting
> virtual media
> BIOS updating
> … a plethora of other applications

It is equivalent to the HP Integrated Lights-Out (iLO) and the Dell Remote Access Card (DRAC), and is standard on the C Series. One aspect of the CIMC that caught my attention instantly was that the virtual media stays connected even after a power cycle – so there is no pausing the system to reconnect to an ISO after a restart.

I am more than impressed with the ease and simplicity of installing and configuring the C-Series servers. Over the next few weeks I will be getting my hands on a full UCS chassis filled with B-series blades – and I am looking forward to the proven simplicity and ease from Cisco.