All Posts By

Michael Freisinger


Opening New Doors with Cisco UCS Zoning of Fabric Interconnects

By | Cisco, Storage, UCS | No Comments

With the release of UCS Manager 2.1 (released in Q4 of 2012), full zoning configurations are now supported on the Fabric Interconnects, with SAN storage attached directly to the Fabric Interconnects via storage ports.  Why is this interesting?

With full zoning configuration supported, the Fabric Interconnects can now function as a Fibre Channel switch, eliminating the need for a separate, costly Fibre Channel fabric.  This opens the door for smaller environments to consider Fibre Channel within the Cisco UCS platform.

Configuring the Fabric Interconnects for zoning is relatively simple.

Here it is outlined at a high level:

  1. Put the Fabric Interconnects into FC Switch Mode (reboot required)
  2. Configure Unified Ports, setting ports as Storage Ports (reboot required)
  3. Create your VSANs (one per Fabric Interconnect, making sure to Enable FC Zoning) and assign them to the Storage Ports
  4. Create your Storage Connection Policies
  5. Add your FC Target (typically your SAN WWPN)
  6. Create your SAN Connection Policies
  7. Add your vHBA Initiator Groups (assigning your vHBA templates to your Storage Connection Policies)
  8. Associate your newly created SAN Connection Policies to the appropriate Service Profile Template

Once your blades are booted, the vHBAs will log in to your SAN and you will be able to perform the necessary SAN-side steps to present LUNs to the blades.
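Conceptually, the zoning that UCS Manager builds from steps 4 through 7 boils down to pairing each vHBA initiator with the FC targets in its storage connection policy. Here is a rough Python sketch of that pairing; the policy name and WWPNs are invented for illustration:

```python
# Conceptual sketch only: UCS Manager derives zoning from the policies in
# steps 4-7, pairing each vHBA initiator with the FC targets of its
# storage connection policy. All names and WWPNs are made-up examples.

def build_zones(initiator_groups, storage_policies):
    """Return one (initiator, target) zone per vHBA/FC-target pairing."""
    zones = []
    for policy, initiators in initiator_groups.items():
        for initiator in initiators:
            for target in storage_policies[policy]:
                zones.append((initiator, target))
    return zones

storage_policies = {"SAN-A": ["50:06:01:60:3e:a0:00:01",   # hypothetical
                              "50:06:01:61:3e:a0:00:01"]}  # SAN WWPNs
initiator_groups = {"SAN-A": ["20:00:00:25:b5:00:00:0a",   # hypothetical
                              "20:00:00:25:b5:00:00:0b"]}  # vHBA WWPNs

print(len(build_zones(initiator_groups, storage_policies)))  # 4
```

Two initiators zoned against two targets yields four single-initiator/single-target pairings, which is the kind of zone set the Fabric Interconnects end up enforcing.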

Once the steps above are in place, UCS zoning is fast and works very well. At large scale, Cisco UCS zoning is not a replacement for a Fibre Channel switching fabric.  However, in a smaller environment where the Cisco UCS only needs to connect to the SAN storage, it can be a great fit.

 

Photo by fulanoinc


Cabling, Keeping it Simple

By | Cisco, UCS, VMware | No Comments


When I was shadowing and learning the Cisco Unified Computing System (UCS), one thing my mentor kept commenting on was how clean the cabling was and how much of it was reduced. I typically implement the UCS Chassis with a VMware deployment, and cabling is a top priority.

Before working with the UCS Chassis a typical VMware deployment would be a 3 to 5 physical host cluster.

Now, let’s just take a step back and do a quick checklist of what the physical components might look like.

 

5 Physical Hosts, each host has the following:

• 2 Power Cords
• 4 to 6 Ethernet cables (management, vMotion, VM Network and storage)
• 1 for DRAC, iLO, CICM or similar remote management

That totals up to a possible 9 cables per host, and that is the minimum; some instances call for more. Remember, that is only 1 host. We still need to cable 4 more hosts, which brings your cable total to 45.

45 CABLES! Are you kidding me?!?!

Take a look at a UCS Chassis with 5 blades. Let’s assume we are using a Cisco UCS 5108 Chassis and 2 Cisco 6248 Fabric Interconnects, with 1Gb Ethernet uplinks to the core switching and direct connection to the storage device.

• 8 Power Cords (2 for each Fabric Interconnect & 4 for the UCS Blade Chassis)
• 8 (Twinax) Server Uplinks from the Chassis to the Fabric Interconnects
• 8 1Gb Ethernet cables (4 from each Fabric Interconnect) to the core
• 4 fiber cables (2 from each Fabric Interconnect) to the storage device

That totals up to 28 cables for the entire environment, nearly 40 percent fewer than the 45 needed for the physical servers. Plus, you still have three slots available on the UCS Chassis to add 3 more blades without adding any more cables.
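The arithmetic behind both totals can be checked in a couple of lines:

```python
# Quick sanity check of the cable counts from the two lists above.
traditional_per_host = 2 + 6 + 1     # power + Ethernet + remote management
traditional_total = 5 * traditional_per_host

ucs_total = 8 + 8 + 8 + 4            # power + Twinax + Ethernet + fiber

print(traditional_total)             # 45
print(ucs_total)                     # 28
```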

Another beauty of the Cisco UCS Chassis: if you need to add more network adapters to your hosts, no additional cabling is required. Just a few simple changes on your blades and you’re done.

Photo by Lacrymosa

Choosing the Best Replication with VMware vCenter Site Recovery Manager: vSphere vs. Array-based

By | Replication, Virtualization, VMware | No Comments

I recently had the opportunity to implement VMware vCenter Site Recovery Manager (SRM) in three different environments using two different replication technologies (vSphere and Array-based Replication). The setup and configuration of the SRM software is straightforward. The differences come into play when deciding which replication option best fits your business needs.

vSphere Replication

vSphere Replication is built into SRM 5.0 and is included no matter what replication technology you decide to use. With vSphere Replication, you do not need costly identical storage arrays at both of your sites, because the replication is managed through vCenter. Managing through vCenter also gives you more flexibility in which VMs are protected: VMs can be protected individually, as opposed to at the VMFS datastore level. vSphere Replication is deployed and managed by virtual appliances installed at both sites. Replication is then handled by the ESXi hosts, with the assistance of the virtual appliances. vSphere Replication supports RPOs as low as 15 minutes.

[framed_box] vSphere Replication Benefits:

  • No need for costly storage arrays at both sites
  • More flexibility in choosing which VMs are protected (can do so individually)
[/framed_box] [divider_padding]

Array-based Replication

The two Array-based Replication technologies that I implemented were EMC MirrorView and EMC Symmetrix. Both tie into SRM using a storage replication adapter (SRA), a program provided by the array vendor that gives SRM access to the array. Configuration of replication is done outside of vCenter, at the array level. Unlike vSphere Replication, Array-based Replication requires you to protect an entire VMFS datastore or LUN rather than individual VMs. Among the biggest benefits of Array-based Replication are automated re-protection of the VMs and near-zero RPOs.

[framed_box] Array-based Replication Benefits:

  • Automated re-protection of VMs
  • Near-zero RPOs
[/framed_box] [divider_padding]
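One way to frame the choice is to start from your required RPO and whether both sites can run identical arrays. Here is a toy decision sketch, using only the rough capabilities described above (this is an illustration, not an official sizing rule):

```python
def pick_replication(rpo_minutes, identical_arrays):
    """Toy decision helper based on the trade-offs described above.

    Assumes vSphere Replication bottoms out around a 15-minute RPO and
    that Array-based Replication needs matching arrays at both sites.
    """
    if rpo_minutes < 15:
        # Only array-based replication gets near-zero RPOs
        return "array-based" if identical_arrays else "revisit requirements"
    if not identical_arrays:
        return "vsphere"   # no costly identical arrays needed
    return "either"        # both meet the RPO; weigh cost vs. granularity

print(pick_replication(5, True))     # array-based
print(pick_replication(60, False))   # vsphere
```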

Final Thoughts

VMware vCenter Site Recovery Manager gives you the disaster recovery management that is highly sought after in today’s market, allowing you to perform planned migrations, failover and failback, automated failback, and non-disruptive testing.

Photo credit: adamhenning via Flickr


Adventures In Networking: Hardships In Finding The Longest Match

By | How To, Networking, VMware | No Comments

Sometimes in life you have to learn things the hard way. Recently I learned why the Longest Match Rule (Longest Match Algorithm) works and why it applies not only to routing, but to other situations as well.

I was adding a new storage array and datastores to an existing VMware cluster using iSCSI. The existing VMware environment was laid out as follows:
[framed_box bgColor=”#F0F0F0″ textColor=”undefined” rounded=”true”]vSwitch0 = VM Network & Service Console (10.1.1.0/16)
vSwitch1 = iSCSI (10.12.1.0/16)
vSwitch2 = vMotion (10.12.1.0/16)
vSwitch3 = Testing
[/framed_box]
The new storage array and iSCSI targets landed on a new vSwitch (vSwitch4). The old environment had both iSCSI and vMotion on the same network (10.12.1.0/16). For the new environment I wanted to completely separate the iSCSI and vMotion traffic by assigning them to different networks. Both iSCSI networks needed to stay up for migrations to happen so the new environment was laid out as follows:
[framed_box bgColor=”#F0F0F0″ textColor=”undefined” rounded=”true”] vSwitch0 = VM Network & Service Console (10.1.1.0/16)
vSwitch1 = iSCSI (10.12.1.0/16)
vSwitch2 = vMotion (10.12.1.0/24)
vSwitch3 = Testing
vSwitch4 = iSCSI (10.12.2.0/24)
[/framed_box]

First, vSwitch4 was created, and the new storage was configured and presented to VMware just as planned. The problem occurred when the subnet mask on vSwitch2 was modified from /16 to /24. As soon as this change to the subnet mask on vSwitch2 happened, access to all the VMs went down. After scrambling for about 5 minutes to retrace the steps prior to the problem, I was able to determine that the subnet change had caused the outage. Changing the subnet mask on vSwitch2 back to /16 slowly brought everything back online.

What caused this outage?

One simple mistake!

When the subnet was changed from /16 to /24, the third octet also needed to be changed to differentiate the iSCSI and vMotion networks. When the /24 subnet was applied to vSwitch2 (the 10.12.1.0 network), the Longest Match Rule matched the longer extended network prefix. This also applied to vSwitch1: traffic within the /16 network would traverse the /24, dropping the iSCSI targets and all of their datastores.

A network with a longer match describes a smaller set of IPs than a network with a shorter match, which means the longer match is more specific. 10.12.1.0/24 is the selected path because it has the greatest number of bits matching the destination IP address of the packets (see below).

[framed_box bgColor=”#F0F0F0″ textColor=”undefined” rounded=”true”] #1, 10.12.1.0/24 = 00001010.00001100.00000001.00000000
#2, 10.12.0.0/16 = 00001010.00001100.00000000.00000000
#3, 10.0.0.0/8 = 00001010.00000000.00000000.00000000
[/framed_box]
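The same selection is easy to reproduce with Python's standard ipaddress module; any destination inside 10.12.1.0/24 matches the /24 rather than the shorter prefixes:

```python
import ipaddress

def longest_match(destination, routes):
    """Return the most specific route that contains the destination."""
    dest = ipaddress.ip_address(destination)
    candidates = [ipaddress.ip_network(r) for r in routes
                  if dest in ipaddress.ip_network(r)]
    # Longest Match Rule: the matching route with the longest prefix wins
    return str(max(candidates, key=lambda net: net.prefixlen))

routes = ["10.12.1.0/24", "10.12.0.0/16", "10.0.0.0/8"]
print(longest_match("10.12.1.5", routes))   # 10.12.1.0/24 - the /24 wins
print(longest_match("10.12.7.9", routes))   # 10.12.0.0/16
```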
By simply changing the third octet on vSwitch2, I was able to move the subnet to /24.
The final, working configuration was laid out as follows:
[framed_box bgColor=”#F0F0F0″ textColor=”undefined” rounded=”true”] vSwitch0 = VM Network & Service Console (10.1.1.0/16)
vSwitch1 = iSCSI (10.12.1.0/16): left for migration
vSwitch2 = vMotion (10.12.3.0/24)
vSwitch3 = Testing
vSwitch4 = iSCSI (10.12.2.0/24)
[/framed_box]

Photo From: maximilian.haack

Networking & The Importance Of VLANs

By | Networking, Replication, VMware | No Comments

We have become familiar with the term VLANs when talking about networking. Some people cringe and worry when they hear “VLAN”, while others rejoice and relish the idea. I used to be in the camp that cringed and worried – only because I did not have some basic knowledge about VLANs.

So let’s start with the basics: what is a VLAN? 

VLAN stands for Virtual Local Area Network, and a VLAN has the same characteristics and attributes as a physical Local Area Network (LAN). A VLAN is a separate IP sub-network which allows multiple networks and subnets to reside on the same switched network – services that are typically provided by routers. A VLAN essentially becomes its own broadcast domain. VLANs can be structured by department, function, or protocol, allowing for a finer level of granularity. VLANs are defined on the switch by individual ports; this allows VLANs to be placed on specific ports to restrict access.

A VLAN cannot communicate directly with another VLAN, which is done by design. If VLANs are required to communicate with one another the use of a router or layer 3 switching is required. VLANs are capable of spanning multiple switches and you can have more than one VLAN on multiple switches. For the most part VLANs are relatively easy to create and manage. Most switches allow for VLAN creation via Telnet and GUI interfaces, which is becoming increasingly popular.

VLANs can address many issues, such as:

  1. Security – Security is an important function of VLANs. A VLAN separates data that could be sensitive from the general network, decreasing the chance that users will gain access to data they are not authorized to see. Example: an HR department’s computers/nodes can be placed in one VLAN and an Accounting department’s in another, keeping the two sets of traffic completely separate. The same principle can be applied to protocols such as NFS, CIFS, replication, VMware (vMotion) and management.
  2. Cost – Cost savings come from eliminating the need for additional expensive network equipment. VLANs also allow the network to work more efficiently, making better use of bandwidth and resources.
  3. Performance – Splitting a switch into VLANs creates multiple broadcast domains, which reduces unnecessary traffic on the network and increases network performance.
  4. Management – VLANs allow for flexibility within the current infrastructure and for simplified administration of multiple network segments within one switching environment.
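The isolation behind the security point above can be modeled in a few lines: two switch ports can communicate directly only when they share a VLAN, and anything else needs a router or layer 3 switch. The port names and VLAN IDs below are made up for illustration:

```python
# Toy model of VLAN isolation: ports in the same VLAN share a broadcast
# domain; traffic between VLANs must go through a router or layer 3
# switch. Port names and VLAN IDs are made-up examples.
port_vlan = {
    "gi0/1": 10,   # HR
    "gi0/2": 10,   # HR
    "gi0/3": 20,   # Accounting
}

def same_broadcast_domain(a, b):
    return port_vlan[a] == port_vlan[b]

print(same_broadcast_domain("gi0/1", "gi0/2"))  # True  - no router needed
print(same_broadcast_domain("gi0/1", "gi0/3"))  # False - requires routing
```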

VLANs are a great resource and tool to assist in fine tuning your network. Don’t be afraid of VLANs, rather embrace them for the many benefits that they can bring to your infrastructure.

Photo Credit: ivanx

Removing Ghosted NIC’s When Converting Physical to Virtual Machines

By | VMware | No Comments

It is highly likely that at some point you will be converting a physical machine to a virtual machine in your environment, and in most cases you will assign the virtual machine the same IP the physical machine had. However, when you try to enter or modify the IP address for your VMware adapter NIC, you will get the following error:

“The IP address XXX.XXX.XXX.XXX you have entered for this network adapter is already assigned to another adapter Name of adapter. Name of adapter is hidden from the network and Dial-up Connections folder because it is not physically in the computer or is a legacy adapter that is not working. If the same address is assigned to both adapters and they become active, only one of them will use this address. This may result in incorrect system configuration. Do you want to enter a different IP address for this adapter in the list of IP addresses in the advanced dialog box?”

This error message occurs when a NIC with the same IP address exists in the registry but is hidden in Device Manager.  The message is not limited to P2V conversions; you can also come across it when upgrading the VMware virtual hardware or VMware Tools. While the error is not a show stopper, and most of the time you will still be able to assign the same IP to the VMware adapter NIC, why not just remove the ghosted NIC and eliminate the error, reducing any possible problems in the future?

The following steps will show you how to remove the ghosted NIC so that you can update the VMware adapter NIC with the same IP address:

  1. Open a Command Prompt.
  2. At the command prompt enter: SET DEVMGR_SHOW_NONPRESENT_DEVICES=1.
  3. Enter: START DEVMGMT.MSC.
  4. Device Manager will now open. Select View > Show Hidden Devices.
  5. Expand Network adapters: here you will see current and hidden NICs. Hidden NICs appear dimmed.
  6. Right-click the dimmed NIC and click Uninstall.
  7. Close Device Manager.
  8. Close the Command Prompt.
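Steps 1 through 3 can also be wrapped in a small script. This is only a sketch: the environment variable and devmgmt.msc come from the steps above, and the launch is meaningful only on Windows:

```python
import os
import subprocess

# Sketch of steps 1-3 above: build an environment where Device Manager
# shows non-present ("ghosted") devices, then launch it. The launch
# itself only applies on Windows.
env = dict(os.environ, DEVMGR_SHOW_NONPRESENT_DEVICES="1")

if os.name == "nt":
    # "start" is a cmd.exe built-in, hence shell=True
    subprocess.run("start devmgmt.msc", env=env, shell=True)
```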

The IP that was assigned to the old or hidden NIC has now been removed. This IP can now be assigned to the virtual NIC. When doing a P2V these steps can be done prior to connecting the VMware adapter.

Photo Credit: Ryan.Riot

What’s Better Than Advil For Datacenter Headaches? An Organized Datacenter!

By | How To, Networking | No Comments

How does your datacenter, server rack or switch rack look? Like a rat’s nest?

If you had a power problem, how long would it take you to track down the bad power cord? Additionally, how much time would you waste figuring out whether it was a bad network cable, NIC, or switch port?

Yes, it can be a daunting task to organize your datacenter, but if the time is spent now it could save you hours and many headaches later. As an IT professional, you put a lot of money into your datacenter and it is a valuable asset to your company. It is essential to keep it organized to reduce an outage window and maximize your investment in the technology you have implemented. Taking some additional time up front to plan and organize will protect your equipment, extend its life, and make management easier.

POWER

Labeling both ends of your power cables will allow you to easily trace a power cable.  Doing so has many benefits:

  • The most obvious: in the event of a power failure, a labeled cable lets you quickly trace the failed power cord
  • Load balancing power between different PDUs, circuits or power supplies becomes straightforward
  • When the time comes to decommission or move equipment already in your datacenter, you will know at a glance which power cables can be removed

Running power cables to the edge of a server rack or switch rack eliminates clutter, making the replacement of a failed power supply a simple and fast process. Whether you use vertical or horizontal PDUs will depend on the type of racks you have; either allows cables to be neatly routed and managed.

NETWORK

  • Labeling both ends of your network cables eliminates the need to trace cable when a problem occurs, when replacing or testing cables, or when moving to a different switch/port.
  • Once again – running network cables to the edge of the server rack or a switch rack will eliminate clutter and make replacing a failed part a simple and fast process.
  • Color coordinating your network cables assists in troubleshooting and moves; it also serves as a visual reference for each cable’s purpose.

COOLING

Have you ever wondered where all of the hot air expelled from the equipment in your server and switch racks ends up?  The majority of the time it is out the back. If you have a rat’s nest of cables behind the fans, they are not going to be able to expel the hot air away from the racks. This could lead to overheating of not only the equipment, but possibly the room.

  • If not expelled, stagnant hot air will, over time, cause your cables to become brittle and fail. Unused cables can also pose a serious fire risk.

MANAGEMENT

  • Troubleshooting, replacing, removing and adding new equipment, power or network will be a simple task if your datacenter is organized and clean.
  • Not only will management be easier for you, an organized datacenter also lets guests easily figure out where a network cable goes. If a problem ever arises and a technician needs to work on some equipment, he or she will have easy access.

Photo Credit: mrtom

Create Your Datacenter Storybook Ending Using VMware vSphere Update Manager To Update ESX/ESXi Hosts

By | How To, VMware, vSphere | No Comments

Once upon a time, I had an issue where all of my guests/VMs would suddenly fall off the network. I could not ping the guests from anywhere on the network, yet the guests themselves never went down, since I was still able to console into them from vSphere. After spending hours on support calls with VMware, I finally reached a tech who knew exactly what the problem was as soon as I described the symptoms. The fix was a simple patch that had been issued a few months prior. The tech explained the importance of using Update Manager and walked me through the steps below. If you aren’t familiar with Update Manager, it is a feature of vSphere that provides centralized, automated patch and version management for your ESX/ESXi hosts. Update Manager can also be used to manage your virtual machines and virtual appliances.

1.  Let’s start by attaching or creating a baseline. You can attach the baseline to either the Datacenter, Host Cluster or Host. For this example we will attach the baseline to the Host.

A)  Go to Hosts and Clusters.
B)   Select your host and click Update Manager.
C)   Click Attach.

 [iframe src=”http://www.integrateddatastorage.com/wp-content/uploads/2011/09/1R.jpg” width=”425″ height=”375″]

 

2.  Using the two default baselines already created:

A)  Select both Host Patches and Non-Critical Host Patches.
B)  Click Attach.

 [iframe src=”http://www.integrateddatastorage.com/wp-content/uploads/2011/09/2R.jpg” width=”510″ height=”350″]

 

 

3.  Next you will need to perform a scan: the process in which the attributes of a host are compared with the patches in the attached baseline to determine what needs to be applied. This can be done by either:

A) Right clicking the object and selecting Scan for Updates
B)  Or clicking Scan… in the Update Manager tab.

Once the scan is finished you will see an overview of your host compliance.

 [iframe src=”http://www.integrateddatastorage.com/wp-content/uploads/2011/09/3R.jpg” width=”480″ height=”250″]

 

 

4.  Next: staging the patches to the hosts. Staging downloads the patches from the Update Manager server to the ESX/ESXi hosts. Staging is optional, but it speeds up the remediation process and reduces the downtime of the host during remediation.

A)  You may either right click on the object and select Stage Patches … or click the Stage button on the Update Manager tab.
B)   Select the baselines you want to stage and click Next.

 [iframe src=”http://www.integrateddatastorage.com/wp-content/uploads/2011/09/4RR.jpg” width=”498″ height=”269″]

 

C)   Deselect any patches that you don’t want staged and click Next.

 [iframe src=”http://www.integrateddatastorage.com/wp-content/uploads/2011/09/5R.jpg” width=”488″ height=”337″]

 

D)   Review and click Finish.

 [iframe src=”http://www.integrateddatastorage.com/wp-content/uploads/2011/09/6R1.jpg” width=”498″ height=”349″]

 

E)   The patches will now be staged to the host, and their progress can be monitored in the Recent Tasks window.

 [iframe src=”http://www.integrateddatastorage.com/wp-content/uploads/2011/09/7.jpg” width=”478″ height=”150″]

 

5.  Once the patches are staged they can be remediated.

A)   You can either right click on the object and select Remediate … or click the Remediate button on the Update Manager tab.
B)    Select the target objects and click Next.

 [iframe src=”http://www.integrateddatastorage.com/wp-content/uploads/2011/09/8R.jpg” width=”498″ height=”348″]

 

C)   Select the patches to apply and click Next.

 [iframe src=”http://www.integrateddatastorage.com/wp-content/uploads/2011/09/9R.jpg” width=”498″ height=”348″]

 

D)   Within Host Remediation Options you can give the task a name and a description, and schedule the remediation to run immediately or later on. For the purposes of this example we will take the defaults and click Next.

 [iframe src=”http://www.integrateddatastorage.com/wp-content/uploads/2011/09/10R.jpg” width=”498″ height=”348″]

 

E)   Review the options selected and click Finish.

 [iframe src=”http://www.integrateddatastorage.com/wp-content/uploads/2011/09/11R.jpg” width=”498″ height=”348″]

 

F)   The remediation process takes a while and can be monitored in the Recent Tasks window. Remediation puts the host into Maintenance Mode, and the host will be rebooted at least once.

6.  When the remediation process is complete you can verify that all the patches have been applied by going to the Update Manager tab and viewing the Host Compliance.

[iframe src=”http://www.integrateddatastorage.com/wp-content/uploads/2011/09/12R.jpg” width=”498″ height=”260″]

 

And your datacenter and Update Manager lived happily ever after!

Photo Credit: Florin Gorgan
