All Posts By

Eli Mercado

Tech For Dummies: Cisco MDS 9100 Series Zoning & EMC VNX Host Add A “How To” Guide

By | Cisco, EMC, How To | No Comments

Before we begin zoning, please make sure you have cabled each HBA to both switches so that the host is connected to each fabric. Now let's get started …

Configuring and Enabling Ports with Cisco Device Manager:

Once your HBAs are connected, we must first enable and configure the ports.

1. Open Cisco Device Manager to enable the port:


2. Type in the IP address, username and password of the first switch:



3. Right-click the port you attached the FC cable to and select "Enable":


Cisco MDS switches support multiple VSANs (Virtual Storage Area Networks). If you have created a VSAN other than VSAN 1, you must configure the port for the VSAN you created.

1. To do this, right-click the port you enabled and select “Configure”:


2. When the following screen appears, click on Port VSAN and select your VSAN, then click “Apply”:


3. Save your configuration by clicking on "Admin" and selecting "Save Configuration". When the "Save Configuration" confirmation screen pops up, select "Yes":


Once you have enabled and configured the ports, we can zone your host's HBAs to the SAN.
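If you prefer the switch CLI to Device Manager, the enable / VSAN / save sequence above can be sketched roughly as follows (the port fc1/13 and VSAN 10 are example values, not taken from this environment):

```text
switch# configure terminal
! enable the port the HBA is cabled to
switch(config)# interface fc1/13
switch(config-if)# no shutdown
switch(config-if)# exit
! move the port into the VSAN you created
switch(config)# vsan database
switch(config-vsan-db)# vsan 10 interface fc1/13
switch(config-vsan-db)# end
! save the running configuration
switch# copy running-config startup-config
```

Repeat on the second switch for the host's other HBA.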

Login to Cisco Fabric Manager:

1. Let’s begin by opening Cisco Fabric Manager:


2. Enter the FM server username and password (EMC default: admin / password), then click "Login":


3. Highlight the switch you intend to zone and select “Open”:


4. Expand the switch and right-click the VSAN, then select "Edit Local Full Zone Database":


Creating An FC Alias:

In order to properly manage your zones and HBAs, it is important to create an “FC Alias” for the WWN of each HBA. The following screen will appear:

1. When it does, right-click "FC-Aliases" and select "Insert"; the next screen will appear. Type in the name of the host and HBA ID, for example: SQL_HBA0. Click the down arrow, select the WWN that corresponds to your server, and finally click "OK":
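On the CLI, the same alias can be created roughly like this; the pWWN shown is a placeholder, and `show flogi database` will list the WWNs actually logged in to the switch:

```text
switch# show flogi database vsan 10
switch# configure terminal
switch(config)# fcalias name SQL_HBA0 vsan 10
switch(config-fcalias)# member pwwn 10:00:00:00:c9:aa:bb:cc
```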


Creating Zones:

Now that we have created FC-Aliases, we can move forward with creating zones. Zones define which HBAs and targets are allowed to communicate with one another. Let's begin creating zones by:

1. Right-clicking on “Zones”.
2. Select “Insert” from the drop down menu. A new screen will appear.
3. Type in the name of the zone; for manageability use the following format: <name of FC-Alias host>_<name of FC-Alias target>. Example: SQL01_HBA0_VNX_SPA0.
4. Click “OK”:


Note: These steps must be repeated to zone the host's HBA to the second storage controller; in our case, VNX_SPB1.
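The equivalent zone creation from the switch CLI looks roughly like this (VSAN 10 and the alias names follow the examples above):

```text
switch(config)# zone name SQL01_HBA0_VNX_SPA0 vsan 10
switch(config-zone)# member fcalias SQL01_HBA0
switch(config-zone)# member fcalias VNX_SPA0
! repeat for the second storage controller, e.g. a SQL01_HBA0_VNX_SPB1 zone
```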

Adding Members to Zones:

Once the zone names are created, insert the aliases into the zones:

5. Right-click on the Zone you created.
6. Select “Insert”, and a new screen will appear.
7. Select "FC-Alias", click the "…" box, then select the host FC Alias.
8. Select the target FC Alias, click “OK”, and click “Add”:

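Remember that zones only pass traffic once they are part of the active zoneset. On the CLI, adding the new zone to a zoneset and activating it looks roughly like this (the zoneset name ZS_PROD is an example):

```text
switch(config)# zoneset name ZS_PROD vsan 10
switch(config-zoneset)# member SQL01_HBA0_VNX_SPA0
switch(config-zoneset)# exit
switch(config)# zoneset activate name ZS_PROD vsan 10
switch(config)# end
switch# copy running-config startup-config
```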

Creating Storage Groups:

Now that we have zoned the HBAs to the array, we can allocate storage to your hosts. To do this we must create "Storage Groups", which give the hosts connected to the array access to LUNs on it. Let's begin by logging into the array and creating the storage groups:

1. Login to Unisphere and select the array from the dashboard:


2. Select “Storage Groups” under the Hosts tab:


3. Click “Create” to create a new storage group:


4. The following screen will appear; type in the name of the storage group. Typically you will want to use the name of the application or the host cluster's name.


5. The screen below will pop up; at this time click "Yes" to continue and add LUNs and hosts to the storage group:


6. The next screen will allow you to select either newly created LUNs or LUNs that already exist in other storage groups. Once you add the LUN or LUNs to the group, click on the "Hosts" tab to continue adding hosts:


7. In the "Hosts" tab, select the hosts we previously zoned and click the forward arrow. Once the host appears in the right pane, click "OK":


8. At this point a new screen will pop up; click "Yes" to commit.


Once you have completed these tasks successfully, your hosts will see new raw devices. From this point on, use your OS partitioning tool to create volumes.
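For the CLI-inclined, the storage group steps above map to naviseccli roughly as follows; the group name, host name, and LUN numbers are examples to adapt:

```text
naviseccli -h <SP_IP> storagegroup -create -gname SQL01_SG
naviseccli -h <SP_IP> storagegroup -connecthost -host SQL01 -gname SQL01_SG
naviseccli -h <SP_IP> storagegroup -addhlu -gname SQL01_SG -hlu 0 -alu 42
```

Here -alu is the array-side LUN number and -hlu is the LUN number the host will see.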

Photo Credit: imagesbywestfall

How To: Replicating VMware NFS Datastores With VNX Replicator

By | Backup, How To, Replication, Virtualization, VMware | No Comments

To follow up on my last blog regarding NFS datastores, I will be addressing how to replicate VMware NFS datastores with VNX Replicator. Because NFS datastores live on VNX file systems, they can be replicated to an off-site VNX over a WAN.

Leveraging VNX Replicator allows you to use your existing WAN link to sync file systems with other VNX arrays. All that is required is a Replicator license on the off-site VNX and your existing WAN link; there is no additional hardware other than the replicating VNX arrays and the WAN link.

VNX Replicator leverages checkpoints (snapshots) to record any changes made to the file systems. Once changes are made to the FS, the replication checkpoints initiate writes to the target, keeping the FS in sync.

Leveraging Replicator with VMware NFS datastores creates a highly available virtual environment that keeps your NFS datastores in sync and available remotely whenever needed. VNX Replicator allows a maximum of ten minutes of "out-of-sync" time; depending on WAN bandwidth and availability, your NFS datastores can be restored to within ten minutes of the point of failure.
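On the Control Station, a replication session is created with nas_replicate. This is a rough sketch only; the session, file system, pool, and interconnect names are examples, and the syntax should be checked against your DART release:

```text
nas_replicate -create rep_NFS_DS01 -source -fs NFS_DS01 \
  -destination -pool dr_clar_r5_performance \
  -interconnect NY_to_DR -max_time_out_of_sync 10
```

The -max_time_out_of_sync value of 10 corresponds to the ten-minute window described above.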

The actual NFS failover process can be very time consuming: once you initiate the failover, you still have to mount the datastore on the target virtual environment and add each VM to the inventory. When you finally have all of the VMs loaded, you must then configure the networking.
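The manual mount step can be done from the ESX console; a hedged sketch, with an example Data Mover IP, export path, and VM path:

```text
# mount the replicated NFS export as a datastore
esxcfg-nas -a -o 10.10.20.30 -s /NFS_DS01 NFS_DS01
esxcfg-nas -l
# register each VM found on the datastore
vim-cmd solo/registervm /vmfs/volumes/NFS_DS01/vm01/vm01.vmx
```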

Fortunately, VMware Site Recovery Manager (SRM) has a plug-in which will automate the entire process. Once you have configured the failover policies, SRM will mount all the NFS stores and bring the virtual environment online. These are just a few features of VNX Replicator that can integrate with your systems; if you are looking for a deeper dive or other creative replication solutions, contact me.

Photo Credit: hisperati

Bringing Sexy Back! With Cisco, VMware and EMC Virtualization

By | Cisco, EMC, Virtualization, VMware | No Comments

Yeah I said it: “IDS just brought Sexy Back!”

For a refresher: a recent customer sought to finally step into the virtual limelight. This particular customer, whose vertical is in the medical industry, purchased four Cisco chassis and eleven B200 blades. Alongside the Cisco servers they purchased an EMC VNX 5500 OE unified array with two Cisco MDS 9148 FC switches.

Our plan was to migrate over one hundred virtual machines from fifteen physical ESX hosts to the new Cisco/VMware 5.0 environment.

Once we successfully moved the VMs over, we began virtualizing the remaining physical hosts. The reality is that not all hosts could be moved so abruptly, so we are still in the process of converting them. However, just by consolidating the ESX hosts and ten physical servers, our client is already seeing tremendous drops in power usage, server management overhead and data center footprint.

Here is what we started with, otherwise known as the "before sexy":

A picture is worth a thousand words, so let me just show you exactly what “sexy” looks like in their current data center:

The moral of the story is not to dive head first into centralized storage and virtualization, but to consider what it costs to manage multiple physical servers with applications that under-utilize your hardware. It is also good to keep in mind what it costs to keep those servers operational (power/cooling) and maintained. If you don't know what these figures look like, or how to bring sexy back into your data center, just ask me, resident Justin Timberlake over here at IDS.

Photo Credit: PinkMoose

Integrating EMC RecoverPoint Appliance With VMware Site Recovery Manager

By | Disaster Recovery, EMC, How To, Virtualization, VMware | No Comments

For my "from the field" post today, I'll be writing about integrating EMC RecoverPoint Appliance (RPA) with VMware Site Recovery Manager (SRM). Before we dive in, if you are not familiar with RPA technology, let me start with a high-level overview:

RPAs are block-LUN, IP-based replication appliances, zoned via FC to all available storage ports. RPAs leverage a "replication journal" to track changes within a LUN; once the LUNs have fully seeded between the two sites, only the changed deltas are sent over the WAN. This allows you to keep your existing WAN link rather than spending more money on WAN expansion. The journal allows the RPA to efficiently track changes to the LUNs and replicate only the differences over the WAN. Because the RPA tracks changes to the LUNs, it can create a bookmark every 5-10 seconds, depending on the rate of change and bandwidth, keeping your data up to date and within a 10-second recovery point objective. RPA also allows you to restore or test your replicated data from any one of the bookmarks created.

Leveraging RPAs with VMware LUNs greatly increases the availability of your data during any maintenance or disaster. Because RPAs replicate block LUNs, they will replicate LUNs that have datastores formatted on them.

At a high level, to fail over a datastore you would:

  1. Initiate a failover on the RPA.
  2. Add the LUNs into an existing storage group in the target site.
  3. Rescan your HBAs in vSphere.
  4. Once the LUNs are visible you will notice a new data store available.
  5. Open the datastore and add all the VMs into inventory.
  6. Once all the VMs are added, configure your networking and power up your machines.

Although this procedure may seem straightforward, it will increase your RTO (Recovery Time Objective).
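Steps 2 and 3 above can be sketched on the CLI; the storage group and LUN numbers are examples, and the rescan syntax shown is for ESXi 5's esxcli:

```text
naviseccli -h <SP_IP> storagegroup -addhlu -gname DR_ESX_SG -hlu 10 -alu 27
esxcli storage core adapter rescan --all
```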

With the VMware Site Recovery Manager (SRM) plug-in, the failover procedure can be automated. With SRM you have the ability to build policies for which vSwitch you want each VM to move to, as well as which VMs you want to power up first. Once the policies are built and tested (yes, you can test failover), to fail over your virtual site you simply hit the failover button and watch the magic happen.

SRM will automate the entire failover process and bring your site online in a matter of seconds or minutes, depending on the size of your virtual site. If you are considering replicating your virtual environment, I'd advise considering how long you can afford to be down and how much data you can afford to lose. RecoverPoint Appliance and Site Recovery Manager together can assure that you achieve your disaster recovery goals.

To Snapshot Or Not To Snapshot? That Is The Question When Leveraging VNX Unified File Systems

By | Backup, Data Loss Prevention, Disaster Recovery, How To, Replication, Security, VMware | No Comments

For those of you who are leveraging VNX Unified File systems, were you aware that you have the ability to checkpoint your file systems?

If you don't know what checkpoints are: a checkpoint is a point-in-time copy of your file system. The VNX gives you the ability to automate the checkpoint process. Checkpoints can run every hour, or at any designated interval, and can be kept for whatever length of time is necessary (assuming, of course, that enough space is available in the file system).

Checkpoints by default are read-only and are used to revert files, directories and/or the entire file system to a single point in time.  However, you can create writable checkpoints which allow you to snap an FS, export it, and test actual production data without affecting front-end production. 
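From the Control Station, checkpoints are managed with fs_ckpt. A rough sketch follows; the file system and checkpoint names are examples, and the -readonly n flag for creating a writable checkpoint should be verified against your DART release:

```text
fs_ckpt ProdFS01 -name ProdFS01_ckpt1 -Create
fs_ckpt ProdFS01 -list
fs_ckpt ProdFS01_ckpt1 -Create -readonly n
```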

VNX checkpoints also leverage Microsoft VSS, allowing users to restore their files to previous points in time created by the VNX. With this integration users can restore their own files, avoiding the usual calls from users who have accidentally corrupted or deleted their files. Still, there are some concerns as to how big checkpoints can get. The VNX will dynamically grow the checkpoints based on how long you need them and how many you take on a daily basis. Typically a checkpoint will consume at most about 20% of the file system size, and even that percentage depends on how much data you have and how frequently the data changes.

For file systems larger than 16TB, achieving successful backups can be a difficult task. With NDMP (Network Data Management Protocol) integration you are able to back up the checkpoints and store just the changes instead of the entire file system.

Take note that replicating file systems to other VNX arrays will carry your checkpoints over, giving you an off-site copy of the checkpoints made on the production FS. Backups of larger file systems can become extremely difficult and time consuming; by leveraging VNX Replicator and checkpoints you gain the ability to manage the availability of your data from any point in time you choose.

Photo Credit: Irargerich

Part II: How To Create A LUN With EMC Unisphere & Allocate It To An Existing Host

By | EMC, How To | No Comments

A while back, I wrote a blog explaining how to create a LUN for ex-Navisphere users. Here I will go more in depth with the procedure: we will be "binding a LUN" from an existing RAID group or pool and allocating it to an existing storage group.

Let's begin with:

Logging into Unisphere:

  1. Open Internet Explorer or another web browser.

  2. Type in the IP of the Control Station or Storage Processor:

              a) http://<IP of array>




     3.    Type your username and password when prompted and click "Login".

             a) EMC default: sysadmin / sysadmin.




     4.    Select "System List" and click on the array you want to create a LUN on:




Navigating Unisphere – “Creating a LUN”:

  1. The following Dashboard will appear – results may vary depending on user settings:



Creating a LUN

  1. Hover the mouse over the "Storage" tab and select "LUNs":




      2.    Once the following screen appears, click "Create":




      3.    The following screen will appear; select which "Storage Pool Type" you will be creating the LUN from:




     4.      Once you select the Storage Pool Type, select the Storage Pool or RAID group you will be binding the LUN to:




     5.       a)  Type the size of the LUN in the "User Capacity" field.

                b)  Select the ID you want the LUN to have.

                c)  Optionally select "Name" to give your LUN a name instead of an ID.

                d)  If you want to create multiple LUNs of equal size, set "Number of LUNs to create".

                e)  To commit, select "Apply".




OPTIONAL: If you want to specify a FAST tiering policy, select the "Advanced" tab and choose the policy. Note this option is only available for LUNs created in a pool.




      6.    The following message will appear; select "Yes" to proceed, then "OK" to complete:




Adding LUNs To Existing Storage Groups

  1. Right-click the LUN and select "Add to Storage Group" to allocate the newly created LUN to an existing storage group:




      2.    Select the storage group you wish to add the LUN to.

              a) Click the forward arrow and click "OK".

              b) Optionally, you can select multiple storage groups to allocate the LUN to multiple hosts.




You have now allocated the LUN to your existing host. Rescan for devices with your host's disk management application, then partition the devices and create volumes or datastores using your OS disk provisioning tool.
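The same create-and-allocate flow maps to naviseccli roughly as follows for a pool LUN; the pool name, LUN IDs, and group name are examples to adapt:

```text
naviseccli -h <SP_IP> lun -create -capacity 100 -sq gb -poolName "Pool 0" -l 42 -name SQL01_Data01
naviseccli -h <SP_IP> storagegroup -addhlu -gname SQL01_SG -hlu 1 -alu 42
```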

Photo Credit: oskay

EMC Avamar Virtual Image Backup: File Level Restore Saves Customer from “Blue Screen” Death

By | Avamar, Backup, Disaster Recovery, EMC | No Comments

Recently, a customer of ours had a mission critical Virtual Machine “blue screen.” Yikes! Good news was their environment was leveraging Avamar Virtual Image backups. Bad news was the VM was in an unstable state for a while, and every time the VM was restored it continued to “blue screen.” Therefore, the OS was corrupted—one of the many joys of IT life!

To shed my title of Debbie Downer, let me explain that their environment was also leveraging Avamar Virtual File Level Restore ("FLR"). I must say, in my experience restoring applications, the data is priority one.

The picture couldn't have been more beautiful: they had a Win2k8 template with SQL loaded, and they simply extracted the database files from the VMDK backup using FLR and restored them to the new VM, with the data intact and up to date. Take that, tape! No requesting or loading tapes to restore anything, even five years later!

If you are not familiar with EMC Avamar FLR, it is essentially the ability to extract single objects out of virtual image backups. This is done with a proxy agent that lives in your virtual environment and will mount your backups and extract any data that exists within the VM. That means a single backup of your VM, plus the ability to restore anything within the VMDK without having to load a new VM.

This feature can be used in many ways: one being the dramatic example I just gave, another being the ability to use the data files for testing in other VMs. Although this is just a single feature example of the many abilities of Avamar, its usage will greatly reduce your RPO and RTO.

In my experience, leveraging Avamar and virtual file level restore will improve your virtual restore procedures and bring the peace of mind that your data is within arm's reach at any time of day. As I continue to post about Avamar features and capabilities from the field, I've developed this as my slogan for the series: keep your backups simple … and down with tape!

Photo Credit: altemark

How to Create a LUN on an EMC VNX Array w/ Unisphere (for Ex-Navisphere Users)

By | EMC, Storage | No Comments

As the VNX unified storage array continues to roll out, concerns increase for customers who have known Navisphere for years. For example:

“Where is everything and how the hell do I create a LUN?”

In order to better prepare users for this query and other simple tasks, let’s create a LUN from an existing RAID group.

Let's start by logging in and selecting the system to be configured:

Once you select your array you will see the following display:

Best practice for maneuvering around Unisphere is to use the main tabs at the top of the interface:

Now there are two ways to create a LUN: by highlighting the "Storage" tab and selecting "LUNs" as seen below, or by right-clicking the RAID group you desire on the Pools/RAID Groups page. The best way to allocate a LUN is from the "Pools/RAID Groups" page under the "Storage" tab, one of the task panes illustrated below:

Be sure to select the RAID group or storage Pool you wish to create a LUN from:

As seen below, you can tell how much storage is available via a bar graph and a free capacity column; that means no more right-clicking on each group to find out how much it can provide.

Now right-click the RAID group that you wish to create storage from and select "Create LUN":

This screen has not changed much; the only difference is that you can now name your LUN ahead of time, as in the example below. We will create two LUNs of 250GB each, named Exchange 2010 DB, with a starting ID of 00. This will create two LUNs named Exchange 2010 DB_0 and Exchange 2010 DB_1. Then just click "Apply" and "Done".
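For a RAID group LUN like these, the classic bind syntax still works in naviseccli. A sketch with example LUN and RAID group numbers; the chglun rename is from memory, so verify the flag before relying on it:

```text
naviseccli -h <SP_IP> bind r5 20 -rg 5 -cap 250 -sq gb
naviseccli -h <SP_IP> chglun -l 20 -name "Exchange 2010 DB_0"
```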

Below is the RAID Groups pane; you will see a details pane for each RAID group you select. Highlight all the LUNs you wish to allocate to a storage group and select "Add to storage group."

Assuming you have already created a storage group, simply select it and hit the forward arrow. Once you hit "OK" you have successfully created a LUN and added it to an existing storage group.

I hope this tutorial has helped clear up some of your questions. Stay tuned as we continue to create FAST pools and monitor FAST Tiering.

Photo Credit: oskay

Leveraging EMC VNX & NFS To Work For Your VMware Environments #increasestoragecapacity

By | Deduplication, EMC, Replication, Storage, Virtualization, VMware | No Comments

Storage Benefits
NFS (Network File System) is native to UNIX and Linux. Because the NFS protocol is file-based, NFS file systems can be provisioned thin rather than thick, as is typical with iSCSI or Fibre Channel. Provisioning LUNs or datastores thin allows the end user to manage their NAS capacity efficiently. Users have reported a 50% increase in both capacity and usable space.

NFS datastores are also much easier to attach to hosts than FC or iSCSI: there are no HBAs or Fibre Channel fabric involved, and all that needs to be created is a VMkernel port for networking. NAS and SAN capacity can quickly become scarce if the end user can't control the amount of storage being used, or if there are VMs with over-provisioned VMDKs. NFS file systems can also be deduplicated: not only are users saving space via thin provisioning, the VNX can track similar data and store only the changes to the file system.
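End to end, creating and exporting a file system for use as an NFS datastore looks roughly like this on the Control Station; the file system name, size, pool, and subnet are examples, so check the options against your release:

```text
nas_fs -name NFS_DS01 -create size=500G pool=clar_r5_performance
server_mountpoint server_2 -create /NFS_DS01
server_mount server_2 NFS_DS01 /NFS_DS01
server_export server_2 -Protocol nfs -option root=10.10.20.0/24,access=10.10.20.0/24 /NFS_DS01
```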

EMC and VMware's best practice is to use deduplication on NFS exports which house ISOs, templates and other miscellaneous tools and applications. Enabling deduplication on file systems which house VMDKs is not a best practice, because VMDKs do not compress well. Automatic Volume Management (AVM) can also stripe NFS volumes across multiple RAID groups (assuming the array was purchased with more than just six drives), which increases the I/O performance of the file system and the VMs. AVM also extends file systems transparently, which benefits VMware: it will extend the file system onto the next available empty volume, meaning that if you add drives to the file system you will be increasing the performance of your virtual machines.

Availability Benefits
Using VNX SnapSure, snapshots can be taken of the NFS file systems and mounted anywhere, in both physical and virtual environments. NFS snapshots allow you to mount production datastores in your virtual environment and use them for testing VMs without affecting production data. Leveraging SnapSure allows the end user to keep up with RTO and RPO objectives: SnapSure can create 96 checkpoints and 16 writable snapshots per file system. Not to mention the ease of use SnapSure has over SnapView: SnapSure is configured at the file system level, so you just right-click the file system, select how many snapshots you need, add a schedule, and you're finished.

From my experience in the field, end users find this process much easier than SnapView or Replication Manager. Using VNX NFS also enables the user to replicate the file system to an offsite NS4-XXX without adding any additional networking hardware. VNX Replicator allows the user to mount file systems at other sites without affecting production machines. Users can replicate up to 1024 file systems, with 256 active sessions.

Networking Benefits
VNX Data Movers can be purchased with 1Gb/s or 10Gb/s NICs. Depending on your existing infrastructure, the VNX can leverage LACP or EtherChannel trunks to increase the bandwidth and availability of your NFS file systems. LACP trunks enable the Data Mover to monitor and proactively reroute traffic across all available NICs in the Fail-Safe Network, increasing storage availability. In my experience, customers leveraging 10Gb NFS have seen a huge improvement in reads and writes to disk and storage, as well as in vMotion from datastore to datastore, with up to 100% bandwidth and throughput.

Photo Credit: dcJohn

EMC VSI Plug-in To The Rescue! Saving You From Management via Pop Up Screens (#fromthefield)

By | Clariion, EMC, Networking, Virtualization, VMware | No Comments

Most administrators have multiple monitors so that they can manage multiple applications with one general view. Unfortunately, what ends up happening is that your monitors start looking like a pop up virus—a window for storage, a window for networking, a window for email and a window for Internet.

EMC and VMware have brought an end to managing storage outside of your virtual environment. If you haven't heard already, EMC has released new EMC storage plug-ins for VMware. Now I don't know about you, but as a consultant and integrator I can tell you mounting NFS shares in VMware is a bit of a process. If you're not familiar with Celerra or virtual provisioning, adding NFS storage can be a hassle, no doubt:

1. You have to create the interfaces on the Control Station.
2. Create a file system.
3. Create the NFS export and add all hosts to the root and access boxes.
4. Create a Datastore.
5. Scan each host individually until storage appears in every host.


The EMC VSI Unified Storage plug-in allows you to provision NFS storage from your Celerra right from the vCenter client. The only thing that needs to be completed ahead of time is the Data Mover interfaces. Once you configure the interfaces, you'll be able to provision NFS storage from your vCenter client. When you are ready to provision storage, download and install the plug-in and NaviCLI from your Powerlink account, open your vSphere client, right-click your host, select EMC -> Provision Storage, and the wizard will take care of the rest. When the wizard asks for an array, select either Celerra or Clariion (if you select Celerra you will need to enter the root password). The great thing about the plug-in is that it gives VMware administrators the ability to provision storage from the vCenter interface.

The EMC VSI Pool Management plug-in allows you to manage your block-level storage from your vCenter client as well. We all know the biggest pain is having to rescan each host over and over again just so they each see the storage. Congratulations! The Pool Management tool allows you to both provision storage and scan all HBAs in the cluster with a single click. With EMC Storage Viewer, locating your LUNs and volumes is just as easy: once installed, Storage Viewer gives you a full view into your storage environment right from your vCenter client.

In summary, these plug-ins will increase your productivity and give some room back to your monitors. If you don't have a Powerlink account, sign up for one; it's easy to do, and Powerlink has more information on how to manage VMware and EMC products.

Hope you have enjoyed my experience from the field!

Photo Credit:PSD