Category

EMC

Bringing Sexy Back! With Cisco, VMware and EMC Virtualization

By | Cisco, EMC, Virtualization, VMware | No Comments

Yeah I said it: “IDS just brought Sexy Back!”

As part of a hardware refresh, a recent customer sought to finally step into the virtual limelight. This particular customer, whose vertical is the medical industry, purchased four Cisco chassis and eleven B200 blades. Alongside the Cisco servers they purchased an EMC VNX 5500 Unified array with two Cisco MDS 9148 FC switches.

Our plan was to migrate over one hundred Virtual Machines from fifteen physical ESX hosts to the new Cisco/VMware 5.0 environment.

Once we successfully moved the VMs over, we began virtualizing the remaining physical hosts. The reality is that not all hosts could be moved so abruptly, so we are still in the process of converting them. However, just by moving the ESX hosts and ten physical servers, our client is already seeing tremendous drops in power usage, server management overhead and data center space consumed.

Here is what we started with, otherwise known as the “before sexy”:

A picture is worth a thousand words, so let me just show you exactly what “sexy” looks like in their current data center:

The moral of the story is not to dive head first into centralized storage and virtualization, but to consider what it costs to manage multiple physical servers with applications that under-utilize your hardware. It’s also good to keep in mind what it costs to keep those servers operational (power/cooling) and maintained. If you don’t know what these figures look like, or how to bring sexy back into your data center – just ask me, resident Justin Timberlake over here at IDS.

Photo Credit: PinkMoose

Integrating EMC RecoverPoint Appliance With VMware Site Recovery Manager

By | Disaster Recovery, EMC, How To, Virtualization, VMware | No Comments

For my “from the field” post today, I’ll be writing about integrating EMC RecoverPoint Appliance (RPA) with VMware Site Recovery Manager (SRM). Before we dive in, if you are not familiar with RPA technology, let me start with a high-level overview:

RPAs are block LUN, IP-based replication appliances. They are zoned via FC to all available storage ports and leverage a replication journal to track changes within a LUN. Once the LUNs have fully seeded between the two sites, only the changed deltas are sent over the WAN, which allows you to keep your existing WAN link rather than spending more money on WAN expansion. Because the RPA can track changes to the LUNs and replicate only the differences, it can create a bookmark every 5-10 seconds depending on the rate of change and bandwidth. This keeps your data up to date and within roughly a 10-second recovery point objective, and RPA also allows you to restore or test your replicated data from any one of the bookmarks created.

Leveraging RPA with VMware LUNs greatly increases the availability of your data during maintenance or a disaster. Because RPAs replicate block LUNs, they will replicate LUNs that have datastores formatted on them.

At a high level, to fail over a datastore manually you would do the following (a scripted sketch of the host-side steps follows the list):

  1. Initiate a failover on the RPA.
  2. Add the LUNs into an existing storage group at the target site.
  3. Rescan your HBAs in vSphere.
  4. Once the LUNs are visible you will notice a new datastore available.
  5. Open the datastore and add all the VMs into inventory.
  6. Once all the VMs are added, configure your networking and power up your machines.
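
If you want to script the host-side portion of steps 3 through 5 rather than clicking through the vSphere Client, a minimal sketch from the ESXi shell looks something like the following. This assumes an ESXi 5.x host with shell access, and the datastore name “Replicated_DS” is a placeholder for whatever your replicated datastore is called.

    # Rescan all HBAs so the newly presented (failed-over) LUNs are seen
    esxcli storage core adapter rescan --all

    # Rescan for VMFS volumes so the datastore on the replicated LUN shows up
    vmkfstools -V

    # Register each VM found on the recovered datastore back into inventory
    for vmx in /vmfs/volumes/Replicated_DS/*/*.vmx; do
      vim-cmd solo/registervm "$vmx"
    done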

Although this procedure may seem straightforward, your RTO (Recovery Time Objective) will increase.

With the VMware Site Recovery Manager (SRM) integration plug-in, the failover procedure can be automated.  With SRM you have the ability to build policies defining which vSwitch you want each VM to move to, as well as which VMs you want to power up first.  Once the policies are built and tested (yes, you can test failover), to fail over your virtual site you simply hit the failover button and watch the magic happen.

SRM will automate the entire failover process and bring your site online in a matter of seconds or minutes, depending on the size of your virtual site.  If you are considering replicating your virtual environment, I’d advise considering how long you can afford to be down and how much data you can afford to lose.  The combination of RecoverPoint Appliance and Site Recovery Manager can help ensure that you achieve your disaster recovery goals.

The Shifting IT Workforce Paradigm: From Sys Admin to Capacity Planners

By | Cloud Computing, EMC, Virtualization | No Comments

We talk about a lot of paradigm shifts in IT: the shift to a converged network, the shift to virtualization, etc. There is a more important shift happening, however, that we aren’t talking about nearly enough: the absolutely necessary shift in the people who make up our IT workforce.

The IT field as a whole is at, or will soon be approaching, one of those critical points in our skill curve where today’s critical skills are going to be all but obsolete. It’s similar to the sudden onset of open systems after the mainframe’s dominance in the datacenter ended: we had no one who knew how to operate and tune these new systems, and that kept the adoption curve somewhat slow while the gap was resolved through re-training and an influx of workers who’d had exposure to UNIX through their college education.

We’re at that stage again, or pretty near approaching it. The concept of the “private cloud” is going to stall soon, I believe, not because the technology doesn’t work, and not because it isn’t useful, but because we don’t have people in IT who are trained to deal with it. Let’s be very clear – this isn’t a tools issue. I’ve written about the tools problem we have with private cloud in the past, but this is different. This issue is actually much harder to resolve because it isn’t as simple as taking an employee who is used to CICS commands and teaching them Solaris commands to use instead. This requires a different mindset, a different way of thinking about IT and a realization that the value of the IT worker is not in how well they can script a complex set of commands, but in harnessing the power of the information they ultimately control.

“Private Cloud” is not about a technology. It is about creating an agile utility the business can use any way they need anytime they want. It is about getting out of the business of clicking Agree, Next, Next, Next, Next, Finish and getting into the business of strategic capacity management and information analytics. This involves skills most IT people either don’t have or aren’t allowed to use, because they are currently machine managers, rack and stack specialists, and uptime wizards. These new skills require less mechanical action and more interaction with the business. We need to shift from being simply systems administrators to capacity planners (and more).

I’ve been a capacity planner and a systems engineer in IT departments. They’re different jobs, entail different ways of thinking, require different levels of interaction with the business, and don’t have a lot of crossover in skill sets other than a fundamental knowledge of how systems work. I’ve talked to several customers and prospects about this, and they all seem to recognize a skills train is headed toward them, but they don’t have any idea how big the train is, what direction it’s traveling, whether it left Chicago or Philadelphia, or how to get on it.

There are a few folks out there who seem to realize what is happening and they’re trying to get in front of it. Although EMC is wrapping the concept all around their over-hyped, buzz-centric use of the word “cloud”, they are offering some new courses within their Proven Professional program that seem to grasp the shift. I’ve seen a few seminar fliers come through my mail that might hit the mark. The problem is they’re all skimming the surface. We need some fundamental changes at the university level, and perhaps a shift away from the pure technology focus in certification programs, to accelerate the paradigm shift.

I’m interested in comments here. Is your organization training you to be the most useful asset you can be to the business in this shift or are they taking the new technology and keeping you in your same role? Are there new educational opportunities I’m not seeing in other parts of the country to help us move from system administrators to business capacity analysts?

Let me know.

Photo Credit: BiblioArchives/LibraryArchives

Fun With DARTs & Replication Between A New VNXe And A Celerra NS-120

By | EMC, How To, Replication | No Comments

A couple of weeks ago I had some fun configuring a new VNXe to replicate with a Celerra NS-120. Here is a transcript of how I got it to work and some of the oddities I encountered:

1. I started out by checking the ESM (Support Matrix): the configuration is supported as long as the Celerra is running DART 6.0. I then upgraded the NS20 (CX3-10 back-end running the latest [and last] R26 code) to 6.0.41.

2. Moving along, I set up the interconnects using the “Unisphere” Hosts Replication Connection wizard. I validated the interconnects using nas_cel -interconnect -list on the NS20.

3. I had some initial issues with routing that were quickly resolved and the interconnects looked good to go.

4. This is where it gets dicey: I started out using the wizard on the NS20 Unisphere to replicate a filesystem. Apparently, the NS20 can’t set up replication destination storage and doesn’t seem to be able to enumerate/read the remote filesystem names.

I was able to see a list of remote filesystem IDs though, so this started me thinking: what if I could log in to the remote “Celerra” (read: the DART instance on the VNXe) to decode which filesystem correlated to which ID, i.e. run nas_fs -list?

I tried SSHing to the VNXe and saw that SSH was shut off, so I started poking around in the service options and realized that I could enable it. I did that, SSHed to the box and logged in as “service”, because admin didn’t work. From there, I su’d to nasadmin and was prompted for the root password. I tried nasadmin, the service password and a couple of other passwords I knew of, but it timed out after three tries. However, I was in a nasadmin context, so I ran the nas_fs -list command and got precisely what I was looking for – the list of filesystem IDs mapped to filesystem names.

5. Time services – for replication to work, the clocks on the respective Data Movers have to be within ten minutes (preferably five) of each other. I thought I would proactively double-check and set NTP on the VNXe “server_2” – however, I was shut down, because that requires root permissions. Luckily the time was pretty close, so I was good there (NOTE: the Data Mover was set to UTC – probably by design, but it required conversion to local time).

6. By this time I realized that using the Celerra Manager/Unisphere wizards was not likely to work, so I logged on to the NS20 and ran nas_replicate -create FS_Name -source -fs id=552 -destination -fs id=30 -interconnect id=20003 -max_time_out_of_sync 10, which errored out after a few minutes.

I did some digging on iView and found Primus article emc263903, which referenced logging in as root to run the command. OK, I have the NS20 root password, so I did that and got the error message “destination filesystem not read-only”. I had created the “Shared Folder” (otherwise known to us old-timers as a “file system”) as a replication target – don’t you think that if you are creating a replication target, the wizard would mount it as read-only?

7. OK, back on the VNXe through SSH as nasadmin: I run server_unmount and am prompted for the root password again; it errors out three times and then, check – it’s unmounted! I run server_mount with -o ro, get prompted for the root password, and error out three more times.

8. Back on the NS20, I re-run the nas_replicate command and it errors again, this time with a “destination not empty” message. I used the -overwrite option, because when I provisioned the destination filesystem, the minimum size that the wizard presented for the save was the same size as the destination filesystem …

Finally success: the filesystem is replicating!
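
For reference, here is the rough command sequence that finally got things working, pieced together from the steps above. Treat it as a sketch rather than a recipe: the filesystem IDs, session name and interconnect ID are from this environment, the mount point (/Target_FS) is a placeholder, and the exact nas_replicate options can vary by DART release.

    # On the NS20: confirm the interconnects are up
    nas_cel -interconnect -list

    # On the VNXe (SSH in as service, then su to nasadmin): map filesystem IDs to names
    nas_fs -list

    # On the VNXe: unmount the destination filesystem and remount it read-only
    server_unmount server_2 /Target_FS
    server_mount server_2 -o ro Target_FS /Target_FS

    # On the NS20 (as root): create the session, overwriting the pre-created destination
    nas_replicate -create FS_Name -source -fs id=552 -destination -fs id=30 \
      -interconnect id=20003 -max_time_out_of_sync 10 -overwrite_destination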

Photo Credit: calypso_dayz

What Happens When You Poke A Large Bear (NetApp SnapMirror) And An Aggressive Wolf (EMC RecoverPoint)?

By | Backup, Clariion, Data Loss Prevention, Deduplication, Disaster Recovery, EMC, NetApp, Replication, Security, Storage | No Comments

This month I will take an objective look at two competitive data replication technologies – NetApp SnapMirror and EMC RecoverPoint. My intent is not to create a technology war, but I do realize that I am poking a rather large bear and an aggressive wolf with a sharp stick.

A quick review of both technologies:

SnapMirror

  • NetApp’s controller based replication technology.
  • Leverages the snapshot technology that is fundamentally part of the WAFL file system.
  • Establishes a baseline image, copies it to a remote (or partner local) filer and then updates it incrementally in a semi-synchronous or asynchronous (scheduled) fashion.

RecoverPoint

  • EMC’s heterogeneous fabric layer journaled replication technology.
  • Leverages a splitter driver at the array controller, fabric switch, and/or host layer to split writes from a LUN or group of LUNs to a replication appliance cluster.
  • The split writes are written to a journal and then applied to the target volume(s) while preserving write order fidelity.

SnapMirror consistency is based on the volume or qtree being replicated. If the volume contains multiple qtrees or LUNs, those will be replicated in a consistent fashion. In order to get multiple volumes replicated consistently, you need to quiesce the applications or hosts accessing each of the volumes, take snapshots of all the volumes, and then SnapMirror those snapshots. An effective way to automate this process is to leverage SnapManager.
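
For illustration, on 7-Mode controllers that quiesce-snapshot-replicate sequence looks roughly like the sketch below when driven by hand (SnapManager automates the application quiesce and snapshot naming); the filer, volume and snapshot names here are placeholders.

    # With the application quiesced, create matching snapshots on each source volume
    snap create vol_db  cg_snap_01
    snap create vol_log cg_snap_01

    # Resume the application, then transfer those specific snapshots to the mirrors
    snapmirror update -s cg_snap_01 -S srcfiler:vol_db  dstfiler:vol_db
    snapmirror update -s cg_snap_01 -S srcfiler:vol_log dstfiler:vol_log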

After the initial synchronization, SnapMirror targets are accessible as read-only. This provides an effective source volume for backups to disk (SnapVault) or tape. The targets are not read/write accessible, though, unless the SnapMirror relationship is broken or FlexClone is leveraged to make a read/write copy of the target. The granularity of replication and recovery is based on a schedule (standard SnapMirror) or on semi-synchronous continuous replication.

When failing over, the SnapMirror relationship is simply broken and the volume is brought online. This makes DR failover testing and even site-to-site migrations a fairly simple task; I’ve found that many people use this functionality as much for migration as for data protection or disaster recovery. Failing back to a production site is simply a matter of off-lining the original source, reversing the replication, and then failing back once the copy is complete.
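
For the CLI-minded, that failover/failback flow on 7-Mode controllers boils down to something like this sketch (filer and volume names are placeholders):

    # On the destination filer: stop updates and make the mirror writable (failover)
    snapmirror quiesce vol_data
    snapmirror break vol_data

    # On the original source filer: reverse the relationship so changes flow back (failback)
    snapmirror resync -S dstfiler:vol_data srcfiler:vol_data
    # ...after a final update, break the reversed mirror and resync in the original
    # direction to resume normal production replication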

In terms of interface, SnapMirror is traditionally managed through configuration files and the CLI. However, the latest version of OnCommand System Manager includes an intuitive, easy-to-use interface for setting up and managing SnapMirror connections and relationships.

RecoverPoint is like TIVO® for block storage. It continuously records incoming write changes to individual LUNs or groups of LUNs in a logical container aptly called a consistency group. The writes are tracked by a splitter driver that can exist on the source host, in the fabric switch or on a Clariion (VNX) or Symmetrix (VMAXe only today) array. The host splitter driver enables replication between non-EMC and EMC arrays (Check ESM for latest support notes).

The split write I/O with RecoverPoint is sent to a cluster of appliances that package, compress and de-duplicate the data, then send it over a WAN IP link or a local Fibre Channel link.  The target RecoverPoint Appliance then writes the data to the journal.  The journaled writes are applied to the target volume as time and system resources permit, and they are retained as long as there is capacity in the journal volume, which is what allows you to rewind the LUN(s) in the consistency group to any retained point in time.

In addition to remote replication, RecoverPoint can also replicate to local storage. This option is available as a standalone feature or in conjunction with remote replication.

RecoverPoint has a standalone Java application that can be used to manage all of the configuration and operational features. There is also integration for management of consistency groups by Microsoft Cluster Services and VMware Site Recovery Manager. For application-consistent “snapshots” (RecoverPoint calls them “bookmarks”), EMC Replication Manager or the KVSS command-line utilities can be leveraged. Recently a “light” version of the management tool has been integrated into the Clariion/VNX Unisphere management suite.

So, sharpening up the stick … NetApp SnapMirror is a simple-to-use tool that leverages the strengths of the WAFL architecture to replicate NetApp volumes (file systems) and update them either continuously or on a scheduled basis using the built-in snapshot technology. Recent enhancements to System Manager have made it much simpler to use, but it is limited to NetApp controllers. It can replicate SAN volumes (iSCSI or FC LUNs) in NetApp environments, as they are essentially single files within a volume or qtree.

RecoverPoint is a block-based SAN replication tool that splits writes and can recover to any point in time that exists in the journal volume. It is not built into the array; it is a separate appliance that sits in the fabric and leverages array-, fabric- or host-based splitters. I would make the case that RecoverPoint is a much more sophisticated block-based replication tool that provides a finer level of recoverable granularity, at the expense of being more complicated.

 Photo Credit: madcowk

Part II: How To Create A LUN With EMC Unisphere & Allocate It To An Existing Host

By | EMC, How To | No Comments

A while back, I wrote a blog post explaining how to create a LUN for ex-Navisphere users. Here I will go more in depth with the procedure: in this instance we will be “binding” a LUN from an existing RAID group or pool and allocating it to an existing storage group.

Let’s begin, starting with:  

Logging into Unisphere:

  1. Open Internet Explorer or another web browser.

  2. Type the IP address of the Control Station or Storage Processor into the address bar:

              a) http://<IP of array>

 

     3.    Type your username and password when prompted and click Login.

             a) The EMC default is sysadmin / sysadmin.

 

     4.    Select System List and click on the array you want to create a LUN from:

 

Navigating Unisphere – “Creating a LUN”:

  1. The following Dashboard will appear – results may vary depending on user settings:

 
 

Creating a LUN

  1. Hover the mouse over the Storage tab and select LUNs:

 

      2.    Once the following screen appears, click on “Create”:

 

      3.    The following screen will appear: select which “Storage Pool Type” you will be creating the LUN from:  

 

     4.      Once you select the Storage Pool Type, select the Storage Pool or RAID group you will be binding the LUN to:

 

     5.       a)  Type in the size of the LUN in the “User Capacity” field.

                b)  Select the ID you want the LUN to have. 

                c)  To commit select “Apply”.

                d) Optionally select “Name” to give your LUN a name instead of an ID.

                e) If you want to create multiple LUNs of equal size, select “Number of LUNs to create”.

 

OPTIONAL: If you want to specify a FAST tiering policy, select the “Advanced” tab and choose the policy. Note that this option can only be configured for LUNs that are in a pool.

 

      6.    The following message will appear; select “Yes” to proceed, then “OK” to complete:

 

 Adding LUNs To Existing Storage Groups

  1. Right-click the LUN and select “Add to Storage Group” to allocate the newly created LUN to an existing storage group:

 

      2.    Select the “Storage group” you wish to add the LUN to. 

              a) Click the forward arrow and click “OK”.

              b) Optionally, you can select multiple “Storage Groups” to allocate the LUN to multiple hosts.

 

You have now allocated the LUN to your existing host. Refresh your host’s disk management application to rescan for devices, then partition the devices and create volumes or datastores using your OS disk provisioning tool.
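
If you prefer the command line, the same bind-and-allocate flow can be done with naviseccli. This is a rough sketch with placeholder values (SP address, RAID group/pool, LUN number, storage group and HLU), so verify the options against your FLARE/VNX OE release:

    # Bind a 100GB LUN (LUN 25) from RAID group 0, owned by SP A
    naviseccli -h <SP_A_IP> bind r5 25 -rg 0 -cap 100 -sq gb -sp a

    # Or create a 100GB LUN from a storage pool instead of a RAID group
    naviseccli -h <SP_A_IP> lun -create -type nonThin -capacity 100 -sq gb -poolName "Pool 0" -l 25 -name "MyNewLUN"

    # Present LUN 25 to the host's storage group as host LUN (HLU) 5
    naviseccli -h <SP_A_IP> storagegroup -addhlu -gname "MyHostSG" -hlu 5 -alu 25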

Photo Credit: oskay

EMC Avamar Virtual Image Backup: File Level Restore Saves Customer from “Blue Screen” Death

By | Avamar, Backup, Disaster Recovery, EMC | No Comments

Recently, a customer of ours had a mission-critical Virtual Machine “blue screen.” Yikes! The good news was their environment was leveraging Avamar virtual image backups. The bad news was the VM had been in an unstable state for a while, and every time it was restored it continued to “blue screen.” Therefore, the OS was corrupted—one of the many joys of IT life!

To shed my title of Debbie Downer, let me explain that their environment was also leveraging Avamar Virtual File Level Restore (“FLR”). I must say that in my experience restoring applications, the data is priority one.

This picture couldn’t have been more beautiful: they had a Windows 2008 template with SQL loaded, and they simply extracted the database files from the VMDK backup using FLR and restored them to the new VM, with the data intact and up to date. Take that, tape! Never had to request or load tapes to restore anything, five years later!

If you are not familiar with EMC Avamar FLR, basically it is the ability to extract single objects out of the virtual image backups. This is done with a proxy agent that exists within your virtual environment and mounts your backups so you can extract any data that exists within the VM. That means one single backup of your VM gives you the ability to restore anything within the VMDK without having to load a new VM.

This feature can be used in many ways: one being the dramatic example I just gave, another being the ability to use the data files for testing in other VMs. Although this is just one example of Avamar’s many capabilities, using it will greatly reduce your RPO and RTO.

In my experience, leveraging Avamar and virtual file-level restore will improve your virtual restore procedures and bring the peace of mind that your data is within arm’s reach at any time of the day. As I continue to post about Avamar features and capabilities from the field, I’ve developed this as my slogan for the series: keep your backups simple … and down with tape!

Photo Credit: altemark

Isilon Storage (Still) Supports “Block Headed” IT

By | EMC, Isilon, Storage | No Comments

Call me crazy, but imagine this: a high performance, highly available storage system that uses off-the-shelf components and no RAID. And add to that ease of use and the ability to scale to petabytes in minutes. On top of that, throw in some features like snapshots, site-to-site replication, and intelligent auto-tiering. Seeing it yet?

The picture I’m painting is a system that has NetApp throwing stones and EMC wondering what’s next for Celerra, now that it has spent $2.25 billion to keep Isilon out of NetApp’s hands. If you haven’t figured it out yet, I am thoroughly enamored with our friends at Isilon :)

Okay, so I asked you to imagine a highly available system without RAID—yet not without data protection. Isilon’s scale-out architecture distributes incoming writes across nodes, mirroring what they deem small files (less than 128KB) and striping files of 128KB and larger across the nodes in the grid in 16KB chunks, with parity.

Here is where it gets interesting, though: Protection can be set on a block and node protection basis, so you can define how many drives or nodes you want to be able to survive losing. When you lose a drive or a node, the grid rebuilds from parity across free space on the remaining nodes.

But that’s only the first reason for my enamoredness (yep, it’s a word) …

I’m a dyed-in-the-wool “Block Head,” and I know Isilon is a NAS platform, but they really got my attention with the OneFS OS. They handle all the major protocols: NFSv2, v3 and v4, pNFS, FTP, HTTP, and CIFS/SMB. LDAP, NIS, Active Directory and local users and groups are all supported for authentication. For backup, NDMP is supported, either directly to tape through a backup accelerator node or across the front-end Ethernet via 3-Way NDMP.

For us “Block Heads,” Isilon supports iSCSI—after all, a LUN is just a file, right?

Cabling

144 Node Isilon Cluster

Photo Credit: Paul Stevenson

How To: Migrating VMware ESX 2.5 Datastores with EMC SANCopy #fromthefield

By | EMC, How To, Storage, VMware | No Comments

I know this is a little old-school, but since I didn’t find a good reference anywhere online … I am going to cover migrating VMware ESX 2.5 datastores with EMC SANCopy in this month’s installment. If you find yourself in this situation, you will know that it works and how it’s done. I recently ran through this migrating from an EMC Clariion CX-300 to a new VNX5300.

High-level, here are the steps to migrate an ESX 2.5 Datastore with SANCopy
1. Document, Document, Document
2. Setup SANCopy
3. Shut down Guests and Host
4. Start Copy Sessions
5. Reconfigure host to access new array
6. Restart Host and then Guests

This how-to makes some assumptions.
– You know your way around Navisphere and Unisphere
– You know your way around the VMware ESX 2.5 management interfaces

1. Document, document, document. I can’t stress this enough; it’s key to know exactly how things are set up, because it’s essential that we make it look the same on the target system. It is invaluable to have an up-to-date CAP report or Array Configuration report. The goal is to make as little change for the ESX 2.5 host as possible, and the more you record up front, the easier your life will be. The key things to record for each of the attached Datastores are the following:
– Source LUN ID
– Host LUN ID (this is configured and shown in the Storage Group LUN properties)
– Owning SP (this is in LUN properties)
– Size in GB (if this is not an integer value, record the number of blocks from LUN properties)
– LUN WWN (this is in LUN properties, but the best place to get it is from a CAP report – because you can copy and paste)

Here’s a sample table you can use to record the data:

    Datastore Name | Source LUN ID | Host LUN ID | Owning SP | Size (GB / Blocks) | LUN WWN

2. Setup SANCopy. In order to run SANCopy at least one of the arrays needs to have the SANCopy license enabler installed. It makes the process significantly easier if the enabler is installed on both arrays, but at a minimum it should be installed on one array, preferably the target array. If the enabler is installed on both arrays, both arrays will need to be in the same Navisphere management domain. Once the enabler(s) are installed, zone the front end ports on each array to the front end ports on the partner array (i.e. SPA0 on source to SPA0 and SPB1 on target, and so forth). When the FE ports are zoned, create a SANCopy Storage Group on the source array and update the SANCopy connections on both arrays. The initiators should show up and register automatically on the source array.

The next part is to set up the SANCopy sessions. Put each of the source LUNs to be replicated into the SANCopy Storage Group and then run the SANCopy wizard to create a session for each LUN. The wizard walks you through selecting the SANCopy array (target array), Source LUN (use “enter WWN” and paste the LUN WWN in the entry field), Target storage device (right-click and select target storage), and the session details. I find it valuable to set the name of the session to something descriptive, like the source LUN name, size, etc. If both arrays have the SANCopy enabler installed, the selection process is a little more intuitive, as you can see the LUN names, sizes, etc.

3. Shut down Guests and VMware Host. SANCopy is an array-based block copy that copies all blocks in a LUN sequentially to a target LUN. In order to get a consistent, usable copy, the host cannot access the source or target for the duration of the copy. Ideally, when shutting down the VMware guest machines, set the automatic restart value to disabled so they don’t try to start when the new target datastores are mounted when VMware starts.

4. Start SANCopy Sessions. By default SANCopy sessions have a throttle value of 6—I typically change this to 8 or 10 in a migration effort for best throughput. This is equivalent to setting a LUN migration to High. If you are going to start multiple simultaneous migrations, you may need to adjust the Active Sessions per SP from 2 to 4. Start the sessions and monitor until all sessions complete.

5. Reconfigure host to access new array. Zone the host to the new array, making sure to disable or delete the zones to the old array. Power on the host and interrupt the boot POST cycle at the HBA BIOS screen. At this point the HBA should have logged in to the target array—if it has not, re-check zoning and/or use the HBA BIOS utility to rescan targets. Once the HBA is logged in to the array, use the Connectivity Status tool to manually register the host connections. When the host HBA connections are registered, create a Storage Group, adding the LUNs and paying careful attention to setting Host LUN ID values to match the source array configuration. Add the host to the storage group, then exit the HBA BIOS utility and reboot.
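
If you would rather script this step than click through Navisphere/Unisphere, the storage group work can be done with naviseccli once the host’s initiators have logged in and been registered. A rough sketch with placeholder names and numbers follows (match each -hlu value to the Host LUN IDs you documented in step 1):

    # On the target array: create the storage group and connect the registered host
    naviseccli -h <target_SP_IP> storagegroup -create -gname ESX25_Host
    naviseccli -h <target_SP_IP> storagegroup -connecthost -host esx25host -gname ESX25_Host -o

    # Add each migrated LUN, preserving the original Host LUN ID (HLU)
    naviseccli -h <target_SP_IP> storagegroup -addhlu -gname ESX25_Host -hlu 0 -alu 100
    naviseccli -h <target_SP_IP> storagegroup -addhlu -gname ESX25_Host -hlu 1 -alu 101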

6. Restart Host and then Guests. When the host powers up, verify that the datastores are visible and browsable using the VMware client utility. If everything looks good, power on the guests and you are done.

A couple of thoughts … this example was migrating between two Clariion arrays, but SANCopy also works for migrating data from HDS arrays, HP arrays and other SAN platforms to a Clariion—as long as the LUNs are identifiable by LUN WWN or by storage array port WWN and LUN ID.

Photo credit: alex.ragone via Flickr

In a Roomful of Award Winners at EMC World, IDS Is The Last Partner Standing

By | EMC, Uncategorized | No Comments

Two weeks ago, I had the honor of representing IDS at EMC World to receive our Velocity Services Quality Award. I say honor because this is one of those awards that really matters (in my mind) because it is solely based on Customer Feedback—the thing that ultimately drives our business. A little background on the award for anyone who is curious:

Several years ago, EMC implemented a program called the Authorized Services Network (ASN). There were hundreds of resellers in North America certified to sell EMC, but only a select handful could qualify to be ASN-certified and actually perform EMC implementations for their customers. This program requires rigorous testing of multiple Pre-Sales and Post-Sales Engineers to prove that the company is dedicated to not just selling EMC equipment, but providing their customers the highest level of service with their engineering expertise.

Back in 2007, EMC decided to recognize the best of the best by creating an ASN Quality Award for the top implementation partner in North America, based completely on customer feedback. After a reseller performs an implementation for a customer, that customer receives a third-party survey asking how the implementation went, would they use the reseller again, would they recommend them to peers in the industry, etc. Based on those responses, the ASN Partners were ranked and IDS finished at the top of the list, receiving EMC’s first ever ASN Quality Award.

In 2008, EMC decided to open up the Award a bit and presented the award to two partners. In subsequent years, a few more Partners made the list as well. Fast forward to 2011. EMC changed the name of the award to the Velocity Services Quality (VSQ) Award but the concept is exactly the same.

This year, 14 partners received the honor at EMC World for their dedication to engineering excellence and customer satisfaction. They started the awards by naming the first-time winners, then two-time winners, etc. At the tail-end was IDS being announced as the only five-time winner of the prestigious award. To be named the #1 Partner for the largest storage manufacturer in the world, based entirely on Customer Satisfaction, is a huge honor and I was proud to be there accepting on behalf of the IDS team.

First off, I would like to say thank you to our customers. Your dedication to IDS and the services that we provide is what makes us great. We appreciate the long-term business Partnerships and look forward to many more years of joint prosperity.

To our Engineers: thank you for making this award possible! You work long hours at customer sites, study technical materials at night to keep your expertise at the highest possible level, and frequently spend time away from your families supporting the customers that ultimately give us these high marks. You are the lifeblood of this organization, and we appreciate everything that you do, as do our customers.

And finally, to the other VSQ Award Winners this year, congratulations. It is an elite group to be in and I can appreciate all of the hard work that it takes to achieve this level of accomplishment. I look forward to seeing you at the award ceremony for many years to come … and, of course, always being the last man standing.
