Category: Clariion

What Happens When You Poke A Large Bear (NetApp SnapMirror) And An Aggressive Wolf (EMC RecoverPoint)?


This month I will take an objective look at two competitive data replication technologies – NetApp SnapMirror and EMC RecoverPoint. My intent is not to create a technology war, but I do realize that I am poking a rather large bear and an aggressive wolf with a sharp stick.

A quick review of both technologies:

SnapMirror

  • NetApp’s controller-based replication technology.
  • Leverages the snapshot technology that is fundamentally part of the WAFL file system.
  • Establishes a baseline image, copies it to a remote (or local partner) filer, and then updates it incrementally in a semi-synchronous or asynchronous (scheduled) fashion (a quick CLI sketch follows below).
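
For the CLI-inclined, a minimal setup for that baseline-plus-incremental pattern might look something like the sketch below, using classic Data ONTAP 7-Mode style commands (an assumption on my part; filer names, volume names, sizes and the 10 p.m. schedule are made up, and exact syntax varies by ONTAP release):

    # On the destination filer: create and restrict the target volume
    dstfiler> vol create vol1_mirror aggr1 500g
    dstfiler> vol restrict vol1_mirror

    # Baseline transfer, run from the destination
    dstfiler> snapmirror initialize -S srcfiler:vol1 dstfiler:vol1_mirror

    # /etc/snapmirror.conf on the destination: incremental updates at 22:00 daily
    # (fields: source, destination, arguments, then minute hour day-of-month day-of-week)
    srcfiler:vol1 dstfiler:vol1_mirror - 0 22 * *

    # Check transfer and lag status at any time
    dstfiler> snapmirror status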

RecoverPoint

  • EMC’s heterogeneous, fabric-layer, journaled replication technology.
  • Leverages a splitter driver at the array controller, fabric switch, and/or host layer to split writes from a LUN or group of LUNs off to a replication appliance cluster.
  • The split writes are written to a journal and then applied to the target volume(s) while preserving write-order fidelity.

SnapMirror consistency is based on the volume or qtree being replicated. If the volume contains multiple qtrees or LUNs, those will be replicated in a consistent fashion. To replicate multiple volumes consistently, you need to quiesce the applications or hosts accessing each of the volumes, take snapshots of all the volumes, and then SnapMirror those snapshots. An effective way to automate this process is with SnapManager.
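
As a rough illustration of that manual dance (hypothetical volume and snapshot names, and assuming the application has a quiesce or hot-backup mode of its own), the sequence looks something like this:

    # 1. Quiesce the application, or put it into hot-backup mode, on the host(s)

    # 2. Take a snapshot of every volume in the dataset at (roughly) the same moment
    srcfiler> snap create db_data cg_snap_1
    srcfiler> snap create db_logs cg_snap_1

    # 3. Resume the application, then push the updates to the mirror targets
    dstfiler> snapmirror update dstfiler:db_data_mirror
    dstfiler> snapmirror update dstfiler:db_logs_mirror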

After the initial synchronization, SnapMirror targets are accessible as read-only. This provides an effective source volume for backups to disk (SnapVault) or tape. The targets are not read/write accessible, though, unless the SnapMirror relationship is broken or FlexClone is leveraged to make a read/write copy of the target. The granularity of replication and recovery is based on a schedule (standard SnapMirror) or on continual, semi-synchronous replication.
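
If you need a temporarily writable copy of the target without breaking the mirror, FlexClone is the usual route. A hedged sketch (the clone, volume and snapshot names are purely illustrative):

    # On the destination filer: clone a read/write volume off an existing mirror snapshot
    dstfiler> vol clone create vol1_rw_test -s none -b vol1_mirror nightly.0
    # ... run the DR test or backup verification against vol1_rw_test ...
    dstfiler> vol offline vol1_rw_test
    dstfiler> vol destroy vol1_rw_test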

When failing over, the SnapMirror relationship is simply broken and the volume is brought online. This makes DR failover testing and even site-to-site migrations a fairly simple task. I’ve found that many people use this functionality as much for migration as for data protection or disaster recovery. Failing back to a production site is simply a matter of off-lining the original source, reversing the replication, and then failing back once the resync is complete.
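
In CLI terms, the failover and failback sequence looks roughly like the following (hypothetical names again, and heavily simplified; follow the DR runbook for your ONTAP release):

    # Failover: stop updates and make the DR copy writable
    dstfiler> snapmirror quiesce vol1_mirror
    dstfiler> snapmirror break vol1_mirror        # vol1_mirror is now read/write

    # Failback: once the original site is healthy, reverse the relationship so
    # changes made at DR flow back to production, then break and re-establish
    # the original direction
    srcfiler> snapmirror resync -S dstfiler:vol1_mirror srcfiler:vol1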

In terms of interface, SnapMirror is traditionally managed through configuration files and the CLI. However, the latest version of OnCommand System Manager includes an intuitive, easy-to-use interface for setting up and managing SnapMirror connections and relationships.

RecoverPoint is like TiVo® for block storage. It continuously records incoming write changes to individual LUNs or groups of LUNs in a logical container aptly called a consistency group. The writes are tracked by a splitter driver that can exist on the source host, in the fabric switch, or on a Clariion (VNX) or Symmetrix (VMAXe only, today) array. The host splitter driver enables replication between non-EMC and EMC arrays (check the EMC Support Matrix for the latest support notes).

With RecoverPoint, the split write IO is sent to a cluster of appliances that packages, compresses and de-duplicates the data, then sends it over a WAN IP link or a local Fibre Channel link. The target RecoverPoint appliance then writes the data to the journal. The journaled writes are applied to the target volume as time and system resources permit, and they are retained as long as there is capacity in the journal volume, which is what allows the LUN(s) in the consistency group to be rewound to any retained point in time.

In addition to remote replication, RecoverPoint can also replicate to local storage. This option is available as a standalone feature or in conjunction with remote replication.

RecoverPoint has a standalone Java application that can be used to manage all of the configuration and operational features. There is also integration for management of consistency groups by Microsoft Cluster Service and VMware Site Recovery Manager. For application-consistent “snapshots” (RecoverPoint calls them “bookmarks”), EMC Replication Manager or the KVSS command-line utilities can be leveraged. Recently, a “light” version of the management tool has been integrated into the Clariion/VNX Unisphere management suite.

So, sharpening up the stick … NetApp SnapMirror is a simple-to-use tool that leverages the strengths of the WAFL architecture to replicate NetApp volumes (file systems) and update them either continuously or on a scheduled basis using the built-in snapshot technology. Recent enhancements to System Manager have made it much simpler to use, but it is limited to NetApp controllers. It can replicate SAN volumes (iSCSI or FC LUNs) in NetApp environments, as they are essentially single files within a volume or qtree.

RecoverPoint is a block-based SAN replication tool that splits writes and can recover to any point in time that exists in the journal volume. It is not built into the array; it is a separate appliance cluster that sits in the fabric and leverages array-, fabric-, or host-based splitters. I would make the case that RecoverPoint is a much more sophisticated block-based replication tool that provides a finer level of recoverable granularity, at the expense of being more complicated.

 Photo Credit: madcowk

EMC VSI Plug-in To The Rescue! Saving You From Management via Pop Up Screens (#fromthefield)


Most administrators have multiple monitors so that they can manage multiple applications in one general view. Unfortunately, what ends up happening is that your monitors start looking like a pop-up virus: a window for storage, a window for networking, a window for email, and a window for the Internet.

EMC and VMware have brought an end to that, at least for managing storage in your virtual environment. If you haven’t heard already, EMC has released new EMC storage plug-ins for VMware. Now, I don’t know about you, but as a consultant and integrator I can tell you that mounting NFS shares to VMware is a bit of a process. If you’re not familiar with Celerra or with virtual provisioning, adding NFS storage can be a hassle, no doubt. The manual process goes roughly like this (a CLI sketch follows the list):

1. Create the interfaces on the Control Station.
2. Create a file system.
3. Create the NFS export and add all hosts to the root and access lists.
4. Create a datastore.
5. Rescan each host individually until the storage appears on every host.

 Whew!
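
For anyone who hasn’t lived it, the manual version of those steps looks roughly like the sketch below, using the Celerra Control Station and classic ESX command lines. Treat it as a hedged example: the interface, file system, pool, IP addresses and datastore names are all made up, and exact syntax varies by DART and ESX release.

    # On the Celerra Control Station
    $ server_ifconfig server_2 -create -Device cge0 -name nfs_if -protocol IP 10.0.1.50 255.255.255.0 10.0.1.255
    $ nas_fs -name vmware_ds1 -create size=500G pool=clar_r5_performance
    $ server_mount server_2 vmware_ds1 /vmware_ds1
    $ server_export server_2 -Protocol nfs -option root=10.0.1.0/24,access=10.0.1.0/24 /vmware_ds1

    # Then, on every ESX host in the cluster, mount the export
    $ esxcfg-nas -a -o 10.0.1.50 -s /vmware_ds1 vmware_ds1

Multiply that last step by the number of hosts in your cluster and it’s easy to see why a wizard inside Virtual Center is welcome.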

The EMC VSI Unified Storage plug-in allows you to provision NFS storage from your Celerra right from the Virtual Center client. The only thing that needs to be completed ahead of time is the Data Mover interfaces. Once you configure the interfaces, you’ll be able to provision NFS storage from your Virtual Center client. When you are ready to provision storage, download and install the plug-in and NaviCLI from your Powerlink account, open your Virtual Center client, right-click your host, select EMC -> Provision Storage, and the wizard will take care of the rest. When the wizard asks for an array, select either Celerra or Clariion (if you select Celerra, you will need to enter the root password). The great thing about the plug-in is that it gives VMware administrators the ability to provision storage right from the vCenter interface.

The EMC VSI pool management plug-in allows you to manage your block-level storage from your VC client as well. We all know the biggest pain is having to rescan each host over and over again just so they each see the storage. Congratulations! The VSI pool management tool allows you to both provision storage and scan all HBAs in the cluster, all with a single click. With EMC Storage Viewer, locating your LUNs and volumes is just as easy. Once installed, Storage Viewer gives you a full view into your storage environment right from your VC client.

In summary, these plug-ins will increase your productivity and give some room back to your monitors. If you don’t have a Powerlink account, sign up for one at www.powerlink.emc.com. It’s easy to sign up, and you’ll find more information there on how to manage VMware and EMC products.

Hope you have enjoyed my experience from the field!

Photo Credit: PSD

The Effects of Random IO on Disk Drive Performance


I recently had the opportunity to review some performance data from one of our client’s EMC Clariion arrays. I was specifically looking at the read performance of the disk drives during their backup window. I discovered a great visual example showing the effect of random IO on disk drive IOPS and throughput.

The graph below depicts the following metrics:

  • Disk drive seek distance (GB) – green line, scale on the right
  • Disk drive total IO (IOPS) – black line, scale on the left
  • Disk drive total throughput (MB/s) – red area, scale on the right

Zone 1 – Sequential

  • Seek Distance low, less than 1 GB
  • High total IO, 200-275 IOPS
  • High total disk throughput, about 10-13 MB/s

Zone 2 – Getting Random

  • Seek Distance high, 2-6 GB
  • Lower total IO, 25-100 IOPS
  • Lower disk throughput, less than 3-4 MB/s

Zone 3 – Random

  • Seek Distance High, greater than 9 GB
  • Low total IO, less than 25 IOPS
  • Low disk throughput, 1 MB/s
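
As a rough sanity check on those numbers (and assuming the workload’s average IO size stays more or less constant across the window): Zone 1 works out to roughly 12 MB/s ÷ 250 IOPS ≈ 48 KB per IO, while Zone 3 works out to roughly 1 MB/s ÷ 25 IOPS ≈ 40 KB per IO. The IO size barely changes, so the collapse in throughput is really the collapse in IOPS: as the seek distance grows, the drive spends its time moving heads instead of transferring data.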