Unstructured Data, the Silent Killer of Your IT Budget: How are you stopping the bleeding?


Like most organizations, you are probably hosting your unstructured data on traditional NAS platforms. The days of storing this data on legacy systems are coming to an end. Let’s look at some of the drawbacks that plague traditional NAS:

  • Expensive to scale
  • Proprietary data protection – third-party backup software is needed to catalog and index
  • Inability to federate your data between disparate storage platforms onsite or in the cloud
  • High file counts, which can cripple performance, increase backup windows, and require additional flash technology for metadata management
  • File count limitations
  • High “per-TB” cost
  • Some platforms are complex to administer
  • High maintenance costs after Year 3


Performance Tuning Citrix XenApp and XenDesktop 7.6 Part I: Citrix CloudBridge


As companies invest in Citrix XenApp and XenDesktop, they want a fast, reliable, and secure solution. In this article, we will focus on WAN optimization and performance using the Citrix CloudBridge technologies.

Citrix CloudBridge is a unified platform used to accelerate applications across public and private networks, improving performance and the user experience. CloudBridge offers ICA protocol acceleration, QoS, optimization, and security for XenApp and XenDesktop. It is an optimal solution for remote or branch offices that have WAN performance issues. CloudBridge also offers extensive monitoring and reporting features, which help IT staff performance-tune any Citrix environment.

RecoverPoint 4.0 Review: Huge Improvements On An Already Great Product



I’m sure that most people reading this already know at least a little bit about how RecoverPoint works, and probably even know about some of the new features in 4.0. I’ll do a short review of how it works, and then dive into a review of the new features.

RecoverPoint Replication: A Refresher

For those familiar with different replication technologies, but not RecoverPoint: let me just say that it is, in my humble opinion, the best replication product on the market right now for block data. This doesn’t just go for EMC arrays; RecoverPoint can be used with any supported array (HDS, NetApp, IBM, 3PAR, etc.) behind an EMC VPLEX.

Prior to RecoverPoint, you would need to use either MirrorView or SANCopy to replicate data between EMC CLARiiON arrays, and SRDF for a Symmetrix/VMAX. These technologies are comparable to other vendors’ current replication offerings. Most replication technologies can run synchronously or asynchronously, and the same goes for RecoverPoint. The big difference is in the rollback capability: other technologies require the use of clones and/or snapshots to be able to recover from more than one point in time.

The image below shows an example of the difference between RecoverPoint, backups, and snapshots: with RecoverPoint you can recover from almost any point in time, versus very few recovery points with snapshots or backups. EMC commonly refers to this as “DVR-like functionality.”

[Image: RecoverPoint recovery points compared to snapshot and backup recovery points]

The other feature worth discussing in any replication product is how you test your DR copies, so you can be sure your failover will work if you ever need it. With RecoverPoint, testing a copy is a simple point-and-click operation in the GUI (you can use the CLI if you really want to).
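The “DVR-like” idea above can be sketched in a few lines. This is a toy model with made-up timestamps, not RecoverPoint’s actual journal format; it just shows why a continuous write journal gives a far better worst-case recovery point than periodic snapshots:

```python
# Toy model: journal-based recovery ("DVR-like") vs. periodic snapshots.
# Timestamps are minutes since midnight; all numbers are illustrative.

def nearest_recovery_point(points, target):
    """Return the latest recovery point at or before the target time."""
    candidates = [p for p in points if p <= target]
    return max(candidates) if candidates else None

# Snapshots taken every 4 hours.
snapshots = [0, 240, 480, 720]

# A write journal records (close to) every change, so recovery points are
# effectively continuous -- here, one per minute.
journal = list(range(0, 721))

failure_time = 700  # disaster strikes at 11:40

snap_rp = nearest_recovery_point(snapshots, failure_time)
journal_rp = nearest_recovery_point(journal, failure_time)

print(failure_time - snap_rp, failure_time - journal_rp)  # 220 0
# Snapshots lose up to 220 minutes of data; the journal loses almost none.
```

The same logic explains the image above: the snapshot schedule fixes your possible recovery points in advance, while the journal lets you pick nearly any point in time after the fact.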

RecoverPoint 4.0: Review of the New Features

A Completely New GUI

RecoverPoint has changed from an application to a web-based client. From my experience, it isn’t quite as intuitive as the old version. A screenshot of the new GUI is below.

[Image: the new RecoverPoint 4.0 web-based GUI]

Deeper Integration with VMware Site Recovery Manager

You can now test or fail over to any point in time. This is a huge change: previously, SRM could only use the latest copy, so RecoverPoint’s main advantage (almost any point in time) was lost when integrated with SRM.

Virtual RPAs

These are virtual machines running the RecoverPoint software. It sounds like a really neat idea, but it is very limited in functionality. The two biggest limitations: vRPAs are only available with iSCSI (hosts can still be connected via FC, but be careful, as EMC doesn’t support the same LUN being accessed by both FC and iSCSI at the same time), and only with RP/SE, the VNX-only license variant of RecoverPoint. The performance of the vRPAs also depends on the amount of resources you give them.

Synchronous Replication Over IP

If you have a fast enough IP WAN connection, you can now use synchronous mode over IP. The benefit is obvious: the data on your DR array is exactly the same as on your production array. All of the usual considerations with synchronous replication still apply; the added round-trip latency may cause a noticeable performance impact for clients.
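A quick back-of-the-envelope sketch of that latency consideration: in synchronous mode, a write is only acknowledged after the remote site confirms it, so every write pays at least one WAN round trip on top of the local service time. The numbers below are illustrative, not measured:

```python
# Back-of-the-envelope: write latency impact of synchronous replication.
# All figures are illustrative assumptions, not vendor measurements.

def sync_write_latency_ms(local_ms, rtt_ms):
    """A synchronous write completes only after the remote ack: add one RTT."""
    return local_ms + rtt_ms

local = 1.0                      # local array write service time (ms)
for rtt in (1, 5, 20):           # candidate WAN round-trip times (ms)
    print(rtt, sync_write_latency_ms(local, rtt))
# A 20 ms RTT turns a 1 ms write into a 21 ms write -- an increase hosts
# will notice, which is why synchronous over IP needs a fast, short link.
```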

Centralized Target

You can now have up to four sites replicating to a single site. This is a huge change, as it minimizes the cost and hardware requirements of protecting multiple sites. Prior to RecoverPoint 4.0, you would have needed four different RecoverPoint clusters, each with its own set of RPAs, to accomplish what a single cluster can do now.

Multiple Targets

You can also replicate a single source to up to four targets. I don’t see this as being quite as impactful as replicating to a centralized target, but it depends on how many copies of your data you want and how many sites you want protected against failure.


Supported Splitters

Not really a new feature; more of a kick in the pants to anyone that used switch-based splitting (and those that had to learn how to install and support it). Switch-based splitters aren’t supported in RecoverPoint 4.0. Your options now are the VMAX, VNX/CX, and VPLEX splitters.


Licensing

Not really a new feature either, but it is important to know the differences between the versions. If you plan on using multiple-site replication, you will need RecoverPoint/EX or RecoverPoint/CL licensing.

There are some more new features, as well as performance and limitation enhancements, but the above list includes most of the big changes.

Choosing the Best Replication with VMware vCenter Site Recovery Manager: vSphere vs. Array-based


I recently had the opportunity to implement VMware vCenter Site Recovery Manager (SRM) in three different environments using two different replication technologies (vSphere Replication and Array-based Replication). The setup and configuration of the SRM software is straightforward; the differences come into play when deciding which replication option best fits your business needs.

vSphere Replication

vSphere Replication is built into SRM 5.0 and is included no matter what replication technology you decide to use. With vSphere Replication, you do not need costly identical storage arrays at both sites, because the replication is managed through vCenter. Managing through vCenter also gives you more flexibility in choosing which VMs are protected: VMs can be protected individually, as opposed to at the VMFS datastore level. vSphere Replication is deployed and managed by virtual appliances installed at both sites; replication is then handled by the ESXi hosts, with the assistance of the virtual appliances. vSphere Replication supports RPOs as low as 15 minutes.

vSphere Replication Benefits:

  • No need for costly storage arrays at both sites
  • More flexibility in choosing which VMs are protected (can do so individually)

Array-based Replication

The two Array-based Replication technologies that I implemented were EMC MirrorView and EMC SRDF (on Symmetrix). Both of these tie into SRM using a storage replication adapter (SRA). The SRA is a program, provided by the array vendor, that allows SRM access to the array. Configuration of replication is done outside of vCenter, at the array level. Unlike vSphere Replication, Array-based Replication requires you to protect an entire VMFS datastore or LUN, as opposed to individual VMs. One of the biggest benefits of Array-based Replication is its ability to provide automated re-protection of the VMs and near-zero RPOs.

Array-based Replication Benefits:

  • Automated re-protection of VMs
  • Near-zero RPOs

Final Thoughts

VMware vCenter Site Recovery Manager gives you the disaster recovery management that is highly sought after in today’s market, allowing you to perform planned migrations, automated failover and failback, and non-disruptive testing.

Photo credit: adamhenning via Flickr

Faster and Easier: Cloud-based Disaster Recovery Using Zerto


Is your Disaster Recovery/Business Continuity plan ready for the cloud? Remember the days when implementing DR/BC meant having identical storage infrastructure at the remote site? The capital costs were outrageous! Plus, the products could be complex and time-consuming to set up.

Virtualization has changed the way we view DR/BC. Today, it’s faster and easier than ever to setup. Zerto allows us to implement replication at the hypervisor layer. It is purpose built for virtual environments. The best part: it’s a software-only solution that is array agnostic and enterprise class. What does that mean? Gone are the days of having an identical storage infrastructure at the DR site. Instead, you replicate to your favorite storage—it doesn’t matter what you have. It allows you to reduce hardware costs by leveraging existing or lower-cost storage at the replication site.

[Image: Zerto replication architecture diagram]

How does it work? You install the Zerto Virtual Manager on a Windows server at the primary and remote sites. Once installed, the rest of the configuration is completed through the Zerto tab in VMware vCenter. Simply select the Virtual Machines you want to protect and that’s about it. It supports fully automated failover and failback and the ability to test failover, while still protecting the production environment. Customers are able to achieve RTOs of minutes and RPOs of seconds through continuous replication and journal-based, point-in-time recovery.

Not only does Zerto protect your data, it also provides complete application protection and recovery through virtual protection groups.

Application protection:

  • Fully supports VMware VMotion, Storage VMotion, DRS, and HA
  • Journal-based point-in-time protection
  • Group policy and configuration
  • VSS Support

Don’t have a replication site? No problem. You can easily replicate your VMs to a cloud provider and spin them up in the event of a disaster.

Photo credit: josephacote on Flickr

Networking & The Importance Of VLANs


We have become familiar with the term VLANs when talking about networking. Some people cringe and worry when they hear “VLAN”, while others rejoice and relish the idea. I used to be in the camp that cringed and worried – only because I did not have some basic knowledge about VLANs.

So let’s start with the basics: what is a VLAN? 

VLAN stands for Virtual Local Area Network, and a VLAN has the same characteristics and attributes as a physical Local Area Network (LAN). A VLAN is a separate IP sub-network, which allows multiple networks and subnets to reside on the same switched network (a service typically provided by routers). A VLAN essentially becomes its own broadcast domain. VLANs can be structured by department, function, or protocol, allowing for a finer level of granularity. VLANs are defined on the switch by individual ports; this allows VLANs to be placed on specific ports to restrict access.

A VLAN cannot communicate directly with another VLAN, which is by design. If VLANs are required to communicate with one another, a router or layer 3 switching is required. VLANs are capable of spanning multiple switches, and you can have more than one VLAN on multiple switches. For the most part, VLANs are relatively easy to create and manage. Most switches allow VLAN creation via Telnet and, increasingly popular, GUI interfaces.
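To make this concrete, here is a minimal Cisco IOS-style configuration sketch. The VLAN IDs, names, ports, and addresses are all hypothetical examples, not a recommendation for any particular environment:

```
! Two VLANs on one switch, each its own broadcast domain.
vlan 10
 name HR
vlan 20
 name ACCOUNTING
!
! Assign access ports to a VLAN -- this is how ports restrict access.
interface FastEthernet0/1
 switchport mode access
 switchport access vlan 10
interface FastEthernet0/2
 switchport mode access
 switchport access vlan 20
!
! By design, VLAN 10 and VLAN 20 cannot talk to each other without layer 3.
! On a layer 3 switch, a switch virtual interface (SVI) per VLAN routes
! between them:
interface Vlan10
 ip address 192.168.10.1 255.255.255.0
interface Vlan20
 ip address 192.168.20.1 255.255.255.0
```

On a layer 2-only switch, the SVI section would instead be handled by an external router (the classic “router on a stick” setup).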

VLANs can address many issues, such as:

  1. Security – Security is an important function of VLANs. A VLAN separates data that could be sensitive from the general network, allowing sensitive or confidential data to traverse the network with a decreased chance that users will gain access to data they are not authorized to see. Example: an HR department’s computers/nodes can be placed in one VLAN and an Accounting department’s in another, keeping the two departments’ traffic completely separate. The same principle can be applied to protocols such as NFS, CIFS, replication, VMware (vMotion), and management.
  2. Cost – Cost savings can be seen by eliminating the need for additional expensive network equipment. VLANs will also allow the network to work more efficiently and command better use of bandwidth and resources.
  3. Performance – Splitting up a switch into VLANs allows for multiple broadcast domains which reduces unnecessary traffic on the network and increases network performance.
  4. Management – VLANs allow for flexibility with the current infrastructure and for simplified administration of multiple network segments within one switching environment.

VLANs are a great resource and tool for fine-tuning your network. Don’t be afraid of VLANs; embrace them for the many benefits they can bring to your infrastructure.

Photo Credit: ivanx

Protecting Exchange 2010 with EMC RecoverPoint and Replication Manager


Regular database backups of Microsoft Exchange environments are critical to maintaining the health and stability of the databases. Performing full backups of Exchange provides a database integrity checkpoint and commits transaction logs. There are many tools which can be leveraged to protect Microsoft Exchange environments, but one of the key challenges with traditional backups is the length of time that it takes to back up prior to committing the transaction logs.

Additionally, database integrity should always be checked prior to backing up, to ensure the data being backed up is valid. This extended time can often interfere with daily activities, so backups usually must be scheduled around other maintenance activities, such as daily defragmentation. What if you could eliminate the backup window entirely?

EMC RecoverPoint, in conjunction with EMC Replication Manager, can create application-consistent replicas with next to zero impact, which can be used for staging to tape, direct recovery, or object-level recovery with Recovery Storage Groups or third-party applications. These replicas leverage Microsoft VSS technology to freeze the database, RecoverPoint bookmark technology to mark the image time in the journal volume, and then thaw the database, all in less than thirty seconds and often in less than five.

EMC Replication Manager is aware of all of the database server roles in the Microsoft Exchange 2010 Database Availability Group (DAG) infrastructure and can leverage any of the members (Primary, Local Replica, or Remote Replica) to be a replication source.

EMC Replication Manager automatically mounts the bookmarked replica images to a mount host running the Microsoft Exchange tools role and the EMC Replication Manager agent. The database and transaction logs are then verified using the eseutil utility provided with the Microsoft Exchange tools. This ensures that the replica is a valid, recoverable copy of the database. The validation of the databases can take from a few minutes to several hours, depending on the number and size of databases and transaction log files. The key is that the load from this process does not impact the production database servers. Once the verification completes, EMC Replication Manager calls back to the production database to commit and delete the transaction logs.

Once the Microsoft Exchange database and transaction logs are validated, the files can be spun off to tape from the mount host, or, depending on the retention requirement, you could eliminate tape backups of the Microsoft Exchange environment completely. Depending on the write load on the Microsoft Exchange server and how large the RecoverPoint journal volumes are, you can maintain days or even weeks of retention/recovery images in a fairly small footprint, as compared to disk- or tape-based backup.
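The retention-versus-journal-size trade-off above can be roughed out with simple arithmetic. This is an illustrative sketch, not EMC sizing guidance: the formula, the 80% usable-fraction assumption, and all the numbers are mine:

```python
# Rough sketch: how many days of point-in-time history a journal can hold,
# assuming every production write is also logged to the journal.
# Illustrative only -- not a vendor sizing formula.

def retention_days(journal_gb, write_rate_mb_per_s, usable_fraction=0.8):
    """Days of history for a journal of the given size at a sustained
    write rate, reserving some journal space for metadata/overhead."""
    daily_writes_gb = write_rate_mb_per_s * 86400 / 1024  # MB/s -> GB/day
    return journal_gb * usable_fraction / daily_writes_gb

# A 500 GB journal with a sustained 1 MB/s Exchange write load:
print(round(retention_days(500, 1.0), 1))  # about 4.7 days
```

The point is the shape of the relationship: retention scales linearly with journal size and inversely with write rate, which is why a modest journal can hold days of recovery images for a moderately loaded server.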

There are a number of recovery scenarios available from a solution based on RecoverPoint and Replication Manager. The images can be reverse-synchronized to the source; this is a fast delta-based copy, but it is data destructive. Alternatively, the database files could be copied from the mount host to a new drive and mounted as a Recovery Storage Group on the Microsoft Exchange server. The database and log files can also be opened on the mount host directly with tools such as Kroll OnTrack for mailbox- and message-level recovery.

Photo Credit: pinoldy

How To: Replicating VMware NFS Datastores With VNX Replicator


To follow up on my last blog regarding NFS Datastores, I will be addressing how to replicate VMware NFS Datastores with VNX Replicator. Because NFS Datastores exist on VNX file systems, the NFS Datastores can be replicated to an off-site VNX over a WAN.

Leveraging VNX Replicator allows you to use your existing WAN link to sync file systems with other VNX arrays. VNX only requires you to enable the Replication license on the offsite VNX and use your existing WAN link. There is no additional hardware other than the replicating VNX arrays and the WAN link.

VNX Replicator leverages checkpoints (snapshots) to record any changes made to the file systems. Once changes are made to the file system, the replication checkpoints initiate writes to the target, keeping the file systems in sync.
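The checkpoint-and-delta idea can be sketched as follows. This is a simplified toy model, not VNX’s actual on-wire protocol: the point is that after an initial full copy, only blocks changed since the last checkpoint need to cross the WAN:

```python
# Toy model of checkpoint-based delta replication: only blocks changed
# since the last checkpoint are sent to the target. Simplified sketch,
# not the actual VNX Replicator protocol.

def take_checkpoint(fs):
    """A checkpoint is a point-in-time copy of the file system's blocks."""
    return dict(fs)

def delta_sync(source, last_checkpoint, target):
    """Send only blocks that changed since the last checkpoint."""
    changed = {blk: data for blk, data in source.items()
               if last_checkpoint.get(blk) != data}
    target.update(changed)
    return len(changed)  # number of blocks sent over the "WAN"

source = {0: "a", 1: "b", 2: "c"}
target = dict(source)            # initial full copy to the remote VNX
ckpt = take_checkpoint(source)

source[1] = "B"                  # one block changes on the source
sent = delta_sync(source, ckpt, target)
print(sent, target == source)    # 1 True
```

Only one block crossed the link even though the file system holds three, which is why checkpoint-based replication can keep large file systems in sync over a modest WAN.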

Leveraging Replicator with VMware NFS datastores creates a highly available virtual environment that keeps your NFS datastores in sync and available remotely whenever needed. VNX Replicator allows a maximum of ten minutes of “out-of-sync” time. Depending on WAN bandwidth and availability, your NFS datastores can be restored to within ten minutes of the point of failure.

The actual NFS failover process can be very time-consuming: once you initiate the failover, you still have to mount the datastores on the target virtual environment and add each VM into the inventory. When you finally have all of the VMs loaded, you must then configure the networking.

Fortunately, VMware Site Recovery Manager (SRM) has a plug-in that automates the entire process. Once you have configured the policies for failover, SRM will mount all the NFS stores and bring the virtual environment online. These are just a few of the VNX Replicator features that can integrate with your systems; if you are looking for a deeper dive or other creative replication solutions, contact me.

Photo Credit: hisperati

To Snapshot Or Not To Snapshot? That Is The Question When Leveraging VNX Unified File Systems


For those of you who are leveraging VNX Unified File systems, were you aware that you have the ability to checkpoint your file systems?

If you don’t know what checkpoints are: a checkpoint is a point-in-time copy of your file system. The VNX gives you the ability to automate the checkpoint process. Checkpoints can run every hour, or at any designated interval, and can be kept for whatever length of time is necessary (assuming, of course, that your data center has enough space available in the file system).

Checkpoints by default are read-only and are used to revert files, directories, and/or the entire file system to a single point in time. However, you can create writable checkpoints, which allow you to snap a file system, export it, and test actual production data without affecting front-end production.

VNX checkpoints also leverage Microsoft VSS, allowing users to restore their files to previous points created by the VNX. With this integration you can let users restore their own files and avoid the usual calls from users who have accidentally corrupted or deleted them. Yet there are some concerns about how big checkpoints can get. The VNX will dynamically grow the checkpoints based on how long you need them and how many you take on a daily basis. Typically a checkpoint will consume at most 20% of the file system size, and even that percentage depends on how much data you have and how frequently the data changes.
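A quick sketch of why checkpoint space tracks change rate rather than file system size. The linear model and all the numbers below are illustrative assumptions of mine, not VNX internals; real copy-on-write accounting is more nuanced:

```python
# Rough model: copy-on-write checkpoints preserve each changed block for
# every retained point in time, so space grows with change rate and
# retention. Illustrative assumptions only.

def checkpoint_space_gb(fs_gb, daily_change_fraction, days_retained):
    """Estimated space consumed by checkpoints kept for the given period."""
    return fs_gb * daily_change_fraction * days_retained

fs_size = 1000                                   # 1 TB file system
space = checkpoint_space_gb(fs_size, 0.02, 7)    # 2%/day change, 7 days kept
print(space, space / fs_size)                    # 140.0 0.14
# Even a week of hourly-style retention stays under the ~20% rule of thumb
# when the daily change rate is modest.
```

A file system with a much higher change rate, or much longer retention, is exactly the case where the 20% figure stops holding, which matches the caveat above.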

For file systems that are larger than 16 TB, achieving successful backups can be a difficult task. With NDMP (Network Data Management Protocol) integration you are able to back up the checkpoints and store just the changes instead of the entire file system.

Take note that replicating file systems to other VNX arrays will carry your checkpoints over, giving you an off-site copy of the checkpoints made on the production file system. Backups of larger file systems can become an extremely difficult and time-consuming job; by leveraging VNX Replicator and checkpoints, you gain the ability to manage the availability of your data from any point in time you choose.

Photo Credit: Irargerich

Fun With DART’s & Replication Between A New VNXe And A Celerra NS-120


A couple of weeks ago I had some fun configuring a new VNXe to replicate with a Celerra NS-120. Here is a transcript of how I got it to work and some of the oddities I encountered:

1. I started out by checking the E-Lab Support Matrix (ESM), which shows the configuration is supported as long as the Celerra is running DART 6.0. I then upgraded the NS20 (CX3-10 back-end running the latest [and last] R26 code) to 6.0.41.

2. Moving along, I set up the interconnects using the “Unisphere” Hosts Replication Connection wizard. I validated the interconnects using nas_cel -interconnect -list on the NS20.

3. I had some initial issues with routing that were quickly resolved and the interconnects looked good to go.

4. This is where it gets dicey: I started out using the wizard on the NS20 Unisphere to replicate a filesystem. Apparently, the NS20 can’t set up replication destination storage and doesn’t seem to be able to enumerate/read the remote filesystem names.

I was able to see a list of remote filesystem IDs, though, so this started me thinking: what if I could log in to the remote “Celerra” (read: DART instance on the VNXe) to decode which filesystem correlated to which ID, i.e. run nas_fs -list?

I tried SSH’ing to the VNXe and saw that SSH was shut down, so I started poking around in the service options and realized that I could enable SSH. I did that, SSH’ed to the box, and logged in as “service”, because admin didn’t work. From there, I SU’d to nasadmin and was prompted for the root password. I tried nasadmin, the service password, and a couple other passwords I knew of, but it timed out after three tries. However, I was in a nasadmin context, so I ran the nas_fs -list command and got precisely what I was looking for: the list of filesystem IDs to filesystem names.

5. Time services: for replication to work, the datamovers’ clocks have to be within ten minutes (preferably five) of each other. I thought I would proactively double-check and set NTP on the VNXe “server_2”; however, I was shut down, because that requires root permissions. Luckily the time was pretty close, so I was good there (NOTICE: the datamover was set to UTC, probably by design, but this required conversion to local time).

6. By this time I realized that using the Celerra Manager/Unisphere wizards was not likely to work, so I logged on to the NS20 and ran the nas_replicate -create FS_Name -source -fs -id=552 -destination -fs id=30 -interconnect id=20003 -max_time_out_of_sync 10 command, which errored out after a few minutes.

I did some digging on iView and found the primus article emc263903, which referenced logging in as root to run the command. OK, I have the NS20 root password, so I did that and got the error message “destination filesystem not read-only”. I had created the “Shared Folder” (otherwise known to us old-timers as a “File System”) as a replication target. Don’t you think that if you are creating a replication target, the wizard would mount it as read-only?

7. OK, back on the VNXe through SSH as nasadmin: I run server_unmount and am prompted for the root password again; it errors out three times, and then I check: it’s unmounted! I run server_mount with -o ro, get prompted for the root password, and error out three more times.

8. Back on the NS20, I re-run the nas_replicate command and it errors again, this time with a “destination not empty” message. I used the -overwrite option, because when I provisioned the destination filesystem, the minimum size that the wizard presented for the save was the same size as the destination filesystem …

Finally, success: the filesystem is replicating!
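To recap, here is the sequence that finally worked, pieced together from the steps above. The filesystem and interconnect IDs are specific to this environment, the elided arguments are left as placeholders, and the command spellings are as typed in the transcript, so check the DART man pages before copying any of this:

```
# On the VNXe (service SSH session, nasadmin context):
nas_fs -list                    # map remote filesystem IDs to names
server_unmount ...              # unmount the destination filesystem
server_mount ... -o ro ...      # remount it read-only

# On the NS20, logged in as root (per primus article emc263903):
nas_replicate -create FS_Name -source -fs -id=552 -destination -fs id=30 \
  -interconnect id=20003 -max_time_out_of_sync 10 -overwrite
```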

Photo Credit: calypso_dayz