Category: Backup

How To: Replicating VMware NFS Datastores With VNX Replicator

By | Backup, How To, Replication, Virtualization, VMware | No Comments

To follow up on my last blog regarding NFS Datastores, I will be addressing how to replicate VMware NFS Datastores with VNX Replicator. Because NFS Datastores live on VNX file systems, they can be replicated to an off-site VNX over a WAN.

Leveraging VNX Replicator allows you to use your existing WAN link to sync file systems with other VNX arrays. All that is required is enabling the Replication license on the off-site VNX and using your existing WAN link; there is no additional hardware other than the replicating VNX arrays and the WAN link.

VNX Replicator leverages checkpoints (snapshots) to record any changes made to the file systems. Once changes are made to a file system, the replication checkpoint initiates writes to the target, keeping the file systems in sync.

Leveraging Replicator with VMware NFS Datastores creates a highly available virtual environment that keeps your NFS Datastores in sync and available remotely whenever needed. VNX Replicator allows a maximum of ten minutes of “out-of-sync” time, so depending on WAN bandwidth and availability, your NFS Datastores can be restored to within ten minutes of the point of failure.
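To put that ten-minute window in perspective, here is a minimal back-of-the-envelope sketch in Python. It is not an EMC sizing tool; the WAN speed, efficiency factor, and change rate are all invented placeholders you would swap for your own numbers.

```python
# Back-of-the-envelope check: can a WAN link keep up with the file system's
# change rate so that the replica stays within a 10-minute out-of-sync window?
# All numbers are hypothetical; real sizing should come from the array's stats.

def usable_throughput_mb_per_min(wan_mbps: float, efficiency: float = 0.7) -> float:
    """Usable replication throughput, assuming only part of the link is free."""
    return (wan_mbps * efficiency) / 8.0 * 60.0   # Mb/s -> MB/min

def data_at_risk_mb(change_mb_per_min: float, rpo_minutes: float = 10.0) -> float:
    """Worst-case data written (and not yet replicated) inside the RPO window."""
    return change_mb_per_min * rpo_minutes

if __name__ == "__main__":
    wan_mbps = 100.0           # hypothetical 100 Mb/s WAN link
    change_mb_per_min = 300.0  # hypothetical file-system change rate

    drain = usable_throughput_mb_per_min(wan_mbps)
    print(f"Link drains ~{drain:.0f} MB/min; changes arrive at {change_mb_per_min} MB/min")
    print(f"Keeps up with 10-minute window: {change_mb_per_min <= drain}")
    print(f"Maximum data at risk: ~{data_at_risk_mb(change_mb_per_min):.0f} MB")
```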

The actual NFS failover process can be very time consuming: once you initiate the failover you still have to mount the datastore on the target virtual environment and add each VM into the inventory. When you finally have all of the VMs loaded, you must then configure the networking.
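If you were to script those manual steps yourself, it might look something like the rough pyVmomi sketch below. This is my own illustration, not part of the VNX or SRM workflow; the vCenter name, credentials, export path, VMX paths, and the simplistic inventory lookups are all placeholders.

```python
# Hypothetical sketch of the manual failover steps using pyVmomi:
# mount the replicated NFS export on a recovery ESXi host, then register
# each VM from the datastore. Names, paths, and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter-dr.example.com", user="administrator",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Assume the first datacenter and host are the recovery targets (placeholder logic).
datacenter = content.rootFolder.childEntity[0]
compute = datacenter.hostFolder.childEntity[0]
host = compute.host[0]

# 1. Mount the replicated NFS file system as a datastore on the DR host.
spec = vim.host.NasVolume.Specification(
    remoteHost="vnx-dr.example.com",   # DR-side Data Mover interface (placeholder)
    remotePath="/nfs_datastore01",     # replicated file system export (placeholder)
    localPath="nfs_datastore01",       # datastore name as seen by ESXi
    accessMode="readWrite",
)
host.configManager.datastoreSystem.CreateNasDatastore(spec)

# 2. Register each VM found on the datastore back into inventory.
vmx_paths = ["[nfs_datastore01] vm01/vm01.vmx",
             "[nfs_datastore01] vm02/vm02.vmx"]   # placeholder list
for vmx in vmx_paths:
    datacenter.vmFolder.RegisterVM_Task(path=vmx, asTemplate=False,
                                        pool=compute.resourcePool, host=host)
# Networking (port group mappings, IP changes) still has to be fixed up per VM.

Disconnect(si)
```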

Fortunately, VMware Site Recovery Manager (SRM) has a plug-in that automates the entire process. Once you have configured the failover policies, SRM will mount all the NFS datastores and bring the virtual environment online. These are just a few of the ways VNX Replicator can integrate with your systems; if you are looking for a deeper dive or other creative replication solutions, contact me.

Photo Credit: hisperati

My Personal Journey To The Cloud: From Angry Birds to Business Critical Applications

By | Backup, Cloud Computing, Security, Storage, Virtualization | No Comments

Thinking back on it, I can very specifically remember when I started to really care about “The Cloud” and how drastically it has changed my current way of thinking about any services that are provided to me. Personally, the moment of clarity on cloud came shortly after I got both my iPhone and iPad and was becoming engrossed in the plethora of applications available to me. Everything from file sharing and trip planning to Angry Birds and Words with Friends … I was overwhelmed with the amount of things I could accomplish from my new mobile devices and how less dependent I was becoming on my physical location, or the specific device I was using, but completely dependent on the applications that I used on a day-to-day basis. Now I don’t care if I’m on my iPad at the beach or at home on my computer as long as I can access applications like TripIt or Dropbox because I know my information will be there regardless of my location.

As I became more used to this concept, I quickly became an application snob and wouldn’t consider any application that wouldn’t allow me cross-platform access to use from many (or all) of my devices. What good is storing my information in an application on my iPhone if I can’t access it from my iPad or home computer? As this concept was ingrained, I became intolerant of any applications that wouldn’t sync without my manual interaction. If I had to sync via a cable or a third party service, it was too inconvenient and would render the application useless to me in most cases. I needed applications that would make all connectivity and access magically happen behind the scenes, while providing me with the most seamless and simplistic user interface possible. Without even knowing it, I had become addicted to the cloud.

Cloud takes the emphasis away from infrastructure and puts it back where it should be: on the application. Do I, as a consumer, have anything to benefit from creating a grand infrastructure at home where my PC, iPhone, iPad, Android phone, and Mac can talk to one another? I could certainly develop some sort of complex scheme with a network of sync cables and custom-written software to interface between all of these different devices …

But how would I manage it? How would I maintain it as the devices and applications change? How would I ensure redundancy in all of the pieces so that a hardware or software failure wouldn’t take down the infrastructure that would become critical to my day-to-day activities? And how would I fund this venture?

I don’t want to worry about all of those things. I want a service … or a utility. I want something I can turn on and off and pay for only when I use it. I want someone else to maintain it for me and provide me SLAs so I don’t have to worry about the logistics on the backend. Very quickly I became a paying customer of Hulu, Netflix, Evernote, Dropbox, TripIt, LinkedIn, and a variety of other service providers. They provide me with the applications I require to solve the needs I have on a day-to-day basis. The beautiful part is that I don’t ever have to worry about anything but the application and the information that I put into it. Everything else is taken care of for me as part of a monthly or annual fee. I’m now free to access my data from anywhere, anytime, from any device and focus on what really matters to me.

If you think about it, this concept isn’t at all foreign to the business world. How many businesses out there really make their money from creating a sophisticated backend infrastructure and mechanisms for accessing that infrastructure? Sure, there are high-frequency trading firms and service providers that actually do make their money based on this. But the majority of businesses today run complex and expensive infrastructures simply because that is what their predecessors have handed down to them and they have no choice but to maintain it.

Why not shift that mindset and start considering a service or utility-based model? Why spend millions of dollars building a new state-of-the-art data center when they already exist all over the world and you can leverage them for an annual fee? Why not spend your time developing your applications and intellectual property, which are far more likely to be the secret to your company’s success and profitability, and let someone else deal with the logistics of the backend?

This is what the cloud means to business right now. Is it perfect for everyone? Not even close. And unfortunately the industry is full of misleading cloud references, because it is the biggest buzzword since “virtualization” and everyone wants to ride the wave. Providing a cloud for businesses is a very complex undertaking and requires a tremendous amount of strategy, vision, and security to be successful. If I’m TripIt and I lose your travel information while you’re leveraging my free service, do you really have a right to complain? If you’re an insurance company and you pay me thousands of dollars per month to securely house your customer records and I lose some of them, that’s a whole different ballgame. And unfortunately there have been so many instances of downtime, lost data, and leaked personal information that the cloud’s image is shifting from a white fluffy cloud surrounded by sunshine to an ominous gray cloud that brings bad weather and destruction.

The focus of my next few blogs will be on the realities of the cloud concept and how to sort through the myth and get to reality. There is a lot of good and bad out there and I want to highlight both so that you can make more informed decisions on where to use the cloud concept both personally and professionally to help you achieve more with less…because that’s what the whole concept is about. Do more by spending less money, with less effort, and less time.

I will be speaking on this topic at an exclusive breakfast seminar this month … to reserve your space please contact Shannon Nelson: snelson@idstorage.com.

Picture Credit: Shannon Nelson

The Future Of Cloud: Managing Your Data Without Managing Your Data

By | Backup, Cloud Computing, Disaster Recovery, How To | No Comments

The catch phrase of the last few years has been “The Cloud”. What REALLY is the cloud? By the consumer’s definition, it is when I buy a video on Amazon and magically it is available to me anywhere I go. The video is then up in the ambiguous cloud. I don’t know what the hardware or software is, or even whether the data is in the same country as me. I just know it’s there, and I sleep at night knowing that my investment is protected (I buy a lot of movies). There’s so much more to it than that, and it is time that businesses begin to leverage the power of the cloud.

How can the cloud be applied to the business? In tough economic times the common saying is “Do more with less”. Let’s face it: even in the best of times no one is going to walk up to the IT Director or CIO and say, “Here you go, more money!” Instead it is a constant battle of doing more with less, and in many instances we in the field are just trying to keep our heads above water. CEOs and department heads want all of their data protected, available, and accessible at any time, usually on a budget that frankly cannot cover all of the expenses. To plan a typical disaster recovery solution, a number of factors have to be considered:

  1. Where will the datacenter be?
  2. How much will rack space, power, and cooling cost?
  3. How many and what products do we need to install?
  4. How will we manage it?
  5. How will we connect to it and maintain redundancy?
  6. Who will manage it?
  7. Do we need to hire extra staff to manage it?

That’s just a sample of the questions needed to even start the project. It will also take months, maybe a year, to design and implement the solution, and it will be very costly. This is where the cloud comes in. All of the resources you need are already available, protected, and scalable. Need more data storage? No problem. Need more compute power? We have that ready too. All it takes is an email. Really, who wants to manage physical servers anyway? It’s time to start looking at data, memory, and computing as resources rather than capital investments.

Beyond this, what is to stop you from running your entire infrastructure in the cloud? Why not pay for your infrastructure the same way you pay the company phone bill? Here is where managed cloud services come into play: rather than importing more costs into your datacenter, you export that time and money to a managed services provider at a fraction of the cost. IDS is ready, willing and able, just a click away.
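As a purely illustrative comparison, the arithmetic behind that shift looks something like the sketch below. Every figure in it is a made-up placeholder, not a quote or an IDS price; the point is only the shape of the math.

```python
# Illustrative only: compare a capital DR build-out against a monthly managed
# service over a few years. Every figure here is a made-up placeholder.

def diy_dr_cost(years: int) -> float:
    capital = 250_000        # storage, servers, network gear (hypothetical)
    annual_colo = 36_000     # rack space, power, and cooling
    annual_staff = 60_000    # fraction of an administrator's time
    return capital + years * (annual_colo + annual_staff)

def managed_dr_cost(years: int, monthly_fee: float = 8_000) -> float:
    return years * 12 * monthly_fee

for years in (1, 3, 5):
    print(f"{years} yr: DIY ${diy_dr_cost(years):,.0f} "
          f"vs managed ${managed_dr_cost(years):,.0f}")
```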

Photo Credit: Fractal Artist

Do You Learn From Data Breaches And Disasters Or Observe Them?

By | Backup, Disaster Recovery, Security | No Comments

How many articles or blog posts have you read that talked about the “lessons we learned” from 9/11, the Japanese earthquake/tsunami, the Joplin tornado, Hurricane Katrina, or <insert disastrous event here>? I see them all the time, and after reading a very interesting article in the Winter issue of the Disaster Recovery Journal (you may have to register to view the full article), I got to thinking about this concept.

What is the indication that we have learned something? The word learn has several definitions, but my favorite (thanks to dictionary.com) is this:

to gain (a habit, mannerism, etc.) by experience, exposure to example, or the like; acquire …

If you learn something, you gain a new habit or mannerism; in other words, you change something.

What does it mean to observe? Again, from dictionary.com

to regard with attention, especially so as to see or learn something …

Just notice the difference. Learning means to take action, observing means to watch so you can learn. This really hits home with me and how I talk to my customers, because we talk A LOT about all of the lessons we have learned from various disasters. I don’t think it’s just me, either. Do a Google search on the phrase “lessons learned from hurricane Katrina” and you get 495,000 hits. Do a search on “lessons learned from Japanese tsunami” and you get 2.64 million hits. This gets talked about A LOT.

But how much are we really learning? After Katrina, how many of you proactively, objectively assessed or had someone assess your ability to maintain a revenue stream if a debilitating disaster struck your center of operations, whatever your business is? How many of you looked at what happened in Japan, or in Joplin, MO, and said: if that happened to us, we’d be able to sustain our business and we aren’t just fooling ourselves?

Let’s put this in a less dramatic and more regularly occurring context. How many of you saw the completely insane events surrounding the breach of HBGary and actually DID SOMETHING to change behavior, or build new habits to ensure you didn’t suffer a similar fate? Many of us observed the event and were aghast at its simplicity of execution and the thoroughness with which information was exposed, but how many people actually changed the way their security is addressed and learned from the event? Have you looked at the ten-year breach at Nortel, or the data breach at Symantec, and set in motion a course of events in your own organization that will do everything possible to prevent similar issues?

These problems are not going away. They are becoming more and more prevalent, and they are not solely the problem of global Fortune 500 companies. Any organization that does any type of business has data that could potentially be useful for nefarious purposes in the wrong hands. It is our responsibility as stewards of the data to learn the lessons and take action to secure and protect our data as though it were our money, because it is.

Photo Credit: Cherice

To Snapshot Or Not To Snapshot? That Is The Question When Leveraging VNX Unified File Systems

By | Backup, Data Loss Prevention, Disaster Recovery, How To, Replication, Security, VMware | No Comments

For those of you who are leveraging VNX Unified File systems, were you aware that you have the ability to checkpoint your file systems?

If you aren’t familiar with checkpoints, they are point-in-time copies of your file system. The VNX gives you the ability to automate the checkpoint process. Checkpoints can run every hour, or at any interval you designate, and can be kept for whatever length of time is necessary (assuming, of course, that your data center has enough space available in the file system).

Checkpoints by default are read-only and are used to revert files, directories and/or the entire file system to a single point in time.  However, you can create writable checkpoints which allow you to snap an FS, export it, and test actual production data without affecting front-end production. 

VNX checkpoints also leverage Microsoft VSS, allowing users to restore their files to previous points created by the VNX. With this integration you can let users restore their own files and avoid the usual calls from users who have accidentally corrupted or deleted them. Still, there are some concerns about how big snapshots can get. The VNX will dynamically grow the checkpoints based on how long you need them and how many you take each day. Typically a checkpoint will consume at most about 20% of the file system size, and even that percentage depends on how much data you have and how frequently the data changes.
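For a rough sense of how that plays out, here is a simple estimate in Python. It is not an EMC sizing formula; the change rate, retention period, and the 20% ceiling are assumptions you would replace with your own numbers.

```python
# Rough checkpoint (snapshot) space estimate for a VNX file system.
# Inputs are hypothetical; real consumption depends on actual change rates.

def checkpoint_space_gb(fs_size_gb: float,
                        daily_change_pct: float,
                        retention_days: int,
                        ceiling_pct: float = 0.20) -> float:
    """Estimate space held by checkpoints: changed blocks are preserved for
    every retained day, capped at the rule-of-thumb ceiling (~20% of the FS)."""
    churn_gb = fs_size_gb * daily_change_pct * retention_days
    return min(churn_gb, fs_size_gb * ceiling_pct)

if __name__ == "__main__":
    fs_size = 2_000   # 2 TB file system (hypothetical)
    estimate = checkpoint_space_gb(fs_size, daily_change_pct=0.02, retention_days=7)
    print(f"Estimated checkpoint space: ~{estimate:.0f} GB "
          f"({estimate / fs_size:.0%} of the file system)")
```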

For file systems that are larger than 16TB, completing successful backups can be a difficult task. With NDMP (Network Data Management Protocol) integration you are able to back up the checkpoints and store just the changes instead of the entire file system.

Take note that replicating file systems to other VNX arrays will carry your checkpoints over, giving you an off-site copy of the checkpoints made on the production file system. Backups of larger file systems can become an extremely difficult and time-consuming job; by leveraging VNX Replicator and checkpoints you gain the ability to manage the availability of your data from any point in time you choose.

Photo Credit: Irargerich

Top 3 Security Resolutions For 2012: Moving Forward From “The Year Of The Breach”

By | Backup, Data Loss Prevention, Disaster Recovery, How To, Security | No Comments
I always feel a sense of renewal with the turn of the calendar. Many people use this time to set new goals for the new year and take the opportunity to get re-grounded and move toward accomplishing their goals. As I reflect on the security landscape in 2011, aptly named “The Year of the Breach”, I thought it would be a perfect time to make some resolutions for 2012 that everyone with any data to protect could benefit from.

 

1. Focus More on Security and Not Just on Compliance

On a day-to-day basis I speak to a wide range of companies and often see organizations that are so concerned about checking the box for compliance that they lose sight of actually minimizing risk and protecting data. Regardless of which regulation in the long list of alphabet soup (SOX, GLBA, PCI, HIPAA) applies to you, maintaining compliance is a daunting task.
 
As a security practitioner, limiting the business’s exposure has always been my key concern. How can I enable the business while also minimizing risk? With this mindset, compliance helps ensure that I am doing my due diligence, and that all of my documentation is in order to prove it, keeping our customers and stakeholders happy and protected.
 
2. Ready Yourself for Mobile Device Explosion
 
The iPad is a pretty cool device. I’m no Apple fanboy by any stretch, but this tablet perfectly bridges the gap between my smartphone and my laptop. I am not the only one seeing these devices become more prevalent in the workforce. People are using them to take notes in meetings and give presentations, yet it isn’t the users who are driving the business to support these devices; many organizations are simply allowing their employees to purchase their own devices and use them on corporate networks.
 
If employees can work remotely and be happier and more efficient with these devices, security admins can’t and shouldn’t stand in the way. We must focus on protecting these endpoints to ensure they don’t get infected with malware. We’ve also got to protect the data on these devices to ensure that corporate data isn’t misused or stolen when it is spread over so many different devices.
 
3. Play Offense, Not Defense
 
I’ve worked in IT security for a long time, and unfortunately along the way I’ve seen and heard a lot of things that I wish I hadn’t. Still, I can’t afford to have my head in the sand regarding security. I need to have my finger on the pulse of the organization and understand what’s happening in the business. It’s important that I also understand how data is being used and why. Once this happens, I am able to put controls in place and be in a better position to recognize when something is abnormal. With the prevalence of botnets and other malware, it is taking organizations 4-16 weeks before they even realize they have been compromised. Once this surfaces, they have to play catch-up in order to assess the damage, clean the infection, and plug the holes that were found. Breaches can be stopped before they start if the company and/or security admin are adamant about being on the offense.
 
These are my top three resolutions to focus on for 2012. What is your list? I invite you to share your security resolutions in the comment section below; I’d love to know what your organization is focused on!
 
Photo Credit: simplyla
 
 

The Shifting IT Workforce Paradigm Part II: Why Don’t YOU Know How “IT” Works In Your Organization?

By | Backup, Cloud Computing, How To, Networking, Security | No Comments

When I write about CIOs taking an increased business-oriented stance in their jobs, I sometimes forget that without a team of people who are both willing and able to do that, their ability to get out of the data center and into the board room is drastically hampered.

I work with a company from time to time that embodies for me the “nirvana state” of IT: they know how to increase revenue for the organization. They do this while still maintaining focus on IT’s other two jobs — avoiding risk and reducing cost. How do they accomplish this? They know how their business works, and they know how their business uses their applications. The guys in this IT shop can tell you precisely how many IOPS any type of end-customer business transaction will create. They know that if they can do something with their network, their code, and/or their gear that provides an additional I/O or CPU tick back to the applications, they can serve X number of transactions and that translates into Y dollars in revenue, and if they can do that without buying anything, it creates P profit.
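That line of reasoning is easy to sketch. The numbers below are entirely invented (I am not quoting this company’s actual figures), but the arithmetic is exactly what those conversations sound like:

```python
# Invented numbers, real reasoning: tie infrastructure headroom back to revenue.

iops_per_transaction = 12        # I/O a single customer transaction generates
revenue_per_transaction = 0.05   # dollars earned per transaction
freed_headroom_iops = 1_200      # extra IOPS gained by tuning, not by buying gear

extra_transactions_per_sec = freed_headroom_iops / iops_per_transaction
extra_revenue_per_hour = extra_transactions_per_sec * 3600 * revenue_per_transaction

print(f"{extra_transactions_per_sec:.0f} more transactions/sec "
      f"-> ~${extra_revenue_per_hour:,.0f} additional revenue per hour")
# Because nothing was purchased, that added revenue flows largely to profit.
```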

The guys I work with aren’t the CIO, although I do have relationships with the COO, VP of IT, etc. To clarify, these aren’t business analysts who crossed over into IT from the business providing this insight. These are the developers, infrastructure guys, security specialists, etc. At this point, I think if I asked the administrative assistant who greets me at the door every visit, she’d be able to tell me how her job translates into the business process and how it affects revenue.

Some might say that since this particular company is a custom development shop, that should be easy; after all, they have to know the business processes to write the code that drives them. Yes and no. I think that most people who make that statement haven’t closely examined the developers coming out of college these days. I have a number of nieces, nephews, and children of close friends who are all going into IT, and let me tell you, the stuff they’re teaching in the development classes these kids are taking isn’t about optimizing code for a business process, and it isn’t about the utility of IT.

It’s more about teaching a foreign language than teaching them the ‘why you do this’ of things. You’re not getting this kind of thought and thought-provoking behavior out of the current generation of graduates. It comes from caring. In my estimation it comes from those at the top giving people enough latitude to make intelligent decisions and demanding that they understand what the company is doing and, more importantly, where it wants to be.

They set goals, clarify those goals, and they make it clear that everyone in the organization can and does play a role in achieving those goals. These guys don’t go to work every day wondering why they are sitting in the cubicle, behind the desk, with eight different colored lists on their whiteboard around a couple of intricately complicated diagrams depicting a new code release. They aren’t cogs in a machine, and they’re made not to feel as though they are. If you want to be a cog, you don’t fit in this org, pretty simple.  That’s the impression I get of them, anyway.

The other important piece of this is that they don’t trust their vendors. That’s probably the wrong way to say it. It’s more about questioning everything from their partners, taking nothing for granted, and demanding that their vendors explain how everything works so they understand how they plug into it and then take advantage of it. They don’t buy technology for the sake of buying technology. If older gear works, they keep the older gear, but they understand the ceiling of that gear, and before they hit it, they acquire new. They don’t always buy the cheapest, but they buy the gear that will drive greater profitability for the business.

That’s how IT should be buying. Not a cost matrix of four different vendors who are all fighting to be the apple the others are compared to. Rather – which solution will help me be more profitable as a business because I can drive more customer transactions through the system? Of course, 99% of organizations I deal with couldn’t tell you what the cost of a business transaction is. Probably 90% of them couldn’t tell you what the business transaction of their organization looks like.

These guys aren’t perfect, they have holes. They are probably staffed too thin to reach peak efficiency and they could take advantage of some newer technologies to be more effective. They could probably use a little more process in a few areas. But at the end of the day, they get it. They get that IT matters, they get that information is the linchpin to their business, and they get that if the people who work in the organization care, the organization is better. They understand that their business is unique and they have a limited ability to stay ahead of the much larger companies in their field; thus they must innovate, but never stray too far from their foundation or the core business will suffer.

It’s refreshing to work with a company like this. I wish there were more stories like this organization and that the trade rags would highlight them more prominently. They deserve a lot of credit for how they operate and what they look to IT to do for them.

Even though I can’t name them here I’ll just say good job guys, keep it up, and thanks for working with us.

Photo Credit: comedy_nose

 

Following “The Year of the Breach” IT Security Spending Is On The Rise

By | Backup, Data Loss Prevention, Disaster Recovery, RSA, Security, Virtualization | No Comments

In IT circles, the year 2011 is now known as “The Year of the Breach”. Major companies such as RSA, Sony, Epsilon, PBS, and Citigroup experienced serious high-profile attacks, which raises the question: if major players such as these huge multi-million dollar companies are being breached, what does that mean for my company? How can I take adequate precautions to ensure that I’m protecting my organization’s data?

If you’ve asked yourself these questions, you’re in good company. A recent study released by TheInfoPro states that:
37% of information security professionals are planning to increase their security spending in 2012.
In light of the recent security breaches, as well as the increased prevalence of mobile devices within the workplace, IT security is currently top of mind for many organizations. In fact, at most of the companies IDS is working with, I’m also seeing executives take more of an interest in IT security. CEOs and CIOs are gaining a better understanding of technology and what is necessary to improve the company’s security position in the future. This is a huge win for security practitioners and administrators because they are now able to get the top-level buy-in needed to make important investments in infrastructure. IT security is fast becoming part of the conversation when making business decisions.
 
I expect IT infrastructure to continue to change rapidly as virtualization continues to grow and cloud-based infrastructures become more mature. We’re also dealing with an increasingly mobile workforce where employees are using their own laptops, smartphones and tablets instead of those issued by the company. Protection of these assets becomes even more important as compliance regulations become increasingly strict and true enforcement begins.
 
Some of the technologies that grew in 2011, and which I foresee growing further in 2012, include Data Loss Prevention, Application-aware Firewalls, and Enterprise Governance, Risk and Compliance. Each of these technologies focuses on protecting sensitive information and ensuring that authorized individuals are using that information responsibly. Moving into 2012, my security crystal ball tells me that everyone, from the top level down, will increase not only their security spend but, most importantly, their awareness of IT security and just how much their organization’s data is worth protecting.
 
Photo Credit: Don Hankins
 

Internet Running Out of IP Addresses?! Fear Not, IPv6 Here to Save the Day

By | Backup, How To, Networking | No Comments

As everyone may (or may not) be aware, we are running out of IP version 4 addresses. Okay, not really, but they have almost all been given out to service providers to pass on to customers, and at that point they will eventually run out. Fear not. This doesn’t mean that the internet will come to a screeching halt. It only means that it will be time to move on to the next iteration of networking, called IP version 6 (IPv6 for short). Most of the rest of the world is already running it to a high degree.

With this post, I’m going to take some time to lift the veil off of this. The reason is that every time I mention it to anyone, be it a customer, old coworker, or longtime networker, it draws a sense of fear. Don’t be afraid of IPv6, people! It’s not as scary as it seems.

Let’s start with a quick comparison. Currently, there are approximately 4.3 billion IPv4 addresses using the current 32 bit scheme. That’s less than 1 for every person in the world! Think about how many you are using right now.  Here’s me:

1.  Cell phone

2. Air card

3. Home network

4. My work computer

5. TV

We’ve gotten around the limitation by using something called Port Address Translation (PAT). PAT should really be called “PATCH,” because we are patching the IPv4 network due to a gross underestimate of the growth of the internet. PAT normally occurs on a firewall. We can use one public IP address to represent outgoing/incoming traffic to our network. That is why we have RFC 1918 addresses (10/8, 192.168…and so on). Those addresses needed to be reserved so that they could hide behind a public IP address, and therefore every company could have as many IP addresses as it needed. Because of the reserved address space, the available public IP addresses are actually closer to 3.2 billion. That’s less than 1 for every two people!

Theoretically, a single PAT IP could represent over 65,000 clients (you may see flames begin to shoot out of your firewall). So, what are the drawbacks? For one, it makes troubleshooting connection issues more difficult. Also, setting firewall rules becomes more difficult and can result in connectivity issues. Plus, the idea of end-to-end connectivity is thrown out the door, since it truly no longer exists at that point. Lastly, as translations occur, you are placing higher and higher loads on the firewall, which could otherwise be spending those cycles on things such as improving latency and throughput. PAT’s time is through! Thanks, but good riddance!

IPv6 uses 128-bit addressing. That’s about 340,000,000,000,000,000,000,000,000,000,000,000,000 addresses in total, enough for every person on earth to have their own network of 18,000,000,000,000,000,000 addresses. For a comparison in binary:

IPv4: 10101010101010101010101010101010 (32 bits)

IPv6: 1010101010101010101010101010101010101010101010101010101010101010
      1010101010101010101010101010101010101010101010101010101010101010 (128 bits)

Luckily, IPv6 addressing is represented in hex. Though the binary number above looks painful and overwhelming, a single IPv6 address on your network can be as simple as this:

2002::1/64

That’s not so bad, is it? In a follow-up post, I will demystify the IPv6 addressing scheme.
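If you want to see the comparison in code, Python’s standard ipaddress module makes the size difference, the hex shorthand, and the /64 math concrete:

```python
# Compare IPv4 and IPv6 address space, and expand the shorthand hex notation.
import ipaddress

ipv4_total = 2 ** 32
ipv6_total = 2 ** 128
print(f"IPv4 addresses: {ipv4_total:,}")   # ~4.3 billion
print(f"IPv6 addresses: {ipv6_total:,}")   # ~3.4 x 10^38

# A compressed IPv6 address and its full written-out form.
addr = ipaddress.ip_address("2002::1")
print(addr.exploded)   # 2002:0000:0000:0000:0000:0000:0000:0001

# A single /64 network holds 2^64 addresses all by itself.
net = ipaddress.ip_network("2002::/64")
print(f"Addresses in one /64: {net.num_addresses:,}")  # 18,446,744,073,709,551,616
```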

For up-to-date IPv6 statistics and IPv4 exhaustion dates around the world, look here: http://www.apnic.net/community/ipv6-program

Photo credit: carlospons via Flickr

What Happens When You Poke A Large Bear (NetApp SnapMirror) And An Aggressive Wolf (EMC RecoverPoint)?

By | Backup, Clariion, Data Loss Prevention, Deduplication, Disaster Recovery, EMC, NetApp, Replication, Security, Storage | No Comments

This month I will take an objective look at two competitive data replication technologies – NetApp SnapMirror and EMC RecoverPoint. My intent is not to create a technology war, but I do realize that I am poking a rather large bear and an aggressive wolf with a sharp stick.

A quick review of both technologies:

SnapMirror

  • NetApp’s controller based replication technology.
  • Leverages the snapshot technology that is fundamentally part of the WAFL file system.
  • Establishes a baseline image, copies it to a remote (or partner local) filer and then updates it incrementally in a semi-synchronous or asynchronous (scheduled) fashion.

RecoverPoint

  • EMC’s heterogeneous fabric layer journaled replication technology.
  • Leverages a splitter driver at the array controller, fabric switch, and/or host layer to split writes from a LUN or group of LUNs to a replication appliance cluster.
  • The split writes are written to a journal and then applied to the target volume(s) while preserving write order fidelity.

SnapMirror consistency is based on the volume or qtree being replicated. If the volume contains multiple qtrees or LUNs, those will be replicated in a consistent fashion. To get multiple volumes replicated in a consistent fashion, you need to quiesce the applications or hosts accessing each of the volumes, take snapshots of all the volumes, and then SnapMirror those snapshots. An effective way to automate this process is to leverage SnapManager.
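The ordering matters more than the tooling here. The sketch below is my own illustration of that sequence, not NetApp’s SnapManager code; quiesce, resume, snapshot, and mirror are placeholder functions standing in for whatever your tooling actually calls.

```python
# Order-of-operations sketch for consistent multi-volume replication.
# quiesce/resume/snapshot/mirror are placeholders for your real tooling
# (SnapManager, SnapDrive, or scripts), not actual NetApp APIs.

from contextlib import contextmanager

def quiesce(app): print(f"quiescing {app}")        # flush and pause writes
def resume(app): print(f"resuming {app}")
def snapshot(volume, name): print(f"snapshot {volume} -> {name}")
def mirror(volume, snap): print(f"snapmirror update {volume} (from {snap})")

@contextmanager
def application_quiesced(apps):
    for app in apps:
        quiesce(app)
    try:
        yield
    finally:
        for app in apps:           # resume as soon as the snapshots exist
            resume(app)

volumes = ["vol_db_data", "vol_db_logs"]   # hypothetical multi-volume application
with application_quiesced(["oracle_prod"]):
    for vol in volumes:
        snapshot(vol, "consistent_0900")   # all snaps taken while quiesced
# Replication can run after the app resumes; the snapshots stay consistent.
for vol in volumes:
    mirror(vol, "consistent_0900")
```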

After the initial synchronization, SnapMirror targets are accessible as read-only. This provides an effective source volume for backups to disk (SnapVault) or tape. The targets are not read/write accessible, though, unless the SnapMirror relationship is broken or FlexClone is leveraged to make a read/write copy of the target. The granularity of replication and recovery is based either on a schedule (standard SnapMirror) or on semi-synchronous continual replication.

When failing over, the SnapMirror relationship is simply broken and the volume is brought online. This makes DR failover testing and even site-to-site migrations a fairly simple task. I’ve found that many people use this functionality as much for migration as data protection or Disaster Recovery. Failing back to a production site is simply a matter of off-lining the original source, reversing the replication, and then failing it back once complete.

In terms of interface, SnapMirror is traditionally managed through configuration files and the CLI. However, the latest version of OnCommand System Manager includes an intuitive, easy-to-use interface for setting up and managing SnapMirror connections and relationships.

RecoverPoint is like TIVO® for block storage. It continuously records incoming write changes to individual LUNs or groups of LUNs in a logical container aptly called a consistency group. The writes are tracked by a splitter driver that can exist on the source host, in the fabric switch or on a Clariion (VNX) or Symmetrix (VMAXe only today) array. The host splitter driver enables replication between non-EMC and EMC arrays (Check ESM for latest support notes).

The split write I/O with RecoverPoint is sent to a cluster of appliances that package, compress and de-duplicate the data, then send it over a WAN IP link or a local fibre channel link. The target RecoverPoint appliance then writes the data to the journal. The journaled writes are applied to the target volume as time and system resources permit, and they are retained as long as there is capacity in the journal volume, so the LUN(s) in the consistency group can be rewound to any retained point in time.
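A quick way to reason about how far back you can rewind is to size the journal against your sustained write rate. The sketch below is generic arithmetic with invented numbers, not EMC’s sizing tool, and the usable-capacity fraction is an assumption.

```python
# How far back can journal-based replication rewind? Roughly:
# protection window = usable journal capacity / sustained write rate.
# All inputs are hypothetical; use the array's real statistics for sizing.

def protection_window_hours(journal_gb: float,
                            write_mb_per_sec: float,
                            usable_fraction: float = 0.8) -> float:
    """Hours of write history a journal of this size can retain."""
    usable_mb = journal_gb * 1024 * usable_fraction
    return usable_mb / (write_mb_per_sec * 3600)

if __name__ == "__main__":
    hours = protection_window_hours(journal_gb=500, write_mb_per_sec=20)
    print(f"~{hours:.1f} hours of rewindable history")   # roughly 5.7 hours
```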

In addition to remote replication, RecoverPoint can also replicate to local storage. This option is available as a standalone feature or in conjunction with remote replication.

RecoverPoint has a standalone Java application that can be used to manage all of the configuration and operational features. There is also integration for management of consistency groups by Microsoft Cluster Services and VMware Site Recovery Manager. For application-consistent “snapshots” (RecoverPoint calls them “bookmarks”), EMC Replication Manager or the KVSS command-line utilities can be leveraged. Recently a “light” version of the management tool has been integrated into the Clariion/VNX Unisphere management suite.

So, sharpening up the stick … NetApp SnapMirror is a simple to use tool that leverages the strengths of the WAFL architecture to replicate NetApp volumes (file systems) and update them either continuously or on a scheduled basis using the built-in snapshot technology. Recent enhancements to the System Manager have made it much simpler to use, but it is limited to NetApp controllers. It can replicate SAN volumes (iSCSI or FC LUNs) in NetApp environments – as they are essentially single files within a Volume or qtree.

RecoverPoint is a block-based SAN replication tool that splits writes and can recover to any point in time that exists in the journal volume. It is not built into the array; it is a separate appliance cluster that sits in the fabric and leverages array-, fabric- or host-based splitters. I would make the case that RecoverPoint is a much more sophisticated block-based replication tool that provides a finer level of recoverable granularity, at the expense of being more complicated.

 Photo Credit: madcowk
