Disaster Recovery

Is Your Data Protection Risky or Right-on?

A Three-step Assessment Guide

Data is your company’s most valuable asset. Without it, you don’t have a company at all. Even so, many enterprises do not dedicate the resources needed to ensure their data protection strategy and solutions are covering them effectively. More often than not, I see considerable lag time between when enterprises invest in new technology and when they invest in an appropriate solution to protect it. This gap is a perilous window during which data is ripe for theft, corruption and loss.

DaaS (Desktop as a Service)

The DaaS Revolution: It’s Time

When Terminal Services was first released, it revolutionized the way users accessed their applications and data. Employees could tap into their digital resources from essentially anywhere, while businesses could be certain that all data stayed in the data center and not on end-user devices. But no revolution is perfect, and the challenge with Terminal Services (renamed Remote Desktop Session Host, or RDSH, in recent years) was that it did not give users the customization and experience of their local devices.

The Case for Disaster Recovery Validation

Disaster Recovery Planning (DRP) has received much attention in the wake of the natural and man-made disasters of recent years. Yet executives continue to doubt the ability of IT to restore business IT infrastructure after a serious disaster, and that is before counting the increasing number of security breaches worldwide. By many reports, the confidence level in IT recovery processes is less than 30%, calling into question the vast sums invested in recovery practices and recovery products. Clearly, backup vendors are busy; see the compiled list of backup products and services at the end of this article (errors and omissions regretted).

IDS Cloud Update: Exploring Zerto Technologies

At IDS we are continuously evaluating the effectiveness of new products and partners to protect the integrity of our IDS Cloud Services. We built the IDS Cloud to deliver public, private and hybrid Cloud Solutions that facilitate increased efficiency within IT operations. As we evaluate the continually changing technology landscape, we always have our customers in mind. In this edition of the Monthly IDS Cloud Update, we’d like to highlight our experiences with Zerto, a partner we utilize to deliver a flexible and efficient Cloud Disaster Recovery Service offering to our customers.

Zerto uses an innovative Virtual Replication technology specifically built for virtual environments, which delivers disaster recovery functionality with industry-leading automation for testing and failover.

Zerto makes Cloud DR Services efficient and easy to use for customers, a value that shouldn’t be overlooked in the IT industry.

While there are many benefits to Zerto’s technology offerings, today we’d like to break down exactly why we chose Zerto to power our IDS Cloud DR services.

Benefits of IDS Cloud DR Services Powered by Zerto

  • Easy setup. The Zerto Cloud DR service installs remotely within hours, with no complicated services required.
  • Customer control. By using Zerto to power the IDS Cloud DR Services, customers have the flexibility to choose which applications to protect, regardless of the storage they live on.
  • Control failover. Zerto enables automated data recovery, failover and failback, and lets you select any VM in VMware vCenter. No agent is required, and the process is automated through a vCenter plug-in.
  • Simplified conversions from Hyper-V to VMware. Zerto Virtual Replication is the first technology with the capacity to automatically convert Hyper-V virtual machines to VMware for seamless migrations between hypervisors.
  • Secure Multi-tenancy. Zerto’s secure multi-tenancy architecture delivers a secure platform for replication to the Cloud, while providing the security required for companies with strict compliance requirements.
  • Flexible control of replication schedule. Zerto compresses changes in the range of 50%+ in order to maintain a consistently low Recovery Point Objective (RPO), and allows a bandwidth threshold to be assigned for replication so as not to impact other services utilizing WAN links.
  • Storage array agnostic. Zerto has the capability to replicate from any storage to any other storage, allowing customers to completely migrate data from one array, vendor or site to another efficiently.
  • Insightful reporting for customers. Zerto’s dashboard gives customers easy access to SLA information, providing great insight into their Disaster Recovery environment.

Zerto powers a comprehensive IDS Cloud DR Service that eradicates concerns about performance, availability and security while facilitating savings on resource costs.

Stay tuned for more information about the IDS Cloud by following the Monthly IDS Cloud Update.

Sneakernet vs. WAN: When Moving Data With Your Feet Beats Using The Network

Andrew S. Tanenbaum was quoted in 1981 as saying “Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.”

The story behind the quote, as recounted on Wikipedia, traces to NASA and its Deep Space Network tracking station at Goldstone, CA, connected to the Jet Propulsion Laboratory about 180 miles away. In an incident as common today as it was 30 years ago, a backhoe took out the 2400 bps circuit between the two locations, and the estimated repair time was about one full day. So they loaded a car with 9-track magnetic tapes and drove it 3-4 hours from one location to the other, getting the data there six times faster than over the wire.

That got me to thinking about IT and business projects that require pre-staging data. Normally, we IT folks get wind of a project weeks or months in advance. With such ample notice, how much data can we pre-stage in that amount of time?

With a simple 100 Mbit connection between locations, and using a conservative compression ratio, we can move nearly 1TB of data in a day. That is plenty of capacity for source installation files, ISOs and even large databases. Remembering that our most precious resource is time, anything a script or computer can do instead of us doing it manually is worth careful consideration.

Below is a chart listing out common bandwidth options and the time to complete a data transfer.

[Chart 1: transfer times for common bandwidth options]
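The chart’s figures follow from simple arithmetic. A short Python sketch reproduces them; the link speeds listed are illustrative, and throughput is treated as ideal (no protocol overhead or compression):

```python
# Time to transfer a payload over common link speeds, assuming ideal
# sustained throughput (no protocol overhead, no compression).

LINKS_MBPS = {
    "T1 (1.5 Mbit)": 1.5,
    "10 Mbit": 10,
    "100 Mbit": 100,
    "1 Gbit": 1_000,
    "10 Gbit": 10_000,
}

def transfer_hours(payload_tb: float, link_mbps: float) -> float:
    """Hours to move payload_tb terabytes at link_mbps megabits per second."""
    payload_megabits = payload_tb * 1_000_000 * 8  # 1 TB = 1e6 MB = 8e6 megabits
    return payload_megabits / link_mbps / 3600

for name, mbps in LINKS_MBPS.items():
    print(f"{name:>14}: {transfer_hours(1.0, mbps):9.2f} hours per TB")
```

At 100 Mbit, 1TB takes roughly 22 hours raw, which is why a day per terabyte (with some compression) is a reasonable planning figure.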

The above example is not as much about data center RPOs and RTOs, as it is about just moving data from one location to another. For DR objectives, we need to size our circuit so that we never fall below the minimums during critical times.

For example, if we have two data center locations with a circuit in between, and our 100TB of data has a daily change rate of 3%, we will still need to find the peak data change timeframe before we can size the circuit properly.

[Chart 2: circuit sizing against the daily change rate]

If 50% of the data change rate occurs from 9am to 3pm, then we need a circuit that can sustain 250GB per hour. A dedicated gigabit circuit can handle this traffic, but only if it is a low-latency connection (the locations are relatively close to one another). If there is latency, we will almost certainly need a WAN optimization product in between. But in the event of a full re-sync of the data, it would take 9-10 days to move everything over the wire, plus the daily change rate. So unless we have RPOs and RTOs measured in weeks, or weeks to ramp up to a DR project, we will have a tough time during a full re-sync and would not be able to rely on DR during that window.
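Those figures can be checked with a few lines of arithmetic; the dataset size, change rate, window and link speed below are the example’s own assumptions:

```python
# Sizing check for the example above: a 100 TB dataset with a 3% daily
# change rate, half of that churn in a six-hour window, over 1 Gbit/s.

DATASET_TB = 100
DAILY_CHANGE = 0.03       # 3% of the dataset changes per day
PEAK_FRACTION = 0.5       # half of the churn falls in the peak window
PEAK_WINDOW_H = 6         # 9am to 3pm
LINK_GBPS = 1.0

daily_change_tb = DATASET_TB * DAILY_CHANGE                                  # 3 TB/day
peak_rate_gb_per_h = daily_change_tb * 1000 * PEAK_FRACTION / PEAK_WINDOW_H  # 250 GB/h
link_gb_per_h = LINK_GBPS / 8 * 3600                                         # 450 GB/h sustained
full_resync_days = DATASET_TB * 1000 / link_gb_per_h / 24                    # ~9.3 days

print(f"peak replication rate needed: {peak_rate_gb_per_h:.0f} GB/h")
print(f"gigabit circuit sustains:     {link_gb_per_h:.0f} GB/h")
print(f"full re-sync over the wire:   {full_resync_days:.1f} days")
```

The circuit has headroom for the steady-state churn, but the 9-10 day re-sync is what pushes the decision toward sneakernet.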

So, that might be a case where it makes sense to sneakernet the data from one location to the other.


The Many Faces of Archiving (And Why It Isn’t Backup)

If archiving is defined as intelligent data management, then neither backup technologies, nor Hierarchical Storage Management (HSM) techniques, nor Storage Resource Management (SRM) tools qualify; yet all three continue to be pressed into service as archiving substitutes. Even “Information Lifecycle Management,” which would benefit from archiving, is now equated with it. The result is a proliferation of archiving products that serve different purposes for different organizations.


IT organizations have long valued the notion of preserving copies of data in case “work” got lost. In fact, with every data disaster, the role of backup operations has strengthened; no company can do without a strategy in place. Since 1951, when Mauchly and Eckert ushered in the era of digital computing with the construction of UNIVAC, the industry has seen all kinds of media on which data could be stored for later recall: punch cards, magnetic tape, floppy disks, hard drives, CD-R/RW, flash drives, DVD, Blu-ray and HD DVD, to name a few. The varying formats and delivery methods have helped create generations of vendors with competing technologies.

Backups had come of age … but also became increasingly costly and hard to manage with data complexity, growth and retention.

Backups had come of age, cloaked and dressed with the respectable name “data protection,” the magic wand that was insurance against “data loss.” But they also became increasingly costly and hard to manage with data complexity, growth and retention. Thus came about the concept of “archiving,” defined simply as “long-term data.” That, coupled with the equally smart idea of moving data to less expensive storage tiers, helped IT organizations reduce costs. The HSM technique dovetails with tiered storage management, as it is really a method for moving data that is not changing or not being accessed frequently. HSM was first implemented by IBM and on DEC VAX/VMS systems. In practice, HSM is typically performed by dedicated software, such as IBM Tivoli Storage Manager, Oracle SAM-QFS, SGI DMF, Quantum StorNext or EMC Legato OTG DiskXtender.

On the other hand, SRM tools evolved as quota management tools for companies trying to deal with hard-to-control data growth, and now include SAN management functions. Many of the HSM players sell tools in this space as well: IBM Tivoli Storage Productivity Center, Quantum Vision, EMC Storage Resource Management Suite, HP Storage Essentials, HDS Storage Services Manager (Aptare) and NetApp SANscreen (Onaro). Other SRM products include Quest Storage Horizon (Monosphere), SolarWinds Storage Profiler (Tek-Tools) and CA Storage Resource Manager. Such tools can provide analysis, create reports and target inefficiencies in the system, creating a “containment” approach to archiving.

Almost as old as the HSM technique is the concept of Information Lifecycle Management (ILM). ILM recognizes archiving as an important function distinct from backup. In 2004, SNIA gave ILM a broader definition by aligning it with business processes and value, while associating it with five functional phases: Creation and Receipt; Distribution; Use; Maintenance; Disposition. Storage and backup vendors embraced the ILM buzzword and re-packaged their products as ILM solutions, cleverly embedding HSM tools in “policy engines.” With these varied implementations of “archiving tools,” businesses have realized very different levels of satisfaction.


Kelly J. Lipp, who today evaluates products from the Active Archive Alliance members, wrote (in 1999) the paper entitled “Why archive is archive, backup is backup and backup ain’t archive.” Kelly wrote this simple definition: “Backup is short term and archive is long term.” He then ended the paper with this profound statement: “We can’t possibly archive all of the data, and we don’t need to. Use your unique business requirements and the proper tools to solve your backup and archive issues.”

Backup is short term and archive is long term.
— Kelly Lipp

However, the Active Archive Alliance promotes a “combined solution of open systems applications, disk, and tape hardware that gives users an effortless means to store and manage ALL their data.” ALL their data? Yes, say many of the pundits who rely on search engines to “mine” for hidden nuggets of information.

Exponential data growth is pushing all existing data management technologies to their limits, and newer locations for storing data (the latest being “storage clouds”) attempt to solve the management dilemma. But there is growing concern from the realization that, for the bulk of data that is “unstructured,” there is no orderly process for bringing back the information that is of value to the business.

Like the clutter stored in our basement, data that collects meaninglessly may become “data blot.”

Although businesses rely on IT to safeguard data, the value of the information contained therein is not always known to IT. Working with the available tools, IT chooses attributes such as age, size and location to measure worth, then executes “archiving” to move this data out so that computing systems may perform adequately. But like the clutter stored in our basement, data that collects meaninglessly may become “data blot.” Data survival then depends on proper information classification and organization.
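A policy of the kind just described, judging worth purely by the attributes IT can see, might look like this minimal sketch (the function and its thresholds are hypothetical, for illustration only):

```python
from datetime import datetime, timedelta

# A naive "archive by attributes" policy: age and size stand in for
# business value, which IT cannot see. Thresholds here are hypothetical.

def should_archive(last_accessed: datetime, size_mb: float,
                   max_age_days: int = 365, min_size_mb: float = 100) -> bool:
    """Flag data as an archive candidate purely by age and size."""
    age = datetime.now() - last_accessed
    return age > timedelta(days=max_age_days) and size_mb >= min_size_mb

# An old, large file gets moved out; a fresh one stays, regardless of
# what the information is actually worth to the business.
print(should_archive(datetime.now() - timedelta(days=400), 500))  # True
print(should_archive(datetime.now(), 500))                        # False
```

The point of the sketch is the limitation: nothing in it captures meaning, which is exactly why classification and organization matter.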

Traditionally, data has seen formal organization in the form of databases: all variations of SQL, email and document management included. With the advent of Big Data and ecosystems such as Hadoop, large databases now leverage flat file systems that are better suited to mapping search algorithms. This may be considered yet another form of archiving, because the data stored there is “immutable” anyway. All of these databases (and the many related applications) tend to have more formal archiving processes, but little visibility into the underlying storage. Newer legal and security requirements tend to focus on such databases, leading to the rise of “archiving” for compliance.

That brings us back full circle. While security and legality loom large in today’s archiving world, one could argue that these tend to create “pseudo archives” that can be deleted once the stipulated time has passed. In contrast, a book or film on digital media adds to the important assets of a company, assets that become the basis for its valuation and for future ideas. If one were to create a literary masterpiece, the file security surrounding the digitized asset is less consequential than the fact that 100 years later those files would still be valuable to the organization that owns them.

Archiving … is the preservation of a business’s digital assets: information that is valuable irrespective of time and needed when needed.

The meaning of archiving becomes clearer when viewed as distinctly different from backup. It is widely accepted that the purpose of a backup is to restore lost data. Thus, backup is the preservation of “work in progress”: data that does not have to be understood, but resurrected as-is when needed. Archiving, on the other hand, is the preservation of a business’s digital assets: information that is valuable irrespective of time and needed when needed. The purpose of archiving is to hold assets in a meaningful way for later recall.

Backup is a simple IT process. Archiving is tied to business flow.

This suggests that archiving does not need “policy engines” and “security strongholds,” but rather information grouping, classification, search and association. Because these tend to be industry-specific, “knowledge engines” would be more appropriate for archiving tools. Increasingly, IT professional services are now working with businesses and vendors alike to bridge the gaps and bring about dramatic industry transformations through the implementation of intelligent archiving.


Backups have grown in importance since the days of early computing, and as technology has changed, so have the costs of preserving data on different storage media. Backup technologies have also become substitute tools for archives, simply by assigning long-term retention to the data.

With a plethora of tools and techniques developed to manage storage growth and contain storage costs (the HSM techniques and the SRM tools), archiving has been implemented in different organizations for different purposes and with different meanings.

In defining Information Lifecycle Management, SNIA has elevated the importance of archiving, and thereby encouraged vendors to re-package HSM tools in policy engines. On the other hand, databases for SQL and email—and even Big Data ecosystems—have implemented archiving without visibility into the underlying storage.

As archiving tools continue to evolve, archiving is now considered distinctly different from backup. While backup protects “work in progress,” archiving preserves valuable business information. Unlike backup, which needs “policy engines,” archiving requires “knowledge engines,” which may be industry-specific. IT professional services have stepped in to bridge the gaps and bring about transformations through the implementation of intelligent archiving.


1. “Why archive is archive, backup is backup and backup ain’t archive” by Kelly J. Lipp, 1999


Faster and Easier: Cloud-based Disaster Recovery Using Zerto

Is your Disaster Recovery/Business Continuity plan ready for the cloud? Remember the days when implementing DR/BC meant having identical storage infrastructure at the remote site? The capital costs were outrageous! Plus, the products could be complex and time-consuming to set up.

Virtualization has changed the way we view DR/BC. Today, it’s faster and easier than ever to set up. Zerto allows us to implement replication at the hypervisor layer, and it is purpose-built for virtual environments. The best part: it’s a software-only solution that is array-agnostic and enterprise-class. What does that mean? Gone are the days of needing identical storage infrastructure at the DR site. Instead, you replicate to your favorite storage, whatever you have, reducing hardware costs by leveraging existing or lower-cost storage at the replication site.

[Diagram: Zerto replication topology]

How does it work? You install the Zerto Virtual Manager on a Windows server at the primary and remote sites. Once installed, the rest of the configuration is completed through the Zerto tab in VMware vCenter. Simply select the virtual machines you want to protect, and that’s about it. Zerto supports fully automated failover and failback, plus the ability to test failover while still protecting the production environment. Customers are able to achieve RTOs of minutes and RPOs of seconds through continuous replication and journal-based, point-in-time recovery.

Not only does Zerto protect your data, it also provides complete application protection and recovery through virtual protection groups.

Application protection:

  • Fully supports VMware vMotion, Storage vMotion, DRS, and HA
  • Journal-based point-in-time protection
  • Group policy and configuration
  • VSS Support

Don’t have a replication site? No problem. You can easily replicate your VMs to a cloud provider and spin them up in the event of a disaster.

Photo credit: josephacote on Flickr

Protecting Exchange 2010 with EMC RecoverPoint and Replication Manager

Regular database backups of Microsoft Exchange environments are critical to maintaining the health and stability of the databases. Performing full backups of Exchange provides a database integrity checkpoint and commits transaction logs. There are many tools which can be leveraged to protect Microsoft Exchange environments, but one of the key challenges with traditional backups is the length of time that it takes to back up prior to committing the transaction logs.

Additionally, database integrity should always be checked prior to backing up, to ensure the data being backed up is valid. This extended window can often interfere with daily activities, so it usually must be scheduled around other maintenance activities, such as daily defragmentation. What if you could eliminate the backup window entirely?

EMC RecoverPoint, in conjunction with EMC Replication Manager, can create application-consistent replicas with next to zero impact, which can be used for staging to tape, direct recovery, or object-level recovery with Recovery Storage Groups or third-party applications. These replicas leverage Microsoft VSS technology to freeze the database, RecoverPoint bookmark technology to mark the image time in the journal volume, and then thaw the database, all in less than thirty seconds and often in less than five.
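The freeze, bookmark, thaw sequence can be sketched as follows. This is illustration only: `vss_freeze`, `rp_bookmark` and `vss_thaw` are hypothetical stand-ins for the VSS and RecoverPoint operations, not a real API.

```python
import time

# Illustrative sequence only: the three helpers below are hypothetical
# stand-ins for the VSS and RecoverPoint calls, not a real API.

events = []

def vss_freeze(db):
    events.append(("freeze", db))    # quiesce writes via VSS

def rp_bookmark(db):
    events.append(("bookmark", db))  # tag this instant in the RecoverPoint journal

def vss_thaw(db):
    events.append(("thaw", db))      # resume writes

def create_consistent_replica(db: str) -> float:
    """Freeze the database, bookmark the image time, thaw; return seconds frozen."""
    start = time.monotonic()
    vss_freeze(db)
    rp_bookmark(db)
    vss_thaw(db)
    return time.monotonic() - start

frozen_for = create_consistent_replica("MailboxDB01")  # hypothetical database name
print(f"frozen for {frozen_for:.3f}s, events: {events}")
```

The key property is the ordering: the database is only frozen for the instant it takes to place the bookmark, which is why the window stays in the seconds range.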

EMC Replication Manager is aware of all of the database server roles in the Microsoft Exchange 2010 Database Availability Group (DAG) infrastructure and can leverage any of the members (Primary, Local Replica, or Remote Replica) to be a replication source.

EMC Replication Manager automatically mounts the bookmarked replica images to a mount host running the Microsoft Exchange tools role and the EMC Replication Manager agent. The database and transaction logs are then verified using the eseutil utility provided with the Microsoft Exchange tools. This ensures that the replica is a valid, recoverable copy of the database. Validating the databases can take from a few minutes to several hours, depending on the number and size of the databases and transaction log files. The key is that the load from this process does not impact the production database servers. Once verification completes, EMC Replication Manager calls back to the production database to commit and delete the transaction logs.

Once the Microsoft Exchange database and transaction logs are validated, the files can be spun off to tape from the mount host, or, depending on the retention requirement, you could eliminate tape backups of the Microsoft Exchange environment entirely. Depending on the write load on the Microsoft Exchange server and the size of the RecoverPoint journal volumes, you can maintain days or even weeks of retention/recovery images in a fairly small footprint compared to disk- or tape-based backup.
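How much journal that retention takes can be estimated roughly: the journal must hold every write made during the desired rollback window. A sketch with illustrative numbers (the write rate and overhead factor are assumptions for the example, not RecoverPoint sizing guidance):

```python
# Rough journal sizing: capacity needed to keep `retention_hours` of
# point-in-time images at a given sustained write rate.

def journal_gb(write_mb_per_s: float, retention_hours: float,
               overhead: float = 1.2) -> float:
    """GB of journal needed to retain `retention_hours` of writes, with overhead."""
    return write_mb_per_s * 3600 * retention_hours / 1024 * overhead

# e.g. a server averaging 1 MB/s of writes, with 72 h (3 days) of rollback:
print(f"{journal_gb(1, 72):.0f} GB")  # prints "304 GB"
```

A few hundred gigabytes of journal buying days of recovery points is the “fairly small footprint” the paragraph above refers to, compared with keeping full copies on disk or tape.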

There are a number of recovery scenarios available from a solution based on RecoverPoint and Replication Manager. The images can be reverse-synchronized to the source, a fast delta-based copy, but one that is data-destructive. Alternatively, the database files can be copied from the mount host to a new drive and mounted as a Recovery Storage Group on the Microsoft Exchange server. The database and log files can also be opened on the mount host directly with tools such as Kroll OnTrack for mailbox- and message-level recovery.


The Future Of Cloud: Managing Your Data Without Managing Your Data

The catch phrase of the last few years has been “the cloud.” What really is the cloud? By the consumer’s definition, it’s what lets me buy a video on Amazon and magically have it available anywhere I go. The video lives up in the ambiguous cloud: I don’t know what the hardware or software is, or even whether the data is in the same country as me. I just know it’s there, and I sleep at night knowing my investment is protected (I buy a lot of movies). But there’s much more to it than that, and it’s time businesses began to leverage the power of the cloud.

How can the cloud be applied to business? In tough economic times the common saying is “do more with less.” Let’s face it: even in the best of times, no one is going to walk up to the IT Director or CIO and say, “Here you go, more money!” Instead, it is a constant battle of doing more with less, and in many instances we in the field are just trying to keep our heads above water. CEOs and department heads want all of their data protected, available and accessible at any time, usually on a budget that frankly cannot cover all of the expenses. To plan a typical disaster recovery effort, a number of factors have to be considered:

  1. Where will the datacenter be?
  2. How much will rack space, power, and cooling cost?
  3. How many and what products do we need to install?
  4. How will we manage it?
  5. How will we connect to it and maintain redundancy?
  6. Who will manage it?
  7. Do we need to hire extra staff to manage it?

That’s just a sample of the questions needed to even begin the project. Designing and implementing the solution will also take months, maybe a year, and will be very costly. This is where the cloud comes in. All of the resources you need are already available, protected and scalable. Need more data storage? No problem. Need more compute power? We have that ready too. All it takes is an email. Really, who wants to manage physical servers anyway? It’s time to start looking at storage, memory and compute as simply resources, and less like a capital investment.

Beyond this, what is to stop you from running your entire infrastructure in the cloud? Why not pay for your infrastructure the same way you pay the company phone bill? This is where managed cloud services come into play: rather than importing more costs into your datacenter, you export that work, for a fraction of the cost, to a managed services provider. IDS is ready, willing and able, just a click away.
