EMC Takes Control of VCE


EMC recently announced that they were buying out most of Cisco’s interest in VCE, with Cisco retaining only a 10 percent stake in the company. VCE stated that they would keep their mission intact and continue to create new solutions using their industry-leading Vblock Systems. EMC has also made headlines lately for being nominated as one of the “World’s Best Multinational Workplaces,” and for speculation that they may be planning a reorg, which may include the formation of a new cloud business unit.

What Does The EMC Transition Mean for VCE?

While there are always different rumblings of opinions throughout an industry, many analysts maintain that VCE’s transition toward becoming an EMC business is an entirely natural one, and will probably help skyrocket their growth. In the article “EMC Buys Cisco’s Stake in VCE, Eyeing Hybrid Cloud Potential,” analyst Zeus Kerravala of ZK Research explained that joint ventures are only meant to last for a certain period of time.

Kerravala said, “If VCE is going to earn billions more, they are obviously going to have to find a way of growing beyond organic growth. That will probably be through mergers and acquisitions or a change of channel strategy, and it’s going to require making faster decisions.” He went on to say that since there will now be streamlined decision making under EMC, he believes it’s a good move for VCE.

Our Take on the VCE Transition to EMC

With a big industry move like this one, we wanted to talk to IDS Chief Technology Officer, Justin Mescher, and get his take on the VCE transition. Mescher explained that the move might help solidify previous marketplace suspicions.

He said, “Ever since VMware acquired Nicira in 2012 and created their own software-defined networking stack, speculation has been swirling that EMC, VMware, and Cisco would start to grow further apart. While this move seems to confirm the rumors, I think it will be a positive move overall.”

Mescher went on to explain that VCE’s biggest value has been bringing fully validated and pre-integrated systems to customers to accelerate time to value, reduce risk and increase efficiency, and that mantra of the offerings shouldn’t change.

He explained that it will be interesting to see how the recent EMC restructuring to create a Cloud Management and Orchestration group will impact this acquisition. EMC has proclaimed that this new business unit will focus on helping customers work in both the private and public cloud independently of the technology running underneath it. This will include EMC’s “software-defined” portfolio as well as some of their new acquisitions targeted at cloud enablement and migration.

Concluding his thoughts, Mescher said, “Could EMC take the framework and concept that VCE made successful and start to loosen some of the vendor-specific requirements? While this would certainly not be typical of EMC, if they are serious about shifting from a hardware company to focusing on the software-defined Data Center, what more impactful place to start?”

About VCE

VCE was started in 2009 as a joint venture between three of the top IT industry companies (EMC, Cisco, and VMware) in an effort to provide customers integrated product solutions through a single entity. In 2010, VCE introduced their Vblock Systems, which provided a new approach to optimizing technology solutions for cloud computing. Since then, they have continued to grow their customer portfolio, improve their solutions, and lead the industry. See the complete VCE history.

VNXe 3200: The Future of the VNX?


I’ve been hearing from a lot of people that the VNX will eventually be similar to the VNXe. I didn’t believe EMC would do that until they came out with the VNXe 3200, but now it is looking like it is a possibility. I’ll need to provide a quick recap of the history of the VNXe and VNX to give you an understanding of why I believe the two are converging into a single platform.


VNX and VNXe History

For the last few years EMC’s marketing strategy has been selling the concept of a Unified VNX. The rest of us know better—the GUI is unified, but the array really isn’t. Prior to the VNX there were the NS120/480/960: CLARiiON and Celerra models that were “unified”; however, when they were first released, the GUI wasn’t even unified. Later, you could upgrade to a higher DART and FLARE code and you would get Unisphere, which then unified the GUIs (the hardware was still separate, though).

Instead of getting a unified array, you could also buy either a block-only or file-only VNX/CX. For a block-only array, Storage Processors serve data via iSCSI/FC/FCoE. On the file side, you have Data Movers that serve data via CIFS/NFS/iSCSI (VNX iSCSI via Data Movers requires an RPQ from EMC to support it, and is also hidden from the GUI).

Why is this history important? Because on all VNXe models prior to the VNXe 3200 release, iSCSI was done via the file/Celerra side. Now why is that important? Because it was, and is, terrible.

Breaking It Down

Here is a breakdown of some of the challenges with previous VNXe models prior to the new release:

  1. First of all, to create an iSCSI LUN on the file side, you would first need to create your RAID Groups and LUNs, then present the LUNs to the file side. Those LUNs would be marked as disk volumes on the file side and put into a file storage pool. After that, you would create a file system, which would stripe or concatenate volumes based on the file AVM (Automatic Volume Management) algorithm. Finally, you would create your iSCSI LUN from the file system space. Long story short: there are a lot of layers, and it’s not the best for performance.
  2. When replicating iSCSI LUNs via the file side, you would need an additional 150% of the LUN size free on the file system on each side, source and target. To put it in perspective, if you had a 100GB iSCSI LUN, you would need a 250GB file system size on each side—which creates a lot of overhead. (Much less overhead using thin provisioning, but that slows things down.)
  3. iSCSI LUNs are limited to 2TB in size on the file side.
  4. Your only option for replication is either host-based or Replicator V2: no RecoverPoint, MirrorView, SAN Copy, etc., as there is on the block side. (You can replicate your entire VNX file side with RecoverPoint, but that is a terrible configuration.)
  5. For those reasons and more, I have lacked confidence in the VNXe since the beginning and cringed when having to fix them, since it always seemed there was either a replication or network problem.
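The 150% overhead rule in point 2 works out like this (a trivial check using the example numbers above):

```python
# File-side iSCSI replication overhead (rule from the text: you need an
# additional 150% of the LUN size free in the file system on each side).
lun_gb = 100
file_system_gb = lun_gb + lun_gb * 1.5  # required on source AND target
print(file_system_gb)  # 250.0 GB per side for a 100GB LUN
```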

The Difference

So why is the VNXe 3200 different? Well, it is different enough that I think it should have been announced as the VNXe 2, or VNXe with MCx, or in some big way like the VNX 2/VNX with MCx was announced.

There are some major differences between the VNXe 3200 and previous models:

  1. Fibre Channel ports are now available
  2. Better use of EFDs
    • FAST Cache can be used
    • Tiered pools can be used
  3. iSCSI now appears to be block based

Note: My only evidence for 3 is that when you put an iSCSI IP address on an Ethernet adapter, you can no longer use LACP on that port. This makes sense, since there is no LACP for iSCSI on the block side, only on the file side. Also, FC ports are now available, which means the block side of the VNXe 3200 is clearly accessible, so block iSCSI should be possible too.


So if I’m right about the iSCSI, that means a few things:

  1. iSCSI replication between pre-VNXe 3200 and VNXe 3200 models won’t be compatible (I asked some EMC product managers and was given a response that they can’t comment).
  2. iSCSI LUNs should be replicable between a VNX and a VNXe (depending on whether MirrorView makes it into the VNXe; at the very least, you should be able to run a SAN Copy pull session to migrate a LUN off a VNXe onto a VNX).
  3. iSCSI LUNs might be usable with RecoverPoint (depending on whether the VNXe gets an RP splitter; host-based splitting might be allowed with a VNXe and iSCSI if no splitter is embedded).


It looks like EMC is taking the VNXe in the right direction, but there are still some unknowns. For now, it seems like a decent unified storage array if you need shared storage and either don’t need to replicate your data or are using host-based replication. I’m hoping that if EMC chooses to do this same hardware unification with the VNX line, they get everything figured out with the VNXe first; it appears they’re taking the steps to do so.

Why VDI Is So Hard and What To Do About It


Rapid consumerization, coupled with the availability of powerful, always-connected mobile devices and the capability for anytime, anywhere access to applications and data, is fundamentally transforming the relationship between IT and the end-user community for most of our customers.

IT departments are now faced with the choice to manage an incredible diversity of new devices and access channels, as well as the traditional desktops in the old way, or get out of the device management business and instead deliver IT services to the end-user in a way that aligns with changing expectations. Increasingly, our customers are turning to server-hosted virtual desktop solutions—which provide secure desktop environments accessible from nearly any device—to help simplify the problem. This strategy, coupled with Mobile Device Management tools, helps to enable BYOD and BYOC initiatives, allowing IT to provide a standardized corporate desktop to nearly any device while maintaining control.

However, virtual desktop infrastructure (VDI) projects are not without risk. This seems to be well understood, because it’s been the “year of the virtual desktop” for about four years now (actually, I’ve lost count). But we’ve seen and heard of too many VDI projects that have failed due to an imperfect understanding of the related design considerations or a lack of data-driven, fact-based decision making.

There is really only one reason VDI projects fail: The provided solution fails to meet or exceed end-user expectations. Everything else can be rationalized – for example as an operational expense reduction, capital expense avoidance, or security improvement. But a CIO who fails to meet end user expectations will either have poor adoption, decreased productivity, or an outright mutiny on his/her hands.

Meeting end-user expectations is intimately related to storage performance. That is to say, end user expectations have already been set by the performance of devices they have access to today. That may be a corporate desktop with a relatively slow SATA hard drive or a MacBook Air with an SSD drive. Both deliver dedicated I/O and consistent application latency. Furthermore, the desktop OS is written with a couple of salient underlying assumptions – that the OS doesn’t have to be a “nice neighbor” in terms of access to CPU, Memory, or Disk, and that the foreground processes should get access to any resources available.

Contrast that with what we’re trying to do in a VDI environment. The goal is to cram as many of these resource-hungry little buggers on a server as you can in order to keep your cost per desktop lower than buying and operating new physical desktops.

Now, in the “traditional” VDI architecture, the physical host must access a shared pool of disk across a storage area network, which adds latency. Furthermore, those VDI sessions are little resource piranhas (credit: Atlantis Computing for the piranha metaphor). VDI workloads will chew up as many IOPS as you throw at them, with no regard for their neighbors. This is also why many of our customers choose to purchase a separate array for VDI in order to segregate the workload. This way, VDI workloads don’t impact the performance of critical server workloads!

But the real trouble is that most VDI environments we’ve evaluated average a whopping 80% random write at an average block size of 4-8K.

So why is this important? In order to meet end-user expectations, we must provide sufficient I/O bandwidth at sufficiently low latency. But shared storage arrays should not be sized based on front-end IOPS requirements alone. They must be sized based on backend IOPS, and it’s the write portion of the workload that suffers a penalty.

If you’re not a storage administrator, that’s ok. I’ll explain. Due to the way that traditional RAID works, a block of data can be read from any disk on which it resides, whereas for a write to happen, the block of data must be written to one or more disks in order to ensure protection of the data. RAID1, or disk mirroring, suffers a write penalty factor of 2x because the writes have to happen on two disks. RAID5 suffers a write penalty of 4x because for each change to the disk, we must read the data, read the parity information, then write the data and write the parity to complete one operation.
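The read/write split and RAID write-penalty math above can be sketched as a small helper. This is a hypothetical sizing function for illustration, not any vendor's tool:

```python
# Back-end IOPS from a front-end workload, given a RAID write penalty.
# Reads carry no penalty; each write costs extra back-end I/O.
RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def backend_iops(front_end_iops, read_fraction, raid="raid5"):
    reads = front_end_iops * read_fraction
    writes = front_end_iops * (1 - read_fraction)
    return reads + writes * RAID_WRITE_PENALTY[raid]

# 400 desktops x 10 IOPS at 20% read, on RAID5:
print(backend_iops(4000, 0.20))  # 800 + 3200*4 = 13600.0
```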

Well, mathematically this all adds up. Let’s say we have a 400 desktop environment, with a relatively low 10 IOPS per desktop at 20% read. So the front-end IOPS at steady state would be:

10 IOPS per desktop x 400 Desktops = 4000 IOPS

If I were using 10k SAS drives at an estimated 125 IOPS per drive, I could get that done with an array of 32 SAS drives. Right?

Wrong. Because the workload is heavy write, the backend IOPS calculation for a RAID5 array would look like this:

(2 read IOPS x 400 desktops) + (8 write IOPS x 400 desktops x 4 RAID5 write penalty) = 800 + 12,800 = 13,600 IOPS

This is because 20% of the 10 IOPS are reads and 80% are writes. So the backend IOPS required here is 13,600. On those 125 IOPS drives, we’re now at around 110 drives (before hot spares) instead of 32.

But all of the above is still based on this rather silly concept that our users’ average IOPS is all we need to size for. Hopefully we’ve at least assessed the average IOPS per user rather than taking any of the numerous sizing assumptions in vendor whitepapers, e.g. Power Users all consume 12-18 IOPS “steady state”. (In fairness, most vendors will tell you that your mileage will vary.)

Most of our users are used to at least 75 IOPS (a single SATA drive) dedicated to their desktop workload. Our users essentially expect to have far more than 10 IOPS available to them should they need it, such as when they’re launching Outlook. If our goal is a user experience on par with physical, sizing to the averages is just not going to cut it. So if we use this simple sizing methodology, we need to include at least 30% headroom. So we’re up to 140 disks on our array for 400 users assuming traditional RAID5. This is far more than we would need based on raw capacity.
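Folding in that 30% headroom, the rough drive-count math looks like this (the per-drive IOPS and headroom figures are the illustrative numbers from the example above, not universal constants):

```python
import math

# Hypothetical drive-count estimate: back-end IOPS plus headroom,
# divided by the estimated IOPS a single spindle can deliver.
def drives_needed(backend_iops, iops_per_drive=125, headroom=0.30):
    return math.ceil(backend_iops * (1 + headroom) / iops_per_drive)

print(drives_needed(13600))  # 142 -- the same ballpark as the ~140 above
```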

The fact is that VDI workloads are very “peaky.” A single user may average 12-18 IOPS once all applications are open, but opening a single application can consume hundreds or even thousands of IOPS if it’s available. So what happens when a user comes in to the office, logs in, and starts any application that generates a significant write workload—at the same time everyone else is doing the same? There’s a storm of random reads and writes on your backend, your application latency increases as the storage tries to keep up, and bad things start to happen in the world of IT.

So What Do We Do About It?

I hope the preceding discussion gives the reader a sense of respect for the problem we’re trying to solve. Now, let’s get to some ways it might be solved cost-effectively.

There are really two ways to succeed here:

1)    Throw a lot of money at the storage problem, sacrifice a goat, and dance in a circle in the pale light of the next full moon [editor’s notes: a) IDS does not condone animal sacrifice and b) IDS recommends updating your resume and LinkedIn profile in this case];

2)    Assess, Design, and Deliver Results in a disciplined fashion.

Assess, Don’t Assume

The first step is to Assess. The good news is that we can understand all of the technical factors for VDI success as long as we pay attention to end user as well as administrator experience. And once we have all the data we need, VDI is mostly a math problem.

Making data-driven fact-based decisions is critical to success. Do not make assumptions if you can avoid doing so. Sizing guidelines outlined in whitepapers, even from the most reputable vendors, are still assumptions if you adopt them without data.

You should always perform an assessment of the current state environment. When we assess the current state from a storage perspective, we are generally looking for at least a few metrics, categorized by a user persona or use case.

  • I/O Requirements (I/O per Second or IOPS)
  • I/O Patterns (Block Size and Read-to-Write Ratio)
  • Throughput
  • Storage Latency
  • Capacity Requirements (GB)
  • Application Usage Profiles

Ideally, this assessment phase involves a large statistical set and runs over a complete business cycle (we recommend at least 30 days). This is important to develop meaningful average and peak numbers.

Design for Success

There’s much more to this than just storage choices and these steps will depend upon your choice of hypervisor and virtual desktop management software, but as I put our fearless VDI implementers up a pretty big tree earlier with the IOPS and latency discussion, let’s resolve some of that.

Given the metrics we’ve gathered above, we can begin to plan our storage environment. As I pointed out above, this is not as simple as multiplying the number of users by the average I/O. We also cannot size based only on averages; we need at least 30% headroom.

Of course, while we calculated the number of disks we’d need to service the backend IOPS requirements in RAID5 above, we’d look at improved storage capabilities and approaches to reduce the impact of this random write workload.

Solid State Disks

Obviously, solid state disks offer more than 10 times the IOPS per disk of spinning disks, with greatly reduced access times, since there are no moving parts. If we took the 400-desktop calculation above and used a 5,000 IOPS SSD as the basis for our array, we’d need very few drives to service the IOPS.

Promising. But there are both cost and reliability concerns here. The cost per GB on SSDs is much higher and write endurance on an SSD drive is finite. (There have been many discussions of MLC, eMLC, and SLC write endurance, so we won’t cover that here).

Auto-Tiering and Caching

Caching technologies can certainly provide many benefits, including reducing the number of spindles needed to service the IOPS requirements and latency reduction.

With read caching, certain “hot” blocks get loaded into an in-memory cache or, more recently, a flash-based tier. When the data is requested, instead of having to seek the data on spindles, which can incur tens of milliseconds of latency, the data is available in memory or on a faster tier of storage. As long as the cache is intelligent enough to cache the right blocks, there can be a large benefit for the read portion of the workload. Read caching is a no-brainer: most storage vendors have options here, and VMware offers a host-based read cache.

But VDI workloads are more write intensive. This is where write buffering comes in.

Most storage vendors have write buffers serviced by DRAM or NVRAM. Basically, the storage system acknowledges the write before the write is sent to disk. If the buffer fills up, though, latency increases as the cache attempts to flush data out to the relatively slow spinning disk.

Enter the current champion in this space, EMC’s FAST Cache, which alleviates some concerns around both read I/O and write I/O.  In this model Enterprise Flash is used to extend a DRAM Cache, so if the spindles are too busy to deal with all the I/O, the extended cache is used. Benefits to us: more content in the read cache and more writes in the buffer waiting to be coalesced and sent to disk. Of course, it’s rather more complex than that, but you get the idea.

EMC FAST Cache is ideal in applications with a lot of small-block random I/O – like VDI environments – and a high degree of access to the same data. Without FAST Cache, the benefit of the DRAM cache alone is about 20%, so 4 out of every 5 I/Os must be serviced by a slow spinning disk. With FAST Cache enabled, it’s possible to reduce the impact of read and write I/O by as much as 90%. That case assumes the FAST Cache is dedicated to VDI and all of the workloads are largely the same. Don’t assume this means you can leverage your existing mixed-workload array without significant planning.

Ok, so if we’re using an EMC VNX2 with FAST Cache and this is dedicated only to VDI, we hope to obtain a 90% reduction of back-end write IO. Call me conservative, but I think we’ll dial that back a bit for planning purposes and then test it during our pilot phase to see where we land. We calculated 12,800 in backend write IO earlier for 400 desktops. Let’s say we can halve that. We’re now at 7200 total IOPS for 400 VDI desktops. Not bad.
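The planning numbers in this section can be reproduced in a few lines (assumed inputs carried over from the 400-desktop example; as noted above, the actual cache benefit should be validated in a pilot):

```python
# 400 desktops x 10 IOPS at 20% read, RAID5 (4x write penalty), and a
# conservative 50% reduction in back-end write I/O from FAST Cache.
backend_read = 400 * 10 * 0.20       # 800 back-end read IOPS
backend_write = 400 * 10 * 0.80 * 4  # 12,800 back-end write IOPS
with_fast_cache = backend_read + backend_write * 0.5
print(with_fast_cache)  # 7200.0 total back-end IOPS
```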

Hybrid and All-Flash Arrays

IDS has been closely monitoring the hybrid-flash and all-flash array space and has selected solutions from established enterprise vendors like EMC and NetApp as well as best-of-breed newer players like Nimble Storage and Pure Storage.

The truly interesting designs recognize that SSDs should not be used as if they are traditional spinning disks. Instead these designs optimize the data layout for write. As such, even though they utilize RAID technology, they do not incur a meaningful write penalty, meaning that it’s generally pretty simple to size the array based on front-end IOPS. This also reduces some of the concern about write endurance on the SSDs. When combined with techniques which both coalesce writes and compress and de-duplicate data in-line, these options can be attractive on a cost-per-workload basis even though the cost of Flash remains high.

Using a dedicated hybrid or flash-based array would get us to something like a single shelf needed for 400 users. At this point, we’re more sizing for capacity than I/O and latency, a situation that’s more familiar to most datacenter virtualization specialists. But we’re still talking about an approach with a dedicated array at scale.

Host-Based Approaches

A variety of other approaches to solving this problem have sprung up, including host-based SSDs that offload portions of the I/O, expensive flash memory cards providing hundreds of thousands of IOPS per card, and software approaches such as Atlantis Computing’s ILIO virtual appliances, which leverage relatively inexpensive system RAM as a low-latency, de-duplicated data store and functionally reduce VDI’s impact on existing storage. (Note: IDS is currently testing the Atlantis Computing solution in our Integration Lab.)

Design Conclusion

Using a combination of technology approaches, it is now possible to provide VDI user experience that exceeds current user expectations at a cost per workload less than the acquisition cost of a standard laptop. The server-hosted VDI approach has many benefits in terms of operational expense reduction as well as data security.

Delivering Results

In this article, we’ve covered one design dimension that influences the success of VDI projects, but there’s much more to this than IOPS and latency. A disciplined engineering and delivery methodology is the only way to deliver results reliably for your VDI project. At minimum, IDS recommends testing your VDI environment at scale using tools such as LoginVSI or View Planner as well as piloting your solution with end user champions.

Whether you’re just getting started with your VDI initiative, or you’ve tried and failed before, IDS can help you achieve the outcomes you want to see. Using our vendor-agnostic approach and disciplined methodology, we will help you reduce cost, avoid business risk, and achieve results.

We look forward to helping you.


Photo credit: linademartinez via Flickr

RecoverPoint 4.0 Review: Huge Improvements On An Already Great Product



I’m sure that most people reading this already know at least a little bit about how RecoverPoint works—and probably even know about some of the new features in 4.0. I’ll do a short review of how it works, and then dive into a review of the new features.

RecoverPoint Replication: A Refresher

For those that are familiar with different replication technologies, but not RecoverPoint: let me just say that it is, in my humble opinion, the best replication product on the market right now for block data. This doesn’t just go for EMC arrays; RecoverPoint can be used with any supported array (HDS, NetApp, IBM, 3PAR, etc.) behind an EMC VPLEX.

Prior to RecoverPoint, you would need to use either MirrorView or SANCopy to replicate data between arrays for an EMC CLARiiON, and SRDF for a Symmetrix/VMAX. These technologies are comparable to other vendors’ current replication offerings. Typically, replication technologies can be synchronous or asynchronous, and the same goes for RecoverPoint. The big difference is the rollback capability: other technologies require clones and/or snapshots to recover from more than one point in time.

The image below shows the difference between RecoverPoint, backups, and snapshots: with RecoverPoint, you can choose almost any point in time to recover from, versus very few recovery points with snapshots or backups. EMC commonly refers to this as “DVR-like functionality.”

recoverpoint 4 rpo - 2

The other thing to discuss with any replication product is how to test your DR copies, so you can be sure your failover will work when you need it. With RecoverPoint, testing a copy is a simple point-and-click operation in the GUI (you can use the CLI if you really want to).

RecoverPoint 4.0: Review of the New Features

A Completely New GUI

RecoverPoint has changed from an application to a web-based client. From my experience, it isn’t quite as intuitive as the old version. A screenshot of the new GUI is below.

recoverpoint 4 gui - 3

Deeper Integration with VMware Site Recovery Manager

There is now the ability to test or failover to any point in time. This is a huge change: previously, SRM could only use the latest copy, so RecoverPoint’s main advantage (almost any point in time) was lost when integrated with SRM.

Virtual RPAs

These are virtual machines running the RecoverPoint software. It sounds like a really neat idea, but functionality is very limited. The two biggest limitations: vRPAs are only available with iSCSI (hosts can still connect via FC, but be careful, as EMC doesn’t support the same LUN being accessed by both FC and iSCSI at the same time), and only with RP/SE, the VNX-only license variant of RecoverPoint. Also, the performance of these vRPAs depends on the amount of resources you give them.

Synchronous Replication Over IP

If you have a fast enough IP WAN connection, you can now use synchronous mode via IP. The benefit is obvious: the exact same data on your production array is on the DR array too. All of the usual considerations with synchronous replication still apply; the added round-trip latency may cause a noticeable performance impact on clients.
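A back-of-the-envelope sketch of that latency consideration, using assumed numbers (your local ack time and WAN round trip will differ):

```python
# Synchronous replication adds the WAN round trip to every write
# acknowledgement, since the DR array must confirm the write first.
local_write_ack_ms = 1.0  # assumed local array write-ack latency
wan_round_trip_ms = 5.0   # assumed RTT between production and DR sites
sync_write_ack_ms = local_write_ack_ms + wan_round_trip_ms
print(sync_write_ack_ms)  # 6.0 ms per write in this example
```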

Centralized Target

You can now have up to four sites replicating to a single site. This is a huge change, as it minimizes the cost and hardware requirements of protecting multiple sites. Prior to RecoverPoint 4.0, you would have needed four separate RecoverPoint clusters, each with its own set of RPAs, to accomplish the same thing.

Multiple Targets

You can also replicate a single source to up to four targets if you want. I don’t see this as being quite as impactful as replicating to a centralized target, but it depends on how many copies of your data you want and how many sites you want protected against failure.


Supported Splitters

Not really a new feature, more of a kick in the pants to anyone that used switch-based splitting (and those that had to learn how to install and support it): switch-based splitters aren’t supported in RecoverPoint 4.0. Your options now are the VMAX, VNX/CX, and VPLEX splitters.


Licensing

Not really a new feature either, but it’s very important to know the differences between the license versions. If you plan on using multiple-site replication, you will need RecoverPoint/EX or RecoverPoint/CL licensing.

There are some more new features, as well as performance and limitation enhancements, but the above list includes most of the big changes.

EMC VNX2 Review: New Hardware and a Big Change to Software for Next Generation VNX


This review will highlight some of the new changes and features of the next generation VNX.


The new VNX series—referred to as the VNX2, VNX MCx, and the next generation VNX—comes with a major software update and refreshed hardware. I’ll refer to it as the VNX2.

All the new models (5200, 5400, 5600, 5800, 7600, and 8000) come with an Intel Sandy Bridge processor, more cores, more RAM, and optimization for multiple cores.

Below is a graph of how the different core utilization might look with the VNX and VNX2 models.

The new hardware and software allow the VNX with MCx to achieve up to 1 million IOPS.


Active/Active LUNs

If you choose to use traditional RAID Groups, this new feature can improve performance for those LUNs by servicing I/O out of both storage processors at the same time. In its current state, this improvement probably won’t mean a lot to many customers, as the focus is shifting to pools. The exciting part is that EMC was actually able to make traditional RAID Group LUNs active/active, so maybe we will see active/active pool LUNs in the future.


FAST, FAST Cache, and Cache


FAST

FAST tiering works the same as it used to, except that it now operates on 256MB ‘chunks’ instead of 1GB ‘chunks.’ This allows for more efficient data placement. For example, if you had a pool with 1TB of SSD and 30TB of SAS/NL-SAS on a VNX, you obviously have a very limited amount of SSD space and want to make the best use of it. The VNX tiers data at a 1GB chunk level, so if only a fraction of that 1GB, say 200MB, is actually hot, 824MB would be promoted to SSD unnecessarily. On the VNX2, using a 256MB chunk, only 56MB would be promoted unnecessarily. Perfect? Obviously not. Better? Yes. Multiply this example by 10 and you’d have around 8GB of unnecessarily promoted data on the VNX, but only 560MB on the VNX2.
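The chunk-size example can be sketched as a toy calculation (this is just the arithmetic from the example, not the actual FAST promotion algorithm):

```python
# When a hot region is promoted, the whole chunk containing it comes along.
# Returns the cold MB dragged into SSD for one hot region inside one chunk.
def unnecessary_promotion_mb(hot_mb, chunk_mb):
    return chunk_mb - (hot_mb % chunk_mb)

print(unnecessary_promotion_mb(200, 1024))  # 824 MB wasted with 1GB chunks (VNX)
print(unnecessary_promotion_mb(200, 256))   # 56 MB wasted with 256MB chunks (VNX2)
```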

FAST Cache

Major improvements here as well. The warm-up time has been improved by changing the promotion behavior: while FAST Cache is less than 80% utilized, any read or write promotes the data to FAST Cache. Once it is 80% full, it returns to the original behavior of requiring data to be read or written three times before promotion.


Also on the topic of cache, the read and write caches of the storage processors no longer need their levels set manually. The cache now adjusts automatically to whatever the array thinks is best for the workload. This is great news: you no longer need to mess with high and low water marks or decide what values the read and write cache should be set to.


Deduplication and Thin LUNs

Another huge change to VNX pools is the ability to do out-of-band block based deduplication.

This sounds great; however, it comes with considerations. First, it only works on thin pool LUNs. EMC’s recommendation has always been not to use thin LUNs for LUNs that require low response times and drive a lot of IOPS. With the VNX2 performance improvements, the thin LUN performance impact may be smaller, but I haven’t seen a comparison between the two to say whether it has improved with the new code. Also, EMC recommends deduplication on block LUNs only for LUNs with less than 30% writes and small, non-sequential, random I/Os (smaller than 32KB).

The other recommendation is to test it on non-production before enabling it on production. Does that mean you make a copy of your production workload and then simulate your production workload against the copy? I’d say so, as ideally you’d want an exact duplicate of your production environment. So would you buy enough drives for a duplicate pool, with the exact same drive mix, to simulate how everything would behave? Maybe. Or you could just enable it and hope it works, but in that case you should have a very good understanding of your workload before flipping the switch.

However, if you do choose deduplication and it doesn’t work out, you can always reverse the process and return to a normal thin LUN. If you want to go back to a thick LUN, you would then need to perform a LUN migration.

Also, when using the GUI to create a LUN in a pool, ‘thin’ is now checked by default. If you’re not careful and don’t want this, you may end up over-provisioning your pool without knowing it. Thin provisioning is not a new feature, but enabling it by default is.

This is not something to take lightly. A lot of people will make LUNs until the free space runs out, but with thin LUNs the free space doesn’t run out until you actually write data to those LUNs, so you can very easily over-provision your pool without knowing it. With a 10TB pool, you could very quickly provision 20TB. It becomes a problem once that 10TB is used up, because your hosts believe they have 20TB available: when the pool fills, the hosts still think they can write data even though they can’t, which usually results in the host crashing. So you need to expand the pool before it fills up, which means monitoring it closely, and you won’t know you need to if you don’t realize you’re making thin LUNs.
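The oversubscription math is trivial but worth making explicit (illustrative numbers matching the example above; the 7TB consumed figure is mine):

```python
pool_tb = 10          # physical capacity of the pool
provisioned_tb = 20   # total size of the thin LUNs presented to hosts
consumed_tb = 7       # data actually written so far (hypothetical)

print(provisioned_tb / pool_tb)       # 2.0x oversubscribed
print(pool_tb - consumed_tb)          # 3 TB of real headroom left
# Hosts, however, believe they still have:
print(provisioned_tb - consumed_tb)   # 13 TB "free"
```

The gap between the last two numbers is exactly why close monitoring matters.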

Hot Spares

The way hot spares work in the VNX with MCx has changed quite a bit. There are no dedicated hot spare drives now: you simply don’t provision all the drives, and any blank drive can become a hot spare. Also, instead of equalizing (the process of the hot spare copying back to the replaced drive, as the VNX does), the VNX2 performs permanent sparing. When a drive fails, after 5 minutes, the data is copied from the failed/failing drive if possible, or otherwise rebuilt using parity, onto an empty drive. The 5-minute delay is new as well and gives you a window to move drives to new enclosures/slots if desired.

Since the drive doesn’t equalize, if you want everything contiguous or laid out in a specific manner, you would need to manually move the data back to the replaced drive. This is important, for example, if you have R1/0 RAID Groups/pools that you don’t want spread across Bus 0 Enclosure 0 and other enclosures. The vault drives also work a little differently: only the user data is copied to the new drive, so upon vault drive replacement you should definitely move the data back manually (if you use the vault drives for data).



Advice from the Expert, Best Practices in Utilizing Storage Pools

By | Backup, Cisco, Data Loss Prevention, EMC, How To, Log Management, Networking, Storage, VMware | No Comments

Storage Pools for the CX4 and VNX have been around a while now, but I still see a lot of people doing things that go against best practices. First, let’s start by talking about RAID Groups.

Traditionally, to present storage to a host you would create a RAID Group consisting of up to 16 disks; the most commonly used types were R1/0, R5, R6, and Hot Spare. After creating your RAID Group, you would create a LUN on it to present to the host.

Let’s say you have 50 600GB 15K disks: you could create (10) R5 4+1 RAID Groups. If you wanted (10) 1TB LUNs for your hosts, you could create a 1TB LUN on each RAID Group. Each LUN would then have the guaranteed performance of five 15K disks behind it, but at the same time, each LUN has at most the performance of five 15K disks.
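As a rough sketch of that layout's math (illustrative only; the ~180 IOPs-per-spindle figure is the general rule of thumb for spinning disks cited later in this post):

```python
disks, group_size = 50, 5            # (50) 15K disks carved into R5 4+1 groups
disk_gb, disk_iops = 600, 180        # ~180 IOPs per spindle (rule of thumb)

groups = disks // group_size                 # -> 10 RAID Groups
usable_gb = (group_size - 1) * disk_gb       # R5 4+1: one disk's worth of parity
lun_iops_ceiling = group_size * disk_iops    # five spindles behind each 1TB LUN

print(groups, usable_gb, lun_iops_ceiling)   # 10 2400 900
```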
[framed_box bgColor=”#F0F0F0″ textColor=”undefined” rounded=”true”] What if your LUNs require even more performance?

1. Create metaLUNs to keep it easy and effective.

2. Make (10) 102.4GB LUNs on each RAID Group, totaling (100) 102.4GB LUNs for your (10) RAID Groups.

3. Select the meta head from a RAID Group and expand it by striping it with (9) of the other LUNs from other RAID Groups.

4. For each of the other LUNs to expand you would want to select the meta head from a different RAID Group and then expand with the LUNs from the remaining RAID Groups.

5. That would then provide each LUN with the ability to have the performance of (50) 15K drives shared between them.

6. Once you have your LUNs created, you also have the option of turning FAST Cache (if configured) on or off at the LUN level.

Depending on your performance requirement, things can quickly get complicated using traditional RAID Groups.

This is where CX4 and VNX Pools come into play.
[/framed_box] EMC took the typical RAID Group types (R1/0, R5, and R6) and made it so you can use them in Storage Pools. The chart below shows the different options for Storage Pools. The asterisks note that the 8+1 option for R5 and the 14+2 option for R6 are only available in the VNX OE 32 release.

Now, on top of that, you can have a Homogeneous Storage Pool – a pool with only like drives, either all Flash, SAS, or NLSAS (SATA on CX4) – or a Heterogeneous Storage Pool – a Storage Pool with more than one tier of storage.

If we take our example of having (50) 15K disks using R5 for RAID Groups and apply them to pools, we could just create (1) R5 4+1 Storage Pool with all (50) drives in it. This would leave us with a Homogeneous Storage Pool, visualized below.

The chart to the right displays what happens underneath the Pool, as it creates the same structure as traditional RAID Groups. We end up with a Pool containing (10) R5 4+1 RAID Groups underneath that you don’t see; you see only the (1) Pool with the combined capacity of the (50) drives. From there you create your (10) 1TB LUNs on the pool, and it spreads the LUNs across all of the underlying RAID Groups automatically, by creating 1GB chunks and spreading them evenly across the hidden RAID Groups. You can also turn FAST Cache on or off at the Storage Pool level (if configured).
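The even spreading of 1GB chunks can be pictured with a simple round-robin sketch (a simplification of the real allocator, not EMC's actual algorithm):

```python
def distribute_chunks(lun_gb, hidden_raid_groups):
    """Deal a LUN's 1GB chunks round-robin across the hidden RAID Groups."""
    placement = [0] * hidden_raid_groups
    for chunk in range(lun_gb):
        placement[chunk % hidden_raid_groups] += 1
    return placement

layout = distribute_chunks(1024, 10)   # one 1TB LUN over 10 hidden groups
print(layout)                          # every group carries ~102-103 chunks
```

Every underlying group ends up carrying a nearly equal share, which is what gives each LUN the aggregate performance of all (50) drives.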

On top of that, the other advantage to using a Storage Pool is the ability to create a Heterogeneous Storage Pool, which allows you to have multiple tiers where the ‘hot’ data will move up to the faster drives and the ‘cold’ data will move down to the slower drives.

Another thing that can be done with a Storage Pool is creating thin LUNs. The only real advantage of thin LUNs is the ability to over-provision the Storage Pool. For example, if your Storage Pool has 10TB of space available, you could create 30TB worth of LUNs and your hosts would think they have 30TB available to them, when in reality you only have 10TB worth of disks.

The problem arises when the Storage Pool starts to get full while the hosts think they have more space than really exists: there is the potential to run out of space and have hosts crash. They may not crash, but it’s safer to assume that they will, or that data will become corrupt, because when a host tries to write data it thinks it has room for, but really doesn’t, something bad will happen.

In my experience, people typically want to use thin LUNs only for VMware, yet they also make the virtual machine disks thin. There is no real point in doing this: creating a thin VM on a thin LUN grants no additional space savings, just additional performance overhead, as there is a performance hit when using thin LUNs.

After this long intro to how Storage Pools work (and it was just a basic introduction; I left out quite a bit and could have gone into far more detail), we get to what to do and what not to do.

Creating Storage Pools

Choose the correct RAID type for your tiers. At a high level, R1/0 is for write-intensive applications, R5 is for read-heavy ones, and R6 is typically used on large NLSAS or SATA drives, where it is highly recommended due to the long rebuild times associated with those drives.
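That high-level guidance amounts to a small lookup, sketched here for illustration (my naming, not an EMC API, and no substitute for a real workload analysis):

```python
def suggest_raid(write_heavy, drive_type):
    """The high-level RAID-type guidance above, as a lookup."""
    if drive_type in ("NLSAS", "SATA"):
        return "R6"                    # long rebuilds favor double parity
    return "R1/0" if write_heavy else "R5"

print(suggest_raid(True, "SAS"))     # R1/0 for write-intensive workloads
print(suggest_raid(False, "SAS"))    # R5 for read-heavy workloads
print(suggest_raid(False, "NLSAS"))  # R6 regardless, for rebuild safety
```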

Use the number of drives in the preferred drive count options. This isn’t always required, as there are ways to manipulate how the underlying RAID Groups are created, but as a best practice stick to those drive counts.

Keep in mind the size of your Storage Pool. If you have FAST Cache turned on for a very large Storage Pool but not much FAST Cache, the cache can end up being used very ineffectively.

Also remember that the larger your Storage Pool, the more data you can lose in a disaster: a dual drive fault in R5, a triple drive fault in R6, or the loss of the right (2) disks in R1/0 in any one of the underlying RAID Groups takes the whole pool with it.

Expanding Storage Pools

Use the number of drives in the preferred drive count options. On a CX4, or a VNX prior to VNX OE 32, the best practice is to expand by the same number of drives already in the tier you are expanding, as data will not relocate within a tier. On a VNX running at least OE 32, you don’t need to double the size of the pool, because the Storage Pool can relocate data within the same tier of storage, not just up and down tiers.

Be sure to use the same drive speed and size for the tier you are expanding. For example, if you have a Storage Pool with 15K 600GB SAS drives, you don’t want to expand it with 10K 600GB SAS drives as they will be in the same tier and you won’t get consistent performance across that specific tier. This would go for creating Storage Pools as well.

Graphics by EMC

Letting Cache Acceleration Cards Do The Heavy Lifting

By | EMC, How To, Log Management, Networking, Storage, VMware | No Comments

Up until now there has not been a great deal of intelligence around SSD cache cards and flash arrays, because they have primarily been configured as DAS (Direct Attached Storage). By moving read-intensive workload off the storage array and up to the server, both individual application performance and overall storage performance can be enhanced. There are great benefits to using SSD cache cards in new ways, yet before exploring new capabilities it is important to remember the history of these products.

The biggest problem with hard drives, either local or SAN-based, is that they have not been able to keep up with Moore’s Law of transistor density. In 1965 Gordon Moore, a co-founder of Intel, observed that the number of components in integrated circuits doubled every year; in 1975 he adjusted that prediction to doubling every two years. So system processors (CPUs), memory (DRAM), system buses, and hard drive capacity have kept doubling roughly every two years, but hard drive performance has stagnated because of mechanical limitations (mostly heat, stability, and signaling reliability at higher spindle speeds). This effectively limits individual hard drives to about 180 IOPs or 45MB/sec under typical random workloads, depending on block size.

The next challenge is that, in an effort to consolidate storage and increase spindle count, availability, and efficiency, we have pulled storage out of our servers and placed that data on SAN arrays. There is tremendous benefit to this; however, it introduces new considerations. The network bandwidth is a fraction of the system bus interconnect (8Gb FC = 1GB/sec vs. PCIe 3.0 x16 = 16GB/sec). An array may have 8 or 16 front-end connections, yielding an aggregate of 8-16GB/sec, where a single PCIe slot has the same amount of bandwidth. The difference is that the array’s resources are shared among multiple servers, and each can potentially impact the others.
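Worked out explicitly (line rates simplified; 8Gb FC is taken as roughly 1GB/sec of usable throughput, as in the text):

```python
fc_gb_per_port = 1        # usable GB/sec from one 8Gb FC front-end port
front_end_ports = 16      # a well-connected midrange array
pcie3_x16_gb = 16         # GB/sec available to a single PCIe 3.0 x16 slot

array_aggregate = fc_gb_per_port * front_end_ports
print(array_aggregate)    # 16 GB/sec, shared by every host on the SAN
print(pcie3_x16_gb)       # 16 GB/sec, dedicated to one server's card
```

The whole array's front end roughly matches one PCIe slot, which is the case for putting cache next to the CPU.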

Cache acceleration cards address both the mechanical limitations of hard drives and the shared-resource conflict of storage networks for a specific subset of data. These cards utilize NAND flash (either SLC or MLC, but more on that later) memory packaged on a PCIe card with an interface controller to provide high bandwidth and throughput for read intensive workloads on small datasets of ephemeral data.

[framed_box bgColor=”#F0F0F0″ textColor=”undefined” rounded=”true”] I realize there were a lot of qualifying statements there, so let’s break them down…

  • Why read intensive? Compared to SLC NAND flash, MLC NAND flash has a much higher write penalty, making writes more costly in terms of time and the overall life expectancy of a drive/card.
  • Why small datasets? Most cache acceleration cards are fairly small in comparison to hard drives. The largest top out at ~3TB (typical sizes are 300-700GB), and the cost per GB is much, much higher than comparable hard drive storage.
  • Why ephemeral data, and what does that mean? Ephemeral data is data that is temporary, transient, or in process: things like page files, SQL Server TEMPDB, or spool directories.
[/framed_box] Cache acceleration cards address the shared-resource conflict by pulling resource intense activities back onto the server and off of the SAN arrays. How this is accomplished is the key differentiator of the products available today.


FusionIO is one of the companies that made a name for itself early in the enterprise PCI and PCIe flash cache acceleration market. Their solutions have primarily been DAS (Direct Attached Storage) solutions based on SLC and MLC NAND flash. In early 2011, FusionIO added write-through caching to their SSD cards through their acquisition of ioTurbine software, to accelerate VMware guest performance. More recently, in mid-2012, FusionIO released their ION enterprise flash array, a chassis containing several of their PCIe cards with RAID protection across the cards for availability. Available interconnects include 8Gb FC and InfiniBand. EMC released VFCache in 2012 and has subsequently released two additional updates.

The EMC VFCache is a re-packaged Micron P320h or LSI WarpDrive PCIe SSD with a write-through caching driver targeted primarily at read-intensive workloads. Subsequent releases have enhanced VMware functionality and added the ability to run in “split-card” mode, with half the card utilized for read caching and the other half as DAS. EMC’s worst-kept secret is “Project Thunder”, the productization of the XtremIO acquisition: an all-SSD array that will support both read and write workloads, similar to the FusionIO ION array.

SSD caching solutions are extremely powerful for very specific workloads. By moving read-intensive workload off the storage array and up to the server, both individual application performance and overall storage performance can be enhanced. The key to determining whether these tools will help is careful analysis of reads vs. writes and the locality of reference of the active data. If random write performance is required, consider SLC-based cards or caching arrays over MLC.


Images courtesy of “The Register” and “IRRI Images”


Review: Big Benefits to Using EMC VPLEX Local

By | EMC, Storage, Uncategorized | No Comments

EMC’s VPLEX is a very powerful tool whether it’s deployed in a Local, Metro, or Geo configuration. What everyone always talks about is the benefit of the VPLEX Metro configuration to the data center, and to be fair, it is a big benefit. You could have an entire data center go offline, yet if you’ve deployed VPLEX Metro and mirrored everything, everything continues running at the other data center. It’s possible no one would even notice that one of the sites went offline.

The below picture shows an overview of what a VPLEX Metro configuration would look like.


From my professional experience, what no one seems to talk about is the benefits of VPLEX Local. Even if you have a single array, say a CX3-40, a VPLEX Local installation will one day help you. The key is to have it installed!

The below picture shows an overview of what a VPLEX Local configuration would look like.


So… why do I like VPLEX Local so much, even if you only have a single array? First, let’s address what it will not do for you.

It will NOT provide any additional redundancy to your infrastructure, even if you set everything up properly. It is one more thing to configure, so there is always the chance of setting it up improperly.


What are the benefits I see from having a VPLEX Local installation?

  1. It has a large cache that sits in-between the host and the array.
  2. If you don’t have a DR location currently yet will have one in the future, you have more options on how to get data to the DR site. You can do VPLEX Geo, Metro, or use RecoverPoint with VPLEX.
  3. If you want to mirror your array and have array failover capability within the same datacenter, you already have the VPLEX setup and won’t have to reconfigure anything.
  4. It is a single point to connect all of your hosts as the VPLEX acts as storage to host and acts as a host to storage. If you have more than one array you don’t have to worry about connecting your hosts to different storage array vendors and getting the connection settings correct. You simply have one place to do it all.
  5. One of the biggest reasons (as if the above reasons weren’t enough) is that you never have to take downtime for a migration again. If you read this and weren’t excited, then you haven’t done as many migrations as I have. They’re long. Planning takes weeks, and the migration itself takes weeks or months. You have to involve a lot of people, and downtime is required. That downtime is usually a long process too: it is not completed in just one night, but more like four to six hours one night every week for three months!

    Usually the cutovers happen on a Friday or Saturday night, and nobody wants to do this. Occasionally things don’t go as planned and you don’t get as much done as you anticipated, or there was a setback. The setbacks could be related to systems not working properly, or to something like a key employee forgetting they had to attend a wedding that weekend, so you have to push off that week’s migration. I’ve seen it all.

Migrations are complicated, and they cost a lot of money if you hire someone to do them. As much as you trust your employees, how often do they do migrations? Once every four years? Wouldn’t you rather have the peace of mind of paying someone who does this professionally? You will need to hire someone who does migrations often, and they don’t come cheap.


How does having a VPLEX Local fix this?


Let’s assume you already have it installed, running, and presenting storage to your hosts (as you should). The next step is to buy a new array, configure it, then present the new storage to the VPLEX. After this, you go to the VPLEX and mirror everything from the old array to the new array. Once that is done, you detach the original leg of the mirror (the old array) and you’re done. No downtime, hardly any planning, and no one at your company has to work late. You also save a ton of money, as you don’t have to pay someone else to do it for you.

Tech For Dummies: Cisco MDS 9100 Series Zoning & EMC VNX Host Add – A “How To” Guide

By | Cisco, EMC, How To | No Comments

Before we begin zoning, please make sure you have cabled each HBA to both switches, ensuring the host is connected to each switch. Now let’s get started…

Configuring and Enabling Ports with Cisco Device Manager:

Once your HBAs are connected we must first Enable and Configure the ports.

1. Open Cisco Device Manager to enable port:

[iframe src=”” width=”335″ height=”435″]

2. Type in the IP address, username and password of the first switch:

[iframe src=”” width=”335″ height=”335″]


3. Right-click the port you attached FC cable to and select enable:

[iframe src=”” width=”435″ height=”255″]

Cisco allows the usage of multiple VSANs (Virtual Storage Area Network). If you have created a VSAN other than VSAN 1 you must configure the port for the VSAN you created.

1. To do this, right-click the port you enabled and select “Configure”:

[iframe src=”” width=”335″ height=”335″]

2. When the following screen appears, click on Port VSAN and select your VSAN, then click “Apply”:

[iframe src=”” width=”635″ height=”335″]

3. Save your configuration by clicking on “Admin” and selecting “Save Configuration”, once the “Save Configuration” screen pops up and requests you to select “Yes”:

[iframe src=”” width=”635″ height=”435″]

[iframe src=”” width=”335″ height=”135″]

Once you have enabled and configured the ports, we can now zone your Hosts HBAs to the SAN.

Login to Cisco Fabric Manager:

1. Let’s begin by opening Cisco Fabric Manager:

[iframe src=”” width=”235″ height=”435″]

2. Enter the FM server username and password (EMC default: admin; password), then click “Login”:

[iframe src=”” width=”335″ height=”335″]

3. Highlight the switch you intend to zone and select “Open”:

[iframe src=”” width=”635″ height=”335″]

4. Expand the switch and right-click “VSAN”, then select “Edit Local Full Zone Database”:

[iframe src=”” width=”635″ height=”435″]

Creating An FC Alias:

In order to properly manage your zones and HBAs, it is important to create an “FC Alias” for the WWN of each HBA. The following screen will appear:

1. When it does, right-click “FC-Aliases” and select “Insert”; the next screen will appear. Type in the name of the host and HBA ID, for example: SQL_HBA0. Click the down arrow, select the WWN that corresponds to your server, and finally click “OK”:

[iframe src=”” width=”635″ height=”635″]

Creating Zones:

Now that we have created FC-Aliases, we can move forward with creating zones. Zones isolate connectivity between HBAs and targets. Let’s begin creating zones by:

1. Right-clicking on “Zones”.
2. Select “Insert” from the drop down menu. A new screen will appear.
3. Type in the name of the “Zone”. For management purposes use the following format: <name of FC-Alias host>_<name of FC-Alias target>, for example: SQL01_HBA0_VNX_SPA0.
4. Click “OK”:

[iframe src=”” width=”635″ height=”635″]

Note: These steps must be repeated to zone the host’s HBA to the second storage controller. In our case, VNX_SPB1.

Adding Members to Zones:

Once the Zone names are created, insert the aliases into the Zones:

5. Right-click on the Zone you created.
6. Select “Insert”, and a new screen will appear.
7. Select “FC-Alias”, click on “…” box then select Host FC Alias.
8. Select the target FC Alias, click “OK”, and click “Add”:

[iframe src=”” width=”635″ height=”335″]

[iframe src=”” width=”635″ height=”335″]

Creating Storage Groups:

Now that we have zoned the HBAs to the array, we can allocate storage to your hosts. To do this we must create “Storage Groups”, which give hosts connected to the array access to specific LUNs in it. Let’s begin by logging into the array and creating “Storage Groups”:

1. Login to Unisphere and select the array from the dashboard:

[iframe src=””  width=”335″ height=”335″]

2. Select “Storage Groups” under the Hosts tab:

[iframe src=”” width=”635″ height=”285″]

3. Click “Create” to create a new storage group:

[iframe src=”” width=”635″ height=”385″]

4. The following screen will appear, type in the name of the storage group. Typically you will want to use the name of the application or hosts cluster name.

[iframe src=”” width=”435″ height=”235″]

5. The screen below will pop up, at this time click “Yes” to continue and add LUNs and Hosts to the Storage Group:

[iframe src=”” width=”435″ height=”235″]

6. The next screen will allow you to select either newly created LUNs or LUNs that already exist in other Storage Groups. Once you add the LUN or LUNs to the group, click on the Hosts tab to continue and add hosts:

[iframe src=”” width=”635″ height=”635″]

7. In the hosts tab, select the Hosts we previously zoned and click on the forward arrow. Once the host appears in the right pane, click OK:

[iframe src=”” width=”635″ height=”635″]

8. At this point a new screen will pop up, click YES to commit.

[iframe src=”” width=”435″ height=”285″]

Once you have completed these tasks successfully, your hosts will see new raw devices. From this point on, use your OS partitioning tool to create volumes.

Photo Credit: imagesbywestfall

Protecting Exchange 2010 with EMC RecoverPoint and Replication Manager

By | Backup, Deduplication, Disaster Recovery, EMC, Replication, Storage | No Comments

Regular database backups of Microsoft Exchange environments are critical to maintaining the health and stability of the databases. Performing full backups of Exchange provides a database integrity checkpoint and commits transaction logs. There are many tools which can be leveraged to protect Microsoft Exchange environments, but one of the key challenges with traditional backups is the length of time that it takes to back up prior to committing the transaction logs.

Additionally, database integrity should always be checked prior to backing up, to ensure the data being backed up is valid. This extended time often interferes with daily activities, so backups usually must be scheduled around other maintenance activities, such as daily defragmentation. What if you could eliminate the backup window entirely?

EMC RecoverPoint in conjunction with EMC Replication Manager can create application-consistent replicas with next to zero impact, which can be used for staging to tape, direct recovery, or object-level recovery with Recovery Storage Groups or third-party applications. These replicas leverage Microsoft VSS technology to freeze the database, RecoverPoint bookmark technology to mark the image time in the journal volume, and then thaw the database, all in less than thirty seconds – often in less than five.

EMC Replication Manager is aware of all of the database server roles in the Microsoft Exchange 2010 Database Availability Group (DAG) infrastructure and can leverage any of the members (Primary, Local Replica, or Remote Replica) to be a replication source.

EMC Replication Manager automatically mounts the bookmarked replica images to a mount host running the Microsoft Exchange tools role and the EMC Replication Manager agent. The database and transaction logs are then verified using the eseutil utility provided with the Microsoft Exchange tools. This ensures that the replica is a valid, recoverable copy of the database. The validation can take from a few minutes to several hours, depending on the number and size of the databases and transaction log files. The key is that the load from this process does not impact the production database servers. Once verification completes, EMC Replication Manager calls back to the production database to commit and delete the transaction logs.

Once the Microsoft Exchange database and transaction logs are validated, the files can be spun off to tape from the mount host, or depending on the retention requirement – you could eliminate tape backups of the Microsoft Exchange environment completely. Depending on the write load on the Microsoft Exchange server and how large the journal volumes for RecoverPoint are, you can maintain days or even weeks of retention/recovery images in a fairly small footprint – as compared to disk or tape based backup.
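The retention trade-off above can be estimated with simple math (a rough illustration, not a RecoverPoint sizing tool; the 80% usable fraction and the example workload numbers are my assumptions):

```python
def retention_hours(journal_gb, write_mb_per_sec, usable_fraction=0.8):
    """Hours of rollback images a journal can hold at a steady write rate."""
    usable_mb = journal_gb * 1024 * usable_fraction
    return usable_mb / (write_mb_per_sec * 3600)

# e.g. a hypothetical 500GB journal against a steady 5MB/sec Exchange write load
print(round(retention_hours(500, 5), 1))   # ~22.8 hours of recovery images
```

Heavier write loads shrink that window quickly, which is why journal sizing against the measured write rate matters.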

There are a number of recovery scenarios available from a solution based on RecoverPoint and Replication Manager. The images can be reverse-synchronized to the source – a fast, delta-based copy, but data-destructive. Alternatively, the database files can be copied from the mount host to a new drive and mounted as a Recovery Storage Group on the Microsoft Exchange server. The database and log files can also be opened on the mount host directly with tools such as Kroll Ontrack for mailbox and message-level recovery.

Photo Credit: pinoldy