All Posts By John Lukavsky

VNXe 3200: The Future of the VNX?


I’ve been hearing from a lot of people that the VNX will eventually be similar to the VNXe. I didn’t believe EMC would do that until they came out with the VNXe 3200, but now it is looking like it is a possibility. I’ll need to provide a quick recap of the history of the VNXe and VNX to give you an understanding of why I believe the two are converging into a single platform.


VNX and VNXe History

For the last few years EMC’s marketing strategy has been selling the concept of a Unified VNX. The rest of us know better—the GUI is unified, but the array really isn’t. Prior to the VNX there were the NS120/480/960: CLARiiON and Celerra models that were “unified”; however, when they were first released, the GUI wasn’t even unified. Later, you could upgrade to a higher DART and FLARE code and you would get Unisphere, which then unified the GUIs (the hardware was still separate, though).

Instead of getting a unified array, you could also buy either a block-only or file-only VNX/CX. For a block-only array, Storage Processors serve data via iSCSI/FC/FCoE. On the file side, you have Data Movers that serve data via CIFS/NFS/iSCSI (VNX iSCSI via Data Movers requires an RPQ from EMC to support it, and is also hidden from the GUI).

Why is the history important? Because on all VNXe models prior to the VNXe 3200 release, iSCSI was done via the file/Celerra side. Why does that matter? Because it was, and is, terrible.

Breaking It Down

Here is a breakdown of some of the challenges with previous VNXe models prior to the new release:

  1. First of all, to create an iSCSI LUN on the file side, you would need to first create your RAID Groups and LUNs, then present the LUNs to the file side. Those LUNs would be marked as disk volumes on the file side and put into a file storage pool. After that, you would create a file system, which would stripe or concatenate volumes based on the file AVM (Automatic Volume Management) algorithm. Finally, you would create your iSCSI LUN from the file system space. Long story short: there are a lot of layers, and it's not the best for performance.
  2. When replicating iSCSI LUNs via the file side, you would need an additional 150% of the LUN size free on the file system on each side, source and target. To put it in perspective, if you had a 100GB iSCSI LUN, you would need a 250GB file system on each side, which creates a lot of overhead (see the sketch after this list). There is much less overhead using thin provisioning, but that slows things down.
  3. iSCSI LUNs are limited to 2TB in size on the file side.
  4. Your only options for replication are host-based replication or Replicator V2; there is no RecoverPoint, MirrorView, SAN Copy, etc. as there is on the block side. (You can replicate your entire VNX file side with RecoverPoint, but that is a terrible configuration.)
  5. For those reasons and more, I have lacked confidence in the VNXe since the beginning and cringed when having to fix them, since it always seemed there was either a replication or network problem.
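To make the replication overhead in point 2 concrete, here is a minimal Python sketch; the LUN sizes are just examples, and the 150% figure comes from the behavior described above:

```python
def file_system_size_for_replication(lun_size_gb, overhead_pct=150):
    """Size of the file system needed on EACH side (source and target)
    when replicating a file-side iSCSI LUN: the LUN itself plus the
    extra ~150% of free space the replication needs."""
    return lun_size_gb * (1 + overhead_pct / 100)

for lun_gb in (100, 500, 2000):  # 2TB is the file-side iSCSI LUN limit
    fs_gb = file_system_size_for_replication(lun_gb)
    print(f"{lun_gb} GB LUN -> {fs_gb:.0f} GB file system on each side")
```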

The Difference

So why is the VNXe 3200 different? Well, it is different enough that I think it should have been announced as the VNXe 2, or the VNXe with MCx, or in some other big way, like the VNX2/VNX with MCx was announced.

There are some major differences between the VNXe 3200 and previous models:

  1. Fibre Channel ports are now available
  2. Better use of EFDs
    • FAST Cache can be used
    • Tiered pools can be used
  3. iSCSI now appears to be block based

Note: My only evidence for #3 is that when you put an iSCSI IP address on an Ethernet adapter, you can no longer use LACP on that port. This makes sense, since there is no LACP on the block side for iSCSI, only on the file side. Also, with FC ports now available, the Storage Processors clearly have access to the block side of the VNXe 3200, which means block iSCSI should be possible too.

vnxe chart

So if I’m right about the iSCSI, that means a few things:

  1. iSCSI replication between pre-VNXe 3200 and VNXe 3200 models won't be compatible (I asked some EMC product managers and was told they can't comment).
  2. iSCSI LUNs should be able to be replicated between a VNX and a VNXe (depending on whether they put MirrorView into the VNXe; at the very least, you should be able to run a SAN Copy pull session to migrate data off a VNXe onto a VNX).
  3. iSCSI LUNs might be able to be used with RecoverPoint (depending on whether the VNXe gets a RecoverPoint splitter, though they might allow host-based splitting with a VNXe and iSCSI if no splitter is embedded).

Conclusion

It looks like EMC is taking the VNXe in the right direction, but there are still some unknowns. Until those are resolved, it seems like a decent unified storage array if you need shared storage and either don't need to replicate your data or are using host-based replication. I'm hoping that if EMC chooses to do this same hardware unification with the VNX line, they get everything figured out on the VNXe first; it appears they're taking the steps to do so.

RecoverPoint 4.0 Review: Huge Improvements On An Already Great Product



I'm sure that most people reading this already know at least a little bit about how RecoverPoint works, and probably even know about some of the new features in 4.0. I'll do a short review of how it works, and then dive into a review of the new features.

RecoverPoint Replication: A Refresher

For those who are familiar with different replication technologies but not RecoverPoint: let me just say that it is, in my humble opinion, the best replication product on the market right now for block data. This doesn't just go for EMC arrays; RecoverPoint can be used with any supported array (HDS, NetApp, IBM, 3PAR, etc.) behind an EMC VPLEX.

Prior to RecoverPoint, you would need to use either MirrorView or SAN Copy to replicate data between EMC CLARiiON arrays, and SRDF for a Symmetrix/VMAX. These technologies are comparable to other vendors' current replication technologies. Typically, most replication technologies can run synchronously or asynchronously, and the same goes for RecoverPoint. The big difference is in the rollback capability: other technologies require clones and/or snapshots to be able to recover from more than one point in time.

The image below shows the difference between RecoverPoint, backups, and snapshots: with RecoverPoint you can recover from almost any point in time, versus very few recovery points with snapshots or backups. EMC commonly refers to this as "DVR-like functionality."

recoverpoint 4 rpo - 2

The other thing to talk about with any replication product is how you test your DR copies, so you can be sure your failover will work if you ever need it. With RecoverPoint, testing a copy is a simple point-and-click operation in the GUI (you can use the CLI if you really want to).

RecoverPoint 4.0: Review of the New Features

A Completely New GUI

RecoverPoint has changed from an application to a web-based client. From my experience, it isn’t quite as intuitive as the old version. A screenshot of the new GUI is below.

recoverpoint 4 gui - 3

Deeper Integration with VMware Site Recovery Manager

There is now the ability to test or fail over to any point in time. This is a huge change: previously, SRM could only use the latest copy, so the main advantage of RecoverPoint (almost any point in time) was lost when integrated with SRM.

Virtual RPAs

These are virtual machines running the RecoverPoint software. It sounds like a really neat idea, but it is very limited in functionality. The two biggest limitations: it is only available with iSCSI (hosts can be connected via FC, but be careful, as EMC doesn't support the same LUN being accessed by both FC and iSCSI at the same time), and it is only available with RP/SE (the VNX-only license variant of RecoverPoint). The performance of the vRPAs also depends on the amount of resources you give them.

Synchronous Replication Over IP

If you have a fast enough IP WAN connection, you can now use synchronous mode over IP. The benefit is obvious: the exact same data on your production array is on the DR array too. All of the usual considerations with synchronous replication still apply; the added round-trip latency may cause a noticeable performance impact on clients.

Centralized Target

You can now have up to four sites replicating to a single site. This is a huge change, as it minimizes the cost and hardware requirements of protecting multiple sites. Prior to RecoverPoint 4.0, you would have needed four separate RecoverPoint clusters, each with its own set of RPAs, to accomplish the same thing.

Multiple Targets

You can also replicate a single source to up to four targets if you want. I don't see this as being quite as impactful as replicating to a centralized target, but it depends on how many copies of your data you want and how many sites you want protected against failure.

recoverpoint 4 - 4

Supported Splitters

Not really a new feature; more of a kick in the pants to anyone that used switch-based splitting (and to those who had to learn how to install and support it). Switch-based splitters aren't supported in RecoverPoint 4.0. Your options now are the VMAX, VNX/CX, and VPLEX splitters.

Licensing

Not really a new feature either, but it is very important to know the differences between the versions. If you plan on using multi-site replication, you will need RecoverPoint/EX or RecoverPoint/CL licensing.

There are some more new features, as well as performance and limitation enhancements, but the above list includes most of the big changes.

EMC VNX2 Review: New Hardware and a Big Change to Software for Next Generation VNX


This review will highlight some of the new changes and features of the next generation VNX.

Overview

The new VNX series—referred to as the VNX2, VNX MCx, and the next generation VNX—comes with a major software update and refreshed hardware. I’ll refer to it as the VNX2.

All the new models (5200, 5400, 5600, 5800, 7600, and 8000) come with Intel Sandy Bridge processors, more cores, more RAM, and software optimized for multiple cores.

Below is a graph of how core utilization might look on the VNX versus the VNX2 models.

The new hardware and software allow the VNX with MCx to achieve up to 1 million IOPS.

image003

Active/Active LUNs

If you choose to use traditional RAID Groups, this new feature will potentially improve performance for those LUNs by servicing IO out both storage processors at the same time. In the end, this improvement in its current state probably won’t mean a lot to many customers, as the focus is on shifting to pools. The exciting part is that they were actually able to make traditional RAID Group LUNs active/active, so maybe we will see pool LUNs be active/active in the future.

image006

FAST, FAST Cache, and Cache

VNX FAST

It works the same as it used to, except that it now operates at a 256MB "chunk" level instead of 1GB. This allows for more efficient data placement. For example, if you had a pool with 1TB of SSD and 30TB of SAS/NL-SAS on a VNX, you obviously have a very limited amount of SSD space and you want to make the best use of it. The VNX would tier data in 1GB chunks, so if only a fraction of that 1GB, say 200MB, was actually hot, 824MB would be promoted to SSD unnecessarily. On the VNX2, using 256MB chunks, only 56MB would be promoted unnecessarily. Perfect? Obviously not. Better? Yes. Multiply this example by 10 and you'd have around 8GB of data unnecessarily promoted to SSD on the VNX, versus only 560MB on the VNX2.
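To make that chunk math concrete, here is a quick sketch of the promotion waste for both chunk sizes (assuming, as in the example, that the hot data is contiguous):

```python
import math

def wasted_promotion_mb(hot_mb, chunk_mb):
    """MB promoted to SSD beyond the actual hot data when tiering moves
    whole chunks. Assumes the hot data is contiguous, as in the example."""
    chunks_needed = math.ceil(hot_mb / chunk_mb)
    return chunks_needed * chunk_mb - hot_mb

hot = 200  # MB of genuinely hot data
for label, chunk_mb in (("VNX, 1GB chunks", 1024), ("VNX2, 256MB chunks", 256)):
    print(f"{label}: {wasted_promotion_mb(hot, chunk_mb)} MB promoted unnecessarily")
```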

FAST Cache

Major improvements here as well. The warm-up time has been improved by changing the promotion behavior: when FAST Cache is less than 80% utilized, any read or write will promote the data to FAST Cache. After it is 80% full, it returns to the original behavior of data being read or written three times before promotion.
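Here is a minimal sketch of that promotion policy as I understand it; the 80% threshold and the three-touch rule come from the behavior described above, and everything else is just illustration:

```python
def should_promote(touch_count, fast_cache_used_pct):
    """FAST Cache promotion rule on the VNX2 as described above: below 80%
    utilization, promote on the first read or write; at or above 80%,
    fall back to the classic three-touch rule."""
    if fast_cache_used_pct < 80:
        return touch_count >= 1
    return touch_count >= 3

# The same block, touched twice, under different FAST Cache utilization:
print(should_promote(touch_count=2, fast_cache_used_pct=50))  # True  (warm-up phase)
print(should_promote(touch_count=2, fast_cache_used_pct=90))  # False (needs a third touch)
```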

Cache

Also on the topic of cache, the read and write cache levels on the storage processors no longer need to be set manually. The cache now adjusts automatically to whatever the array thinks is best for its workload. This is great news: there is no longer any need to mess with high and low water marks or with what the read and write cache values should be.

image009

Deduplication and Thin LUNs

Another huge change to VNX pools is the ability to do out-of-band, block-based deduplication.

This sounds great; however, it comes with considerations. First, it only works on thin pool LUNs. EMC's recommendation has always been not to use thin LUNs for workloads that require low response times and generate a lot of IOPS. With the VNX2's performance improvements, the thin LUN penalty may be smaller, but I haven't seen a comparison between the two, so I can't say whether it has improved with the new code. Also, EMC recommends deduplication only on block LUNs with less than 30% writes and small, random, non-sequential IOs (smaller than 32KB).
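As a rough sanity check against those guidelines, here is a hedged sketch; the workload numbers you would feed it are hypothetical and would come from your own performance monitoring, and the "mostly non-sequential" cutoff is my own assumption:

```python
def dedup_candidate(write_pct, sequential_pct, avg_io_kb):
    """Rough screen against the guidance above: mostly-read, mostly
    non-sequential workloads with small (<32KB) IOs. The 20% sequential
    cutoff is my own assumption, not an EMC number."""
    return write_pct < 30 and sequential_pct < 20 and avg_io_kb < 32

# Hypothetical LUN profiles pulled from your own monitoring:
print(dedup_candidate(write_pct=15, sequential_pct=5, avg_io_kb=8))    # True
print(dedup_candidate(write_pct=45, sequential_pct=60, avg_io_kb=64))  # False
```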

The other recommendation is that you test it on non-production data before enabling it on production. Does that mean you make a copy of your production workload and then simulate your production workload against the copy? I'd say so, as ideally you'd want an exact duplicate of your production environment. So would you buy enough drives to build a duplicate pool, with the exact same drive mix, to simulate how everything would behave? Maybe. Or you could just enable it and hope it works, but that means you should have a very good understanding of your workload before enabling it.

However, if you do choose to use deduplication and it doesn't work out, you can always turn it off and go back to a normal thin LUN. If you want to go back to a thick LUN, you would then need to do a LUN migration.

Also, when using the GUI to create a LUN in a pool, "thin" is now checked by default. If you're not careful and don't want thin LUNs, you may end up over-provisioning your pool without knowing it. Thin provisioning is not a new feature, but enabling it by default is.

This is not something to take lightly. A lot of people will create LUNs until the free space runs out. With thin LUNs, free space isn't consumed until you actually write data to those LUNs, so you can very easily over-provision your pool without knowing it. If you have a 10TB pool, you could very quickly provision 20TB and not realize it. That becomes a problem when you've used up the 10TB, because your hosts think they have 20TB available. Once the pool is full, your hosts still think they can write data even though they can't, which usually results in the host crashing. So you need to expand the pool before it fills up, which means you need to monitor it closely; the problem is you might not know you need to if you don't know you're making thin LUNs.
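To put numbers on that, here is a small sketch of the oversubscription math from the example above; the 80% alert threshold is my own placeholder, not an EMC recommendation:

```python
def pool_status(pool_tb, provisioned_tb, consumed_tb, alert_at_pct=80):
    """Report how oversubscribed a thin pool is and whether the data actually
    written is close enough to the physical capacity to need an expansion."""
    return {
        "subscription_pct": provisioned_tb / pool_tb * 100,  # >100% = overprovisioned
        "used_pct": consumed_tb / pool_tb * 100,
        "expand_soon": consumed_tb / pool_tb * 100 >= alert_at_pct,
    }

# The example above: a 10TB pool with 20TB of thin LUNs carved out of it
print(pool_status(pool_tb=10, provisioned_tb=20, consumed_tb=8.5))
```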

Hot Spares

The way hot spares work on the VNX with MCx has changed quite a bit. There are no dedicated hot spare drives now: you simply don't provision all the drives, and any blank drive can become a hot spare. Also, instead of equalizing (the process of the hot spare copying back to the replaced drive, as the VNX does), the VNX2 does permanent sparing. When a drive fails, after 5 minutes the data is copied from the failing drive if possible; otherwise it is rebuilt from parity onto an empty drive. The 5-minute delay is new as well, and it allows you to move drives to new enclosures/slots if desired.

Since the drive doesn't equalize, if you want everything contiguous or laid out in a specific manner, you need to manually move the data back to the replaced drive. This matters, for example, if you have any R1/0 RAID Groups/pools: you don't want them spread across Bus 0 Enclosure 0 and other enclosures. The vault drives also work a little differently: only the user data is copied to the new drive, so upon vault drive replacement you should definitely move the data back manually (if you use the vault drives for data).

image012

Save Time and Increase Accuracy by Using Microsoft Excel to Script Repetitive Tasks


If you’re like me, when you have a lot to do and not a whole lot of time to do it, saving time on repetitive tasks certainly helps—creating scripts with Excel can do just that.

Aside from writing some old batch files, I don't really know how to script things that well. Sure, I can take someone else's script and modify it fairly easily, but making one from scratch is not what I do best.

So what I like to do is use Excel to create some scripts for me. I’ll go through an example of how to use Excel to quickly create some scripts.

Using Microsoft Excel to Create Scripts

Below is the command for a VNX to create a new LUN in a pool. Columns A, C, E, G, I, and K don't change (neither does B in this example, but it does change per VNX system you're working on); the rest do. You can copy and paste the static columns down, then fill in the data you want in the columns that change.

The important thing is that in Excel, if you start a cell with a dash (minus sign), Excel will change what is in the cell. So I use leading spaces in all the columns that don't change, and make sure there are no spaces in the columns that do change.

jon 1

After you get all your information into Excel, highlight it all except the first row with the column information. Then, copy and paste it into Notepad.

When you do this, it copies over a tab from Excel that separates each cell—as you can see below, the spacing is way off.

jon 2

The highlighted area is one character (a tab): highlight it, copy it, and then replace it with nothing.

jon 3

jon 4

After you Replace All, you will get the format you need, and you can just copy and paste it into whatever CLI you need to use. I would suggest pasting the first line only at first, to ensure you have no errors in your syntax.

jon 5

This example was specifically for an EMC VNX, but you can use this method for any repetitive task where only a portion of the information changes.
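If you would rather skip Excel entirely, the same idea is a few lines of Python. This is only a sketch: the SP address, pool name, and LUN list are made up, and the naviseccli flags shown should be checked against your own array's CLI reference before you run anything:

```python
# Hypothetical SP address, pool name, and LUN list; the naviseccli flags
# are illustrative and should be verified against your array's CLI guide.
sp_ip = "10.0.0.1"
pool = "Pool 0"
luns = [("LUN_SQL_01", 500), ("LUN_SQL_02", 500), ("LUN_VMFS_01", 1024)]  # (name, size in GB)

for name, size_gb in luns:
    print(f'naviseccli -h {sp_ip} lun -create -capacity {size_gb} -sq gb '
          f'-poolName "{pool}" -name {name}')
```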


Advice from the Expert: Best Practices in Utilizing Storage Pools


Storage Pools for the CX4 and VNX have been around a while now, but I still see a lot of people doing things that are against best practices. First, let's talk about RAID Groups.

Traditionally, to present storage to a host you would create a RAID Group, which consisted of up to 16 disks; the most typically used RAID types were R1/0, R5, R6, and Hot Spare. After creating your RAID Group, you would create a LUN on that RAID Group to present to a host.

Let's say you have 50 600GB 15K disks that you want to create RAID Groups on; you could create (10) R5 4+1 RAID Groups. If you wanted (10) 1TB LUNs for your hosts, you could create a 1TB LUN on each RAID Group. Each LUN would then have the guaranteed performance of five 15K disks behind it, but at the same time, each LUN has at most the performance of those five 15K disks.
What if your LUNs require even more performance?

1. Create metaLUNs to keep it easy and effective.

2. Make (10) 102.4GB LUNs on each RAID Group, totaling (100) 102.4GB LUNs for your (10) RAID Groups.

3. Select the meta head from a RAID Group and expand it by striping it with (9) of the other LUNs from other RAID Groups.

4. For each of the other metaLUNs, select the meta head from a different RAID Group and then expand it with LUNs from the remaining RAID Groups (see the layout sketch after this list).

5. That would then let each LUN share the performance of all (50) 15K drives.

6. Once you have your LUNs created, you also have the option of turning FAST Cache (if configured) on or off at the LUN level.
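To visualize steps 3 and 4, here is a purely illustrative sketch of which RAID Group each metaLUN's components would come from; the round-robin choice of meta heads is the point, not the exact numbering:

```python
# Illustrative layout only: 10 RAID Groups, each holding (10) 102.4GB component
# LUNs. Every metaLUN's head sits on a different RAID Group and the metaLUN
# stripes across all the others, so all (100) components get used.
NUM_RGS = 10

def metalun_layout(meta_index):
    """Return the RAID Group order for one metaLUN: head RG first, then the
    remaining RGs it stripes across."""
    head_rg = meta_index % NUM_RGS
    return [head_rg] + [rg for rg in range(NUM_RGS) if rg != head_rg]

for m in range(3):  # show the first three of the ten metaLUNs
    layout = metalun_layout(m)
    print(f"metaLUN {m}: head on RG {layout[0]}, striped across RGs {layout}")
```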

Depending on your performance requirement, things can quickly get complicated using traditional RAID Groups.

This is where CX4 and VNX Pools come into play.
EMC took the typical RAID Group types (R1/0, R5, and R6) and made it so you can use them in Storage Pools. The chart below shows the different options for Storage Pools. The asterisk notes that the 8+1 option for R5 and the 14+2 option for R6 are only available in the VNX OE 32 release.

Now, on top of that, you can have a Homogeneous Storage Pool, a pool with only like drives (all Flash, SAS, or NL-SAS; SATA on the CX4), or a Heterogeneous Storage Pool, a pool with more than one tier of storage.

If we take our example of (50) 15K disks using R5 RAID Groups and apply them to pools, we could just create (1) R5 4+1 Storage Pool with all (50) drives in it. This would leave us with a Homogeneous Storage Pool, visualized below.

The chart to the right shows what happens underneath the pool: it creates the same structure as traditional RAID Groups. We would end up with a pool containing (10) R5 4+1 RAID Groups underneath that you wouldn't see; you would only see the (1) pool with the combined capacity of the (50) drives. From there you would create your (10) 1TB LUNs on the pool, and it would spread the LUNs across all of the RAID Groups underneath automatically. It does this by creating 1GB chunks and spreading them across the hidden RAID Groups evenly. You can also turn FAST Cache on or off at the Storage Pool level (if configured).
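As a rough illustration of that chunk spreading, here is a simplified sketch; the real pool allocator is more sophisticated, this just shows the even distribution across the hidden RAID Groups:

```python
from collections import Counter

# Simplified model: one 1TB pool LUN is carved into 1GB chunks and the pool
# spreads them across its ten hidden R5 4+1 private RAID Groups.
NUM_PRIVATE_RGS = 10
lun_size_gb = 1024  # one of the (10) 1TB LUNs in the example

placement = Counter(chunk % NUM_PRIVATE_RGS for chunk in range(lun_size_gb))
for rg, chunks in sorted(placement.items()):
    print(f"private RG {rg}: {chunks} x 1GB chunks")
```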

On top of that, the other advantage to using a Storage Pool is the ability to create a Heterogeneous Storage Pool, which allows you to have multiple tiers where the ‘hot’ data will move up to the faster drives and the ‘cold’ data will move down to the slower drives.

Another thing that can be done with a Storage Pool is creating thin LUNs. The only real advantage of thin LUNs is the ability to over-provision the Storage Pool. For example, if your Storage Pool has 10TB of space available, you could create 30TB worth of LUNs, and your hosts would think they have 30TB available to them, when in reality you only have 10TB worth of disks.

The problem comes when the hosts think they have more space than they really do and the Storage Pool starts to get full: there is the potential to run out of space and have hosts crash. They may not crash, but it's safer to assume that they will crash or that data will become corrupt, because when a host tries to write data it thinks it has space for, but really doesn't, something bad will happen.

In my experience, people typically want to use thin LUNs only for VMware, yet they will also make the virtual machine disks thin. There is no real point in doing this: creating a thin VM on a thin LUN grants no additional space savings, just additional overhead, since there is a performance hit when using thin LUNs.

After this long intro to how Storage Pools work (and it was just a basic introduction; I left out quite a bit and could have gone into more detail), we get to the part about what to do and what not to do.

Creating Storage Pools

Choose the correct RAID type for your tiers. At a high level, R1/0 is for write-intensive applications, R5 is for read-heavy workloads, and R6 is typically used on large NL-SAS or SATA drives and is highly recommended on those drive types due to their long rebuild times.

Use the number of drives in the preferred drive count options. This isn't an absolute rule, as there are ways to manipulate how the underlying RAID Groups are created, but as a best practice, use that number of drives.

Keep in mind the size of your Storage Pool. If you have FAST Cache turned on for a very large Storage Pool and not a lot of FAST Cache, it is possible the FAST Cache will be used ineffectively and inefficiently.

Also, if there is a disaster, the larger your Storage Pool, the more data you can lose, for example if one of the RAID Groups underneath suffers a dual drive fault in R5, a triple drive fault in R6, or the failure of both disks in one mirrored pair in R1/0.

Expanding Storage Pools

Use the number of drives in the preferred drive count options. On a CX4 or a VNX that is pre VNX OE 32, the best practice is to expand by the same number of drives already in the tier you are expanding, since data will not relocate within the same tier. On a VNX running at least OE 32, you don't need to double the size of the pool, as the Storage Pool can relocate data within the same tier, not just up and down tiers.

Be sure to use the same drive speed and size for the tier you are expanding. For example, if you have a Storage Pool with 600GB 15K SAS drives, you don't want to expand it with 600GB 10K SAS drives: they will end up in the same tier and you won't get consistent performance across that tier. The same goes for creating Storage Pools.

Graphics by EMC


Review: Big Benefits to Using EMC VPLEX Local


EMC's VPLEX is a very powerful tool, whether it's deployed in a Local, Metro, or Geo configuration. What everyone always seems to talk about is the benefit of the VPLEX Metro configuration to the data center, and to that point, it is a big benefit. You could have an entire data center go offline, yet if you've deployed VPLEX Metro and mirrored everything, everything would keep running in the other data center. It would be possible for no one to even notice that one of the sites went offline.

The below picture shows an overview of what a VPLEX Metro configuration would look like.

EMC VPLEX

From my professional experience, what no one seems to talk about is the benefits of VPLEX Local. Even if you have a single array, say a CX3-40, a VPLEX Local installation will one day help you. The key is to have it installed!

The below picture shows an overview of what a VPLEX Local configuration would look like.

EMC VPLEX

So… why do I like VPLEX Local so much even if you only have a single array? Well, let’s address why it’s not going to help you.

It will NOT provide any additional redundancy to your infrastructure, even if you set everything up properly. It is another thing to configure, so there is always the chance of setting it up improperly.

What are the benefits I see from having a VPLEX Local installation?
  1. It has a large cache that sits in-between the host and the array.
  2. If you don't have a DR location currently but will have one in the future, you have more options for how to get data to the DR site. You can do VPLEX Geo, Metro, or use RecoverPoint with VPLEX.
  3. If you want to mirror your array and have array failover capability within the same datacenter, you already have the VPLEX setup and won’t have to reconfigure anything.
  4. It is a single point to connect all of your hosts, as the VPLEX presents storage to the hosts and acts as a host to the storage. If you have more than one array, you don't have to worry about connecting your hosts to different storage array vendors and getting the connection settings correct; you simply have one place to do it all.
  5. One of the biggest reasons (as if the above reasons weren't enough) is that you never have to take downtime for a migration again. If you read this and weren't excited, then you haven't done as many migrations as I have. They're long. Planning takes weeks, and the migration itself takes weeks or months. You have to involve a lot of people, and downtime is required. The downtime is usually a long process, as it is not completed in just one night, but more like four to six hours one night every week for three months!

    Usually the cutovers happen on a Friday or Saturday night, and nobody wants to do that. Occasionally things don't go as planned and you don't get as much done as you anticipated, or there is a setback. The setbacks could be related to a system not working properly, or to something like a key employee forgetting they had to attend a wedding that weekend, so you have to push off that week's migration. I've seen it all.

Migrations are complicated, and it costs a lot of money to hire someone to do them. As much as you trust your employees, how often do they do migrations, once every four years? Wouldn't you rather have the peace of mind of paying someone who does this professionally? You will need to hire someone that does migrations often, and they don't come cheap.

How does having a VPLEX Local fix this?

Let's assume you already have it installed and running, and your hosts have storage presented from it (as they should). The next step is to buy a new array, configure it, and present the new storage to the VPLEX. After this, you go to the VPLEX and mirror everything from the old array to the new array. Once that is done, you detach the original leg of the mirror (the old array) and you're done. No downtime, hardly any planning, and no one from your company has to work late. You also save a ton of money, since you don't have to pay someone else to do it for you.
