Fun With DARTs & Replication Between A New VNXe And A Celerra NS-120

November 4, 2011 · EMC, How To, Replication

A couple of weeks ago I had some fun configuring a new VNXe to replicate with a Celerra NS-120. Here is a transcript of how I got it to work and some of the oddities I encountered:

1. I started out by checking the ESM (EMC Support Matrix): the configuration is supported as long as the Celerra is running DART 6.0. I then upgraded the NS20 (CX3-10 back-end running the latest [and last] R26 code) to DART 6.0.41.

2. Moving along, I set up the interconnects using the “Unisphere” Hosts Replication Connection wizard. I validated the interconnects using nas_cel -interconnect -list on the NS20.
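
As a sanity check, the interconnect listing from the NS20 control station looked something like this (the names, IDs, and column layout here are from memory and illustrative, not my actual values):

    # List the configured Data Mover interconnects on the NS20
    nas_cel -interconnect -list
    id     name            source_server  destination_system  destination_server
    20003  NS20_to_VNXe    server_2       VNXe01              server_2

If I recall correctly, there is also a nas_cel -interconnect -validate option (taking the interconnect name or id=<id>) that confirms the interconnect actually passes traffic.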

3. I had some initial issues with routing that were quickly resolved and the interconnects looked good to go.

4. This is where it gets dicey: I started out using the wizard on the NS20 Unisphere to replicate a filesystem. Apparently, the NS20 can’t set up replication destination storage and doesn’t seem to be able to enumerate/read the remote filesystem names.

I was able to see a list of remote filesystem IDs though, so this started me thinking: what if I could log in to the remote “Celerra” (read: the DART instance on the VNXe) to decode which filesystem correlated to which ID, i.e. run nas_fs -list?

I tried SSH’ing to the VNXe and saw that the SSH service was shut down, so I started poking around in the service options and realized that I could enable SSH. I did that, SSH’ed to the box, and logged in as “service”, because admin didn’t work. From there I su’d to nasadmin and was prompted for the root password. I tried nasadmin, the service password, and a couple of other passwords I knew of, but it timed out after three tries. However, I was left in a nasadmin context anyway, so I ran nas_fs -list and got precisely what I was looking for: the mapping of filesystem IDs to filesystem names.
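
Roughly, that session looked like this (the hostname and the filesystem entries are made up for illustration):

    # SSH to the VNXe and log in as the service account
    ssh service@vnxe01
    # Switch to the nasadmin context; the root password prompt
    # errored out three times but left me at a nasadmin shell anyway
    su nasadmin
    # Map filesystem IDs to names
    nas_fs -list
    id   inuse type acl volume name      server
    30   y     1   0   145    dst_fs01  1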

5. Time services: for replication to work, the clocks on the source and destination datamovers have to be within ten minutes (preferably five) of each other. I thought I would proactively double-check and set NTP on the VNXe “server_2”; however, I was shut down, because that requires root permissions. Luckily the time was pretty close, so I was good there. (Notice: the datamover was set to UTC, probably by design, which required conversion to local time.)
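
With root access, the check and the fix would look something like this from the CLI (the NTP server address is a placeholder):

    # Show the current date/time on the datamover (mine reported UTC)
    server_date server_2
    # Point the datamover at an NTP server and start time sync (needs root)
    server_date server_2 timesvc start ntp 10.0.0.1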

6. By this time I realized that the Celerra Manager/Unisphere wizards were not likely to work, so I logged on to the NS20 and ran the nas_replicate -create command shown below, which errored out after a few minutes.
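
Here is the session-create command as I ran it from the NS20 control station (the filesystem and interconnect IDs are specific to my environment; yours will differ):

    # Create a replication session named FS_Name: source fs id=552 on the NS20,
    # destination fs id=30 on the VNXe, over interconnect id=20003,
    # with a ten-minute max time out of sync
    nas_replicate -create FS_Name -source -fs id=552 \
      -destination -fs id=30 -interconnect id=20003 \
      -max_time_out_of_sync 10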

I did some digging on iView and found Primus article emc263903, which referenced logging in as root to run the command. OK, I have the NS20 root password, so I did that and got the error message “destination filesystem not read-only”. I had created the “Shared Folder” (otherwise known to us old-timers as a “File System”) as a replication target; don’t you think that if you are creating a replication target, the wizard would mount it read-only?

7. OK, back on the VNXe through SSH as nasadmin: I run server_umount and am prompted for the root password again; it errors out three times, and then, check: it’s unmounted! I run server_mount with -o ro, get prompted for the root password, and error out three more times.
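
For those following along, the two commands look roughly like this (the filesystem name and mount point are placeholders, not my actual values):

    # Unmount the destination filesystem from the VNXe datamover
    server_umount server_2 /dst_fs01
    # Remount it read-only, as a replication destination must be
    server_mount server_2 -o ro dst_fs01 /dst_fs01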

8. Back on the NS20, I re-run the nas_replicate command and it errors again, this time with a “destination not empty” message. I used the overwrite option (see below), because when I provisioned the destination filesystem, the minimum size that the wizard presented for the save was the same size as the destination filesystem …
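
If memory serves, the overwrite flag on nas_replicate -create in DART 6.0 is -overwrite_destination; treat the exact flag name as an assumption and check the man page on your release:

    # Re-create the session, discarding the existing contents
    # of the destination filesystem (assumed flag name)
    nas_replicate -create FS_Name -source -fs id=552 \
      -destination -fs id=30 -interconnect id=20003 \
      -max_time_out_of_sync 10 -overwrite_destination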

Finally, success: the filesystem is replicating!

Photo Credit: calypso_dayz