ONTAP 8.3 Update: QOS Commands, Consider Using Them Today


NetApp has included some very powerful troubleshooting commands in the 8.3 update that I'd like to bring to your attention: the qos statistics command and its subcommands. Prior to 8.3, we used the dashboard command to view statistics at the cluster and node level. The problem with dashboard is that it reports cluster-level statistics, so it can be difficult to isolate problems caused by a single object. The advantage of the qos statistics commands is that we now have the ability to target specific objects in a very granular fashion.
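
As a quick illustration, here is a hedged sketch of how the new subcommands can drill down to a single volume or workload (the SVM and volume names are hypothetical, and the exact options may vary slightly by 8.3 release):

  cluster1::> qos statistics volume performance show -vserver vs1 -volume vol_sql01
  cluster1::> qos statistics volume latency show -vserver vs1 -volume vol_sql01
  cluster1::> qos statistics workload performance show

The first two commands report IOPS, throughput, and latency for just that one volume in real time, while the last one gives a rolling view of every workload on the cluster so you can spot the noisy neighbor.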

Cloud Computing and Farming from NetApp Insight

An Unexpected Finding at NetApp Insight 2014


Another year of NetApp Insight has come and gone, and I would like to share some very exciting news regarding the many useful updates to Data ONTAP 8.3. However, I will have to wait a few more weeks until NetApp lifts the press embargo. Instead, I want to take some time and share with you something I found extremely interesting from NetApp partner Fujitsu.

NetApp has partnerships with many players in tech, but the story presented at one of the general sessions by Fujitsu’s Bob Pryor, Head of Americas and Corporate Vice President, about how Japanese farmers are using SaaS in the cloud really had a profound effect on me. Not only because cloud computing and farming seem like such an unlikely pairing, but also because, having grown up on a dairy farm, I’m always interested in how farmers are using technology to drive efficiency in their daily lives. For example, robots and GPS devices did not exist 30 years ago in the agricultural space, right when I needed them most to help me with my chores.

How Can Cloud Computing and Farming Work Together?

Cloud and farming? On paper, nothing could seem more unrelated. After all, farmers use sweat and brawn, machinery, and long hours to accomplish their tasks, but they’re also running a business and need to collect data on crops, commodity prices, livestock, and weather. Millions of potential data points for analysis could open up new ways of discovering higher yields, healthier livestock, and ultimately, greater profits. Kind of sounds like a “Big Data” opportunity to me. I encourage you to take a look at Akisai, Fujitsu’s SaaS platform aiding Japanese farmers today.

“Fujitsu’s new service is the first of its kind worldwide that has been designed to provide comprehensive support to all aspects of agricultural management, such as for administration, production, and sales in open field cultivation of rice and vegetables, horticulture, and in animal husbandry. With the on-site utilization of ICT as a starting point, the service aims to connect distributors, agricultural regions, and consumers through an enhanced value chain.” – Fujitsu

As more of us move into large cities and as third-world countries continue to evolve from agrarian to manufacturing- and services-based economies, it’s more important now than ever to understand where our food comes from, how it’s produced, and how it affects us as consumers. If technology can play a more dominant role in feeding the world with less land, fewer resources, and less time, and can provide better economies of scale to the farmer, then I believe Fujitsu is onto something here.

Please visit Fujitsu’s website for further information.



Adventures In cDOT Migrations: Part Two


Before we start: for those just joining the adventures, here’s Part One.

Part Two: Insights From The Field

When it comes to 7-Mode to cDOT transitions, we are seeing the trend of host-based migration continuing to be king for databases and virtual environments. However, for those customers using SnapMirror, where re-seeding those primary and secondary volume relationships is not an option due to WAN limitations, the 7MTT (7-Mode Transition Tool) is becoming the workhorse of our transition engagements.

It’s critical going into this process to understand the capabilities and limitations of the tool. Let’s take a look at some of the technical terms around the 7MTT.

  • A Project is a logical container that allows you to setup and manage the transition of a group of volumes.
  • A Subproject contains all of the configuration data around volume transitions, i.e. SVM mapping, volume mapping and SnapMirror schedule.
  • A Transition Peer Relationship is the authorization mechanism for managing the SnapMirror relationships between 7-Mode and cDOT systems.

One of the limitations of the 7MTT is that only twenty volumes can be managed inside a project container. There is typically some planning and strategy around grouping volumes together, either by use case or by RPO/RTO. The look and feel of the transition is very SnapMirror-like: it follows a baseline, incremental, and cutover format. There is also a CLI, but using the GUI is the recommended approach.
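
Under the covers, the 7MTT drives SnapMirror relationships of a special transition (TDP) type from the cDOT side. As a rough, hedged sketch of what that looks like at the CLI (the filer, SVM, and volume names are hypothetical, and the exact syntax may vary with your ONTAP release):

  cluster1::> vserver peer transition create -local-vserver vs1 -src-filer-name filer7mode
  cluster1::> snapmirror create -source-path filer7mode:vol_cifs01 -destination-path vs1:vol_cifs01 -type TDP
  cluster1::> snapmirror initialize -destination-path vs1:vol_cifs01

In practice the tool creates and manages these relationships for you, which is one more reason the GUI is the recommended approach.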

As with any services engagement, due diligence leads to success, and these 7-Mode to cDOT transitions require careful planning and collaboration, as they can take weeks to months to complete depending on the size of the environment.

“These 7-Mode to cDOT transitions require careful planning and collaboration as they can take weeks to months”

The 7-Mode Transition Tool 1.2 Data and Configuration Transition Guide for Transitioning to Clustered Data ONTAP® can be found here with a NOW account login.

Important note: You should be aware of the versions of Data ONTAP operating in 7-Mode that are supported for transitioning to clustered Data ONTAP version 8.2.0 or 8.2.1. You can transition volumes from systems running Data ONTAP 7.3.3 and later. For the updated list of Data ONTAP versions supported for transition by the 7-Mode Transition Tool, see the Interoperability Matrix.

Photo credit: thompsonrivers via Flickr

Adventures in cMode Migrations: Part One


On paper, a 7-Mode to Clustered Data ONTAP (“cDOT”) migration can seem fairly straightforward. In this series, I will discuss some scenarios in terms of what can be very easy vs. what can be extremely difficult. (By “difficult” I’m mostly referring to logistical and replication challenges that arise in large enterprise environments.)

The Easy First!

Tech refresh in one site:

Bob from XYZ corp is refreshing his 7-Mode system and has decided to take advantage of the seamless scalability, non-disruptive operations, and proven storage efficiencies by moving to the cDOT platform. Hurray, Bob! Your IT director is going to double your bonus this year because of the new uptime standard you’re going to deliver to the business.

Bob doesn’t use SnapMirror today because he only has one site and does NDMP dumps to his tape library via Symantec’s Replication Director. Plus 10 points to Bob. Managing snapshot backups without a catalogue can be tricky. Which Daily.0 do I pick? Yikes! Especially if he gets hit by a bus and the new admin has to restore the CEO’s latest PowerPoint file because Jenny in accounting opened up a strange email from a Nigerian prince asking for financial assistance. Bad move, Jenny! Viruses tank productivity.

Anyway …

Bob’s got a pair of shiny new FAS8040s in a switchless cluster, the pride of NetApp’s new mid-range fleet. He’s ready to begin the journey that is cDOT. Bob’s running NFS in his VMware environment, CIFS for his file shares, and about 20 iSCSI LUNs for his SQL DBA. Bob also has 10G switching and servers from one of the big OEMs. So no converged network yet, but he’ll get there with next year’s budget, using all of the money he’s going to save the business with the lack of downtime this year! Thanks, cDOT.


So what’s the plan of attack? After the new system is up and running, from a high level it would look something like this.

1. Analyze the Storage environment

a. Detail volume and LUN sizes (Excel spreadsheets work well for this)
b. Lay out a migration schedule
c. Consult the NetApp Interoperability Matrix to check Fibre Channel switch, HBA firmware, and host operating system compatibility
d. Build the corresponding volumes on the new cDOT system (a quick CLI sketch follows this list)
e. Install the 7-Mode Transition Tool on a Windows 2008 host
f. Use the tool to move all file-based volumes
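
Step (d) is where the new cluster starts to take shape. As a hedged sketch of what building out Bob’s volumes might look like on the cDOT CLI (the SVM, aggregate, and volume names are hypothetical):

  cluster1::> volume create -vserver vs_prod -volume vol_vmware_nfs01 -aggregate aggr_data01 -size 2TB -junction-path /vol_vmware_nfs01
  cluster1::> volume create -vserver vs_prod -volume vol_sql01 -aggregate aggr_data01 -size 1TB
  cluster1::> lun create -vserver vs_prod -path /vol/vol_sql01/lun_sql01 -size 500GB -ostype windows_2008

Keeping one volume per workload makes the later 7MTT or host-based cutovers cleaner, since each volume can be scheduled and cut over independently.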

That wasn’t so hard. On paper, this scenario may seem somewhat trivial, but I can assure you it really is this straightforward. Next time, we are going to crank up the difficulty level a bit. We will add in multiple sites, a Solaris environment (or insert any other esoteric block OS, HP-UX anyone?), as well as the usual NAS-based subjects.

See you next time for Part Two.

Photo credit: thompsonrivers via Flickr

What Happens When You Poke A Large Bear (NetApp SnapMirror) And An Aggressive Wolf (EMC RecoverPoint)?


This month I will take an objective look at two competitive data replication technologies – NetApp SnapMirror and EMC RecoverPoint. My intent is not to create a technology war, but I do realize that I am poking a rather large bear and an aggressive wolf with a sharp stick.

A quick review of both technologies:

SnapMirror:

  • NetApp’s controller-based replication technology.
  • Leverages the snapshot technology that is fundamentally part of the WAFL file system.
  • Establishes a baseline image, copies it to a remote (or local partner) filer, and then updates it incrementally in a semi-synchronous or asynchronous (scheduled) fashion.

RecoverPoint:

  • EMC’s heterogeneous, fabric-layer, journaled replication technology.
  • Leverages a splitter driver at the array controller, fabric switch, and/or host layer to split writes from a LUN or group of LUNs to a replication appliance cluster.
  • The split writes are written to a journal and then applied to the target volume(s) while preserving write-order fidelity.

SnapMirror consistency is based on the volume or qtree being replicated. If the volume contains multiple qtrees or LUNs, those will be replicated in a consistent fashion. To get multiple volumes replicated in a consistent fashion, you will need to quiesce the applications or hosts accessing each of the volumes, take snapshots of all the volumes, and then SnapMirror those snapshots. An effective way to automate this process is with SnapManager.
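
Done by hand on a 7-Mode system, that sequence looks roughly like the following hedged sketch (the filer and volume names are hypothetical, and the application quiesce/resume steps depend on your application):

  filer1> snap create vol_db consistent_snap
  filer1> snap create vol_logs consistent_snap
  filer2> snapmirror update vol_db_mirror
  filer2> snapmirror update vol_logs_mirror

Because volume SnapMirror carries the source volume’s snapshots along with it, the coordinated snapshots arrive on the destination as a consistent recovery point; SnapManager essentially scripts this quiesce-snapshot-update cycle for you.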

After the initial synchronization, SnapMirror targets are accessible as read-only. This provides an effective source volume for backups to disk (SnapVault) or tape. The targets are not read/write accessible, though, unless the SnapMirror relationship is broken or FlexClone is leveraged to make a read/write copy of the target. The granularity of replication and recovery is based on a schedule (standard SnapMirror) or on semi-synchronous continuous replication.
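
For example, a writable copy of a mirror target can be spun up for testing without disturbing the relationship. A hedged 7-Mode sketch (the clone, parent volume, and snapshot names are hypothetical):

  filer2> vol clone create vol_db_clone -b vol_db_mirror nightly.0

The clone shares blocks with the parent snapshot, so it consumes almost no additional space until it is written to, and it can be destroyed when the test is done.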

When failing over, the SnapMirror relationship is simply broken and the volume is brought online. This makes DR failover testing and even site-to-site migrations a fairly simple task. I’ve found that many people use this functionality as much for migration as data protection or Disaster Recovery. Failing back to a production site is simply a matter of off-lining the original source, reversing the replication, and then failing it back once complete.
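
The mechanics behind that are pleasantly short. A hedged 7-Mode sketch of failover and the reverse resync used for failback (the filer and volume names are hypothetical):

  filer2> snapmirror quiesce vol_db_mirror
  filer2> snapmirror break vol_db_mirror
  (run production on filer2 until the original site is ready)
  filer1> snapmirror resync -S filer2:vol_db_mirror filer1:vol_db

The break makes the destination writable; the resync run from the original source side reverses the direction of replication so changes made during the outage flow back before the final cutover home.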

In terms of interface, SnapMirror is traditionally managed through configuration files and the CLI. However, the latest version of OnCommand System Manager includes an intuitive, easy-to-use interface for setting up and managing SnapMirror connections and relationships.
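
For the traditionalists, a hedged sketch of the 7-Mode approach: initialize the baseline from the destination, then let an /etc/snapmirror.conf entry drive the scheduled updates (the filer and volume names are hypothetical):

  filer2> snapmirror initialize -S filer1:vol_db filer2:vol_db_mirror

  # /etc/snapmirror.conf on the destination filer
  # source         destination            args   minute hour day-of-month day-of-week
  filer1:vol_db    filer2:vol_db_mirror   -      0      *    *            *

The schedule fields read like cron, so the entry above would update the mirror hourly on the hour.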

RecoverPoint is like TiVo® for block storage. It continuously records incoming write changes to individual LUNs or groups of LUNs in a logical container aptly called a consistency group. The writes are tracked by a splitter driver that can exist on the source host, in the fabric switch, or on a Clariion (VNX) or Symmetrix (VMAXe only today) array. The host splitter driver enables replication between non-EMC and EMC arrays (check the EMC Support Matrix for the latest support notes).

The split write I/O with RecoverPoint is sent to a cluster of appliances that package, compress, and de-duplicate the data, then send it over a WAN IP link or a local Fibre Channel link. The target RecoverPoint Appliance then writes the data to the journal. The journaled writes are applied to the target volume as time and system resources permit, and they are retained as long as there is capacity in the journal volume, which makes it possible to rewind the LUN(s) in the consistency group to any retained point in time.

In addition to remote replication, RecoverPoint can also replicate to local storage. This option is available as a standalone feature or in conjunction with remote replication.

RecoverPoint has a standalone Java application that can be used to manage all of the configuration and operational features. There is also integration for management of consistency groups by Microsoft Cluster Service and VMware Site Recovery Manager. For application-consistent “snapshots” (RecoverPoint calls them “bookmarks”), EMC Replication Manager or the KVSS command line utilities can be leveraged. Recently a “light” version of the management tool has been integrated into the Clariion/VNX Unisphere management suite.

So, sharpening up the stick … NetApp SnapMirror is a simple-to-use tool that leverages the strengths of the WAFL architecture to replicate NetApp volumes (file systems) and update them either continuously or on a scheduled basis using the built-in snapshot technology. Recent enhancements to System Manager have made it much simpler to use, but it is limited to NetApp controllers. It can replicate SAN volumes (iSCSI or FC LUNs) in NetApp environments, as they are essentially single files within a volume or qtree.

RecoverPoint is a block-based SAN replication tool that splits writes and can recover to any point in time that exists in the journal volume. It is not built into the array, but is a separate appliance that exists in the fabric and leverages array-, fabric-, or host-based splitters. I would make the case that RecoverPoint is a much more sophisticated block-based replication tool that provides a finer level of recoverable granularity, at the expense of being more complicated.

 Photo Credit: madcowk

Interview: NetApp Storage Powers New Business for Chicago-based Web Services Provider (Video)


Earlier this month, our Storage Practice Manager, Shawn Wagner, sat down with Karl Zimmerman, President and Founder of Steadfast Networks, to discuss the NetApp storage and backup solution we recently designed and deployed for their business’s datacenter in Chicago.

With the cameras rolling, their conversation addressed the reasons for deciding on NetApp, relative to other manufacturers’ technologies or developing an in-house solution, and how that decision and the NetApp technology are enabling Steadfast to drive new business and offer expanded solutions to their existing customers.


What was the primary business driver for you to move forward with the NetApp technology?

Well, we’ve been looking to build more of a sort of cloud solution—we don’t necessarily like to call it “cloud,” but that’s basically the solution. And we needed to back that up with a very redundant storage solution that we knew we could trust. We generally like building things in house; we’re very hands on ourselves. But in house, we couldn’t really build a solution that developed the performance and reliability that we were able to get with NetApp, and NetApp was able to do it for us at an affordable price.

With respect to IDS in general, how do you see our partnership moving forward?

With IDS specifically, we appreciate that it’s someone who’s always there we can contact. There are always people we can get in touch with if we need help on certain aspects of the NetApp infrastructure that we don’t understand—we know that support is there. If we run into problems, which we have already, you guys are there to help us through those. It’s nice just knowing we have a strong, reliable partner that we can depend on.

I’d like to end on just a couple of forward thinking questions, one of which is where do you see your business going over the next 18 to 24 months now having made this investment in NetApp?

We’ve seen a higher demand in general for cloud applications where people are moving things away from their own internal datacenters or their own in-office solutions, and moving them to a datacenter environment—so, of course, it could be accessed across their entire company, or have the additional ability of scale. So this NetApp solution allows us a lot to focus on that segment and we can then offer the scalability of a virtualized environment, while having the cost savings and additional processing power you can achieve with dedicated systems and collocation as well. With NetApp, we now serve a wide variety of needs across a broad spectrum of customers at basically whatever price point or scalability they need. And that of course is then all backed up with a NetApp SAN for the storage side—it’s a common storage system for all these systems. It lets us easily scale and expand for whatever the customer might need.

And one final question: in respect to business continuity and disaster recovery, what possibilities have opened up for you and your clients with NetApp?

NetApp is certainly helpful with the snapshots and also being able to replicate the data to multiple sites; it makes it a lot easier, because we do have multiple data centers. Being able to replicate that data and back it up easily with the snapshots helps us to easily integrate customer redundancy and disaster recovery plans into our existing infrastructure—it’s certainly something that the home-built solutions we were looking at doing didn’t allow us to provide so easily. And it’s certainly one of the reasons along with the reliability and the speed of the NetApp solution that led us to that decision.

“You’re Fired!” Why Snapshots + Replication (Donald) Trump Your Old Backup Strategy


Lately, the question on everyone’s mind has been: is it possible to replace your aging backup strategy with array-based snapshot and replication technology? The inevitable follow-up to that question: why is this so hard to swallow by so many of us, and why do we have a hard time accepting it? It all boils down to what we are used to and what we are comfortable with doing. Change is hard to accept and even harder to implement. I’m hoping with further explanation, I can highlight the benefits of moving away from antiquated backup technology.

First let’s delve into the traditional backup strategy:

>> Incrementals
>> Differentials
>> Nightlies
>> Weeklies
>> Monthlies
>> Auto-loaders
>> Offsite
>> Retention periods

These are all terms we are familiar with and use on a daily basis. Traditional backups are a huge pain in the $#%, but they have to be done because the business dictates it. The gist of it is this: traditional backup strategies have been around since the 90s. They cut into production hours; they require dedicated server and backup hardware; and we are lucky if the backups actually get done most of the time. Lastly, let’s be honest, when it comes time to do a restore, our fingers are crossed and we hit the RESTORE button with a hope and a prayer that things will actually work.

The industry’s fear of snapshot technology boils down to a few reasons:

  • Snapshots don’t protect against drive failures.
  • Snapshots cannot be moved offsite, or offloaded onto physical media.
  • Data that is not stored centrally on my array is susceptible to loss if it’s not getting backed up.
  • Too many snapshots will alter the performance of the array.
  • Snapshots are not integrated into my applications.
  • Snapshots take up too much disk space.
Now, let me break down these fears and sway you towards snapshots…

Snapshots don’t protect against drive failures.
This depends on two things: 1) how your LUNs or aggregates are carved out, and 2) whether you are replicating your snapshots to a secondary array. The easiest way to overcome a drive failure is to use a RAID technology that supports more than a single drive failure at any given time.
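
On a NetApp array, for example, that usually means building aggregates with RAID-DP, which survives two simultaneous disk failures per RAID group. A hedged cDOT sketch (the aggregate name and disk count are hypothetical):

  cluster1::> storage aggregate create -aggregate aggr_data01 -diskcount 24 -raidtype raid_dp

Pair that with snapshot replication to a second array and you are covered against both drive-level and array-level failures.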

Snapshots cannot be moved offsite, or offloaded onto physical media.
Replicating your data to a secondary array can kill two birds with one stone. It can help protect you against drive failures or total disasters on your primary array. The second bird is that certain manufacturers support NDMP, or Network Data Management Protocol, which is basically an open standard for offloading centrally attached storage devices directly to tape. Now why would I bring that up when I am trying to get you away from tape? Because there is still a true business case for it, long-term retention, that your organization might not be able to get away from.

What about my data that is not stored centrally on my array? Isn’t it susceptible to loss if it is not getting backed up?
Two things can help you here: Microsoft VSS and OSSV (Open Systems SnapVault). For this discussion, I will spew forth about OSSV. This is a technology developed by NetApp to offload data from a server with locally attached storage and allow snapshots to be taken at the array level. These snapshots can then be used to rebuild servers, and can even aid in bare-metal restores if third-party agents are used.
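
As a hedged sketch of how OSSV looks once the agent is installed on the server, the relationship is started from the SnapVault secondary filer and then protected on a schedule (the host, path, volume, and snapshot names are hypothetical, and exact syntax may vary by OSSV release):

  filer1> snapvault start -S winsrv01:C:\Data /vol/sv_ossv/winsrv01_data
  filer1> snapvault snap sched -x sv_ossv sv_nightly 30@mon-fri@0

From there, the server’s locally attached data lands in array-level snapshots just like any other SnapVault destination.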

Too many snapshots will alter the performance of the array.
You know, I can’t deny this point, but it also depends on the manufacturer. It boils down to the file system on the array and how snapshots are written. If snapshots are done via copy-on-write technology, the more snapshots you take and keep online, the more performance drops. We have seen up to 60% performance degradation in the field on arrays using copy-on-write technology. Arrays that use WAFL and pointer-based snapshot technology see only a very slight performance degradation, and only when the number of snapshots retained online runs into the hundreds.

Snapshots are not integrated into my applications.
Again, another point I cannot deny for the majority of array manufacturers. Usually, to get an aggressive snapshot-based RPO or RTO, you need some kind of appliance connected to the array that is specific to that application. High-five to NetApp here for their snapshot-based application integration with SQL, Oracle, Exchange, and virtual environments. With a simple licensed add-on, snapshots can be used to meet even the most demanding RPO and RTO requirements for an organization.

Snapshots take up too much disk space.
Once again, this depends on the manufacturer. If it is a copy-on-write technology, then yes, snapshotting your array will take up a considerable amount of disk space, and the number of snapshots you can keep online is severely limited. A pointer-based snapshot solution will allow you to keep a tremendous number of snapshots online at any time while consuming very little space on your array. Think about being able to keep a year’s worth of snapshots online for a 10 TB dataset while using only 2 TB of space to retain them.

I hope I’ve helped you to understand how snapshots can be used to replace aging tape and backup environments. Please feel free to drop a comment below if you would like to dive deeper into how snapshot-based technology can help you in your fight to replace traditional tape backup.

Photo Credit: daveoleary

Storage Industry Update: The Consolidation Trend (Best-of-breed vs. Single-stack)


The storage industry has seen considerable consolidation lately. It started a couple years ago with HP acquiring LeftHand and Dell acquiring EqualLogic. More recently, EMC acquired Data Domain and Isilon, HP acquired 3PAR, and as of this week Dell has acquired Compellent. This latest move had been rumored for some time after Dell failed in its attempt to acquire 3PAR.

Dell originally put out the bid for 3PAR, obviously looking for a large-enterprise storage solution that it could offer in-house. Dell for years had re-branded EMC Clariion storage arrays under its own moniker, but that agreement never expanded into the large-enterprise array space to include the ubiquitous EMC Symmetrix. Symmetrix has long been known to be the market leader in the enterprise space, and with the introduction of the VMAX line, now has native scale-out functionality. In the past, enterprise arrays were primarily scale-up oriented. A tremendous amount of focus has come upon scale-out models within the past 12-18 months thanks to the proliferation of cloud strategies. Due to the enormous scale that cloud infrastructures must be able to grow into, traditional scale-up boxes based on proprietary architectures were simply too costly. Using commodity Intel components with a scale-out architecture allows customers and/or service providers to achieve significant scale at a lower cost.

The recent behavior by multiple manufacturers shows that they are feeling the pressure to boost their product portfolios with regard to scale-out storage. It’s also clear that many manufacturers are trying to create a product suite so they can try to “own” the complete computing stack. IBM has been in this position for quite some time. Oracle acquired Sun to enter the hardware business. HP decided to outbid Dell for 3PAR because it needed scale-out storage. HP’s only true in-house storage product was the EVA, and LeftHand is an SMB solution that can’t scale to meet enterprise needs. In the enterprise space, HP had been OEM’ing an array from Hitachi called the USP. The USP is a monolithic scale-up array that doesn’t offer scale-out capabilities. Hence, HP needed 3PAR to create a scale-out enterprise storage array, which most likely will lead to the termination of their OEM agreement with Hitachi.

The HP-3PAR acquisition left Dell as the odd man out amongst the major players. With 3PAR off the market, Compellent was the most logical choice left. The interesting thing here is that most folks would not recognize Compellent as a large-enterprise class of array. Today, it is software that runs on whitebox servers. Dell must see something in the Compellent code that leads them to believe it can be reconstructed in a scale-out fashion. This is not going to be a trivial task, and one has to wonder if such a large conglomerate as Dell can truly pull it off, even if they let Compellent operate semi-independently.

What does all this mean for you, the end user? Personally, I feel that this consolidation is ultimately bad for innovation. The theory is pretty simple: when you try to be a jack of all trades, you end up being a master of none. We see this in practice already. IBM has historically had product offerings in all major infrastructure areas except networking, but few are recognized as being truly market leading. Servers have been one area where IBM does shine. Their storage arrays are typically generations behind the competition. HP has also been known to manufacture really great servers, and now they are getting some serious consideration in the networking space. However, HP storage has been in disarray for quite some time. There has been a serious lack of product focus, the EVA in particular is very outdated and uncompetitive, and there is no in-house intellectual property in the high-end storage space. Dell has been known to make great servers as well, but didn’t really have any other offerings of their own for enterprise data centers. In the end, all of these conglomerates tend to do really well in one area while being mediocre when it comes to the rest, storage being one of the mediocre areas. This is proven in all the recent market share reports that show these companies have been losing storage market share to companies like EMC and NetApp.

So why are EMC and NetApp so successful right now? I believe it’s because of their singular focus on storage, which helps them have the most innovative products on the market that offer the highest quality and reliability as well. EMC’s strategy is a bit more holistic around the realm of information infrastructure than NetApp’s, but it is still highly focused nonetheless compared to an HP or IBM. Without a doubt, this is why they continue to lead with best-of-breed products year after year, and continue to retain their market-leader status. This also bodes well for the VMware-Cisco-EMC (VCE) and VMware-Cisco-NetApp (VCN) strategies. Rather than one company trying to be a jack of all trades, you have market leaders in each specific category coming together to create strategic partnerships with each other. The best-of-breed products can be combined into a stack, with the strategic partnerships allowing for options like one phone call for support and extensive integration testing between components. It provides the benefits of a single-source stack together with the benefits of a best-of-breed approach, essentially giving you the best of both worlds!

This best-of-breed approach with a single point of contact is also how IDS operates. We’ve chosen to focus exclusively on EMC and NetApp because we’ve surveyed the marketplace and determined from our years of experience in storage that these are the best two offerings out there for the vast majority of companies. Similar to EMC and NetApp, the proof of our success can be seen in our results as one of the Top 5000 fastest-growing companies in the US, and Top 10 fastest-growing businesses in Chicago. Just as importantly, our customers consistently rank us as one of the top solution providers in the country on customer service surveys. Rather than try to be a mail-order VAR offering every product under the sun with no deep expertise in any one area, we have decided to focus on helping companies deal with their most important asset: their information. This encompasses storage, backup, and security of that information. What this allows us to do is provide deep consulting expertise to your business, before the sale and after the sale, providing you with a level of service that is simply unmatched compared to traditional VARs.