Since its release last week, there has been a lot of buzz around Avamar 6.0. I am going to take the liberty of reading between the lines and exploring some of the good (but gory) details.
The biggest news in this release is the DD Boost/Data Domain integration in the new Avamar client binaries. This allows an Avamar client to send a backup dataset stream to a Data Domain system as opposed to an Avamar node or grid. Datasets that are not “dedupe friendly” (too large for Avamar to handle, or with very high change rates) are typically retained for shorter periods of time. These can be targeted to a Data Domain array while still being managed through the same policies and the same backup and recovery interface.
Client support in this release is limited to Exchange VSS, SQL, SharePoint, Oracle, and VMware image backups. Replication of data is Avamar to Avamar and Data Domain to Data Domain: there isn’t any mixing or cross-replication. Avamar coordinates the replication and replicates the metadata so that data is manageable and recoverable from either side. From a licensing perspective, Avamar requires a capacity license for the Data Domain system at a significantly reduced cost per TB. DD Boost and replication licenses are also required on the Data Domain.
There is a major shift in hardware for Avamar 6.0:
- The Gen4 Hardware platform was introduced with a significant increase in storage capacity.
- The largest nodes now support 7.8TB per node – enabling grids of up to 124TB.
- The new high-capacity nodes are based on the Dell R510 hardware with twelve 2TB SATA drives.
- To speed up indexing, the new 7.8TB nodes also leverage an SSD drive for the hash tables.
- There are also 1.3TB, 2.6TB, and 3.9TB Gen4 nodes based on the Dell R710 hardware.
- All Gen4 nodes use RAID1 pairs; it seems the performance hit from RAID5 on the 3.3TB Gen3 nodes was too high.
- All Gen4 nodes now run SLES (SUSE Linux) for improved security.
There were several enhancements made for grid environments. Multi-node systems now leverage the ADS switches exclusively for a separate internal network that allows the grid nodes to communicate in the event of front-end network issues. There are both HA and non-HA front-end network configurations, depending on availability requirements. In terms of grid support, it appears that the non-RAIN 1x2 is no longer a supported configuration with Gen4 nodes. Also, spare nodes are now optional for Gen4 grids if you have Premium Support.
Avamar 6.0 is supported on Gen3 hardware, so existing customers can upgrade from 4.x and 5.x versions. Gen3 hardware will also remain available for upgrades to existing grids as the mixing of Gen3 and Gen4 systems in a grid is not supported. Gen3 systems will continue to run on Red Hat (RHEL 4).
Avamar 5.x introduced vStorage API integration for VMware ESX 4.0 and later versions. This functionality provides changed block tracking for backup operations, but not for restores. Avamar 6.0 now provides in-place “rollback” restores leveraging this same technology. This reduces restore times dramatically by restoring only the blocks that changed back into an existing VM. The other key VMware feature introduced in version 6.0 is proxy server pooling: previously, a proxy was assigned to a datastore, but now proxy servers can be pooled for load balancing in large environments.
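The win from a changed-block rollback restore is easy to see in miniature. Here is a minimal Python sketch of the idea; the function and data structures are hypothetical illustrations of the technique, not Avamar or VMware APIs:

```python
# Illustrative sketch only (not Avamar/VMware code): a changed-block
# "rollback" restore writes back just the blocks flagged as modified
# since the backup, instead of streaming the entire disk image.

def rollback_restore(backup_blocks, live_blocks, changed_ids):
    """Copy only the changed blocks from the backup into the live disk.

    backup_blocks / live_blocks: dicts mapping block id -> block data
    changed_ids: block ids the tracking layer recorded as modified
    Returns the number of block writes performed.
    """
    writes = 0
    for block_id in changed_ids:
        live_blocks[block_id] = backup_blocks[block_id]
        writes += 1
    return writes

# A 1,000-block disk where only 3 blocks changed needs 3 writes, not 1,000.
backup = {i: b"clean" for i in range(1000)}
live = dict(backup)
for i in (10, 500, 999):            # simulate changes made after the backup
    live[i] = b"dirty"
writes = rollback_restore(backup, live, changed_ids=[10, 500, 999])
```

The restore cost scales with the change rate rather than the disk size, which is why rollback restores of a lightly changed VM complete so much faster than full-image restores.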
There were several additional client enhancements on the Microsoft front, including Granular Level Recovery (GLR) support and multistreaming (1 to 6 concurrent streams) for the Exchange and SharePoint clients.
All in all, the Avamar 6.0 release provides several key new features and scales significantly further than previous versions. With the addition of Data Domain as a target, tape-less backup is quickly approaching reality.
Photo Credit: altemark