All Posts By Jeffrey McDaniel

Review: Silver Peak Delivers Quick and Simple WAN Optimization for the Cloud

By Jeffrey McDaniel | Cloud Computing, Networking

In our cloud, there are two things I can never have too much of: compute power and bandwidth. As soon as I stand up a chassis with blades in it, a workload occupies that space. The same goes for bandwidth … as soon as we make more available, some form of production or replication traffic scoops it up.

Introduction: Silver Peak

About six months ago, I started to look at Silver Peak and their VX software. I have a lot of experience with WAN optimization, and my first concerns were ease of configuration and impact on our production environment: what kind of outages would I have to incur to deploy the solution? I did not want to install plug-ins for specific traffic types, and if I made changes, what would that do to my firewall rules?

After reviewing the product guide, I spoke with the local engineer. The technical pitch of the software seemed fairly easy:

1) Solves Capacity (Bandwidth) by Dedupe and Compression

2) Solves Latency by Acceleration

3) Solves Packet Loss by enabling Network Integrity

The key point with Silver Peak is that it is an “out of path” solution: I migrate what I want, when I want. There are two versions of their optimization solution available. The first is the VX software, a virtual appliance offering multi-gigabit throughput. The second is the NX appliance, a physical appliance offering up to 5Gbps of throughput. Since we are a cloud provider, it's no stretch to guess that I tried the VX virtual appliance first.

The Deployment

After deciding what we wanted to test first, it was on to deployment and configuration. Another cloud engineer and I decided to install and configure the appliances on our own, with no assistance from technical support. I say this because I wanted to get a real sense of how difficult it was to get the virtual appliance up and running.

(By the way, technical support is more than willing to assist and Silver Peak is one of the few vendors I know that will actually give you 24×7 real technical support on their product during the evaluation period—it isn’t a “let’s call the local SE and have him call product support”, but true support if it’s needed.)

After our initial installation, it turned out we did need a little support assistance, because we didn't appear to be getting the dedupe and compression rates I was expecting. After the OVA file is imported into the virtual environment, I should have also set up a Network Memory disk: this is what allows the appliance to cache data and provides the dedupe and compression. Since I hadn't set this up, the virtual server's memory was used instead … my fault. Even with the support call, we had the virtual appliances installed and configured within 1.5 hours. Take out the support call, and I could literally have the appliances downloaded, installed and configured within an hour.


What We Were Optimizing

Scenario 1

We had two scenarios in our evaluation. The first was our own backbone replication infrastructure: we replicate everything from SAN frames to backup data and archive sets. Our test encompassed two of our datacenters, moving workloads between Chicago and Denver. My first volume was a 32TB NAS volume. Since this solution is “out of path”, I simply added a route on our filers at both ends to send traffic to the local Silver Peak appliance. When Silver Peak is first configured, a UDP tunnel is created between the two appliances. Why UDP? Because UDP gets through most firewalls without complex configuration. When the appliance receives traffic destined for the other end, the packets are optimized and forwarded along.
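To make the “out of path” idea concrete, here is a toy Python sketch of the redirection concept: packets routed to the local appliance get wrapped in a UDP tunnel to the peer appliance. The peer address, port number and pass-through “optimization” step are all made up for illustration; Silver Peak's actual tunnel format and optimization engine are proprietary.

# Toy sketch of the "out of path" idea: packets routed to the local appliance
# are wrapped in a UDP tunnel to the peer appliance. The peer address, port,
# and the pass-through "optimization" step are made up for illustration.
import socket

REMOTE_APPLIANCE = ("203.0.113.10", 4500)   # hypothetical peer appliance

def optimize(packet: bytes) -> bytes:
    # Dedupe/compression would happen here; pass through unchanged in this toy.
    return packet

def forward(packet: bytes) -> None:
    # Wrap the (already "optimized") packet in UDP and send it to the peer.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as tunnel:
        tunnel.sendto(optimize(packet), REMOTE_APPLIANCE)

forward(b"example replication payload")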

In this scenario, we saw up to 8x packet reduction on the wire. As a result, we were able to replicate about 32TB of data between filers in just over 19 hours.
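As a quick sanity check on those numbers, here is the back-of-the-envelope math (a rough sketch assuming decimal units and the peak 8x reduction figure; the averages over the whole run would differ):

# 32 TB moved in ~19 hours, with up to 8x packet reduction on the wire.
data_bytes = 32e12          # 32 TB (decimal units assumed)
duration_s = 19 * 3600      # ~19 hours
reduction = 8               # peak reduction observed

effective_bps = data_bytes * 8 / duration_s   # application-level throughput
wire_bps = effective_bps / reduction          # what actually crossed the WAN

print(f"effective ~{effective_bps / 1e9:.1f} Gbps, wire ~{wire_bps / 1e6:.0f} Mbps")
# -> effective ~3.7 Gbps, wire ~468 Mbps at the 8x figure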

Scenario 2

Our second scenario was replication from a remote customer site over a 100Mbps Internet circuit connected via a site-to-site VPN. Latency on a good day was 30ms, with spikes to 100ms.

In trying to replicate the customer's SAN data between their site and ours, stability was a huge issue. Packet loss on the line was very high, high enough that replication would stall on some of the larger datasets.
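For anyone wondering why loss, rather than raw bandwidth, was the killer: the Mathis et al. approximation for a single TCP stream's throughput ceiling makes it clear. The sketch below assumes a ~30 ms RTT and illustrative loss rates; they are not measurements from this circuit.

# TCP throughput ceiling per Mathis et al.: rate ~ (MSS / RTT) * C / sqrt(loss)
from math import sqrt

MSS_BITS = 1460 * 8   # bits per segment
C = 1.22              # Mathis constant

def tcp_ceiling_mbps(rtt_s: float, loss: float) -> float:
    return (MSS_BITS / rtt_s) * (C / sqrt(loss)) / 1e6

for loss in (0.0001, 0.001, 0.01):
    print(f"RTT 30ms, loss {loss:.2%}: ~{tcp_ceiling_mbps(0.030, loss):.0f} Mbps")
# At 1% loss and 30 ms RTT a single stream tops out around 5 Mbps, which is
# why a 100 Mbps circuit can still stall large replication sets.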

We installed the VX5000 between the sites to optimize the WAN traffic. Line utilization went to 100%, packet loss dropped to nearly 0%, and packet reduction held consistently around 4x. While the customer was initially hesitant to try the appliance, they too were surprised at how quick and simple the installation was.


With the Silver Peak appliance, we cut our daily replication window from 15+ hours to less than 2 hours.

Conclusion

Surprises in the datacenter are never a good thing. However, having been in this space for over 20 years, I have experienced my share of unexpected surprises (not many of them good). In both of our test scenarios, I can honestly say I was surprised by Silver Peak … and in a good way. I was surprised at how easy the installation and configuration were. I was surprised at how well the solution worked. I was surprised at how good their sales and engineering teams really are.

Photo credit: Emma and Kunley via Flickr

VMware Backup Using Symantec NetBackup: 3 Methods with Best Practices

By Jeffrey McDaniel | Backup, How To, VMware

Symantec's NetBackup has been in the business of protecting VMware virtual infrastructures for a while. Over the last couple of versions we've seen the product mature to the point where it works very well and offers several methods to back up the infrastructure.

The Query Builder is the mechanism used to define what gets backed up. The choices can be as simple as the servers in a given folder, host or cluster, or more complex, driven by the business's data retention needs.
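For illustration, a policy's query boils down to a simple filter expression. The sketch below shows the general flavor; the keyword names and the cluster name are from memory and illustrative only, so verify the exact Query Builder syntax against the NetBackup documentation for your version.

# Illustrative only: the general shape of VMware Intelligent Policy queries.
# Keywords (Displayname, Cluster, Powerstate) and the cluster name are
# assumptions; check the NetBackup Query Builder docs for exact syntax.
example_queries = [
    'Displayname Contains "prod"',
    'Cluster Equal "CHI-Cluster-01" AND Powerstate Equal poweredOn',
]
for q in example_queries:
    print("query:", q)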

Below are the high-level backup methods, with my thoughts on each and the merits thereof.
 

1: SAN Transport

To start, the VMware backup host must be a physical host in order to use the SAN transport. All LUNs (FC or iSCSI) used as datastores by the ESX clusters must also be masked and, for FC, zoned to the VMware backup host.

When the backup process starts, the backup host reads the .vmdk files directly from the datastores using vADP (vStorage APIs for Data Protection).

Advantage

The obvious advantage is that backups travel over the SAN fabric, bypassing the ESX hosts' resources entirely. In my experience, backup throughput is typically greater than backups over Ethernet.

A Second Look

One concern I typically hear from customers, specifically from the VMware team, is about presenting the same LUNs that are presented to the ESX cluster to the VMware backup host. There are a few ways to protect the data on these LUNs if this becomes a big concern, but I've never experienced any issues with a rogue NBU admin in all the years I've been using this approach.
 

2: Hot-add Transport

Unlike the SAN transport, a dedicated physical VMware backup host is not needed to back up the virtual infrastructure. For customers using filers such as NetApp or Isilon with NFS, Hot-add is for you.

Advantage

Just like the SAN transport, this method backs up the .vmdk files directly from the datastores. Unlike the SAN transport, the backup host (media server) can be virtualized, saving additional hardware cost.

A Second Look

While the above does offer some advantages over the SAN transport, the minor drawback is that this method consumes ESX host resources. Numerous factors determine how much impact, if any, there will be on your ESX farm.
 

3: NBD Transport

The NBD backup method is IP-based. When the backup host starts a backup, an NFC session is opened between the backup host and the ESX host. Like the Hot-add transport, the backup host may be virtual.

Advantage

The benefit of this option is that it is the easiest to configure and the simplest in concept of the three.

A Second Look

As with everything in life, the easy option has its drawbacks. The main one is the resource cost to the ESX host: the impact is real, and it becomes more noticeable the more machines are backed up.

With regard to NFC (Network File Copy), there is one NFC session per virtual server backup. If you were backing up 10 virtual servers off of one host, there would be 10 NFC sessions to the ESX host's VMkernel port (management port). While this won't affect the virtual machine network, if your management network is 1Gb, it will be the bottleneck for backups of the virtual infrastructure. In addition, VMware limits the number of NFC sessions based on the host's transfer buffers, which total 32MB.
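Here is a rough sketch of what that bottleneck looks like in practice, assuming a 1GbE management network, ten concurrent NFC sessions against one host, and an illustrative 100GB guest; the protocol-overhead factor is an assumption, not a measured value.

# All NFC sessions for a host share the management (VMkernel) network.
link_gbps = 1.0            # 1GbE management network
vms_per_host = 10          # concurrent backups against one host
overhead = 0.9             # assume ~90% of the link is usable (illustrative)

usable_MBps = link_gbps * 1000 / 8 * overhead   # ~112 MB/s for the whole host
per_vm_MBps = usable_MBps / vms_per_host        # ~11 MB/s per NFC session

vm_size_GB = 100
hours = vm_size_GB * 1024 / per_vm_MBps / 3600
print(f"~{per_vm_MBps:.0f} MB/s per VM; a {vm_size_GB} GB VM takes ~{hours:.1f} h")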
 

Wrap-up: Your Choice

While there are three options for backing up a virtual infrastructure, choosing one does not lock you into it. To get backups going, one could start with the NBD transport and eventually change to the SAN transport … that's the power of choice.

Photo credit: imuttoo

3 Reasons Why You Need a Cloud Compliance Policy Now

By Jeffrey McDaniel | Cloud Computing, Security

While the debate continues for most as to what the “Cloud” means, one point can't be argued: cloud models are already here and growing.

Whether one is talking about a fully hosted cloud model, with systems, networks and applications hosted at a third-party provider, or a hybrid model to address resource overflow or expansion, there are numerous cloud providers offering a myriad of options to choose from. The questions these solutions raise span security, access, monitoring, compliance and SLAs.

As more departments within organizations look at the potential of cloud offerings, the time has come for organizations to address how to control these new resources. The reasons are no small matter.

Reason 1: Office Automation

Organizations have long searched for ways to place standard business applications outside the organization, and document collaboration and email seem a perfect fit. However, for multi-national organizations, there's a hidden dark side.

Some countries do not allow certain types of data to leave their borders. For example, if you are a UK-based company, or a US organization with a UK presence, emails and documents containing personal client and employee information may not be allowed to replicate to the US. I would argue that understanding the cloud provider's model and how they move data is just as important as how they safeguard it and provide redundancy within their own infrastructure. If your data is not managed and secured as the law specifies, you could have more to answer for than just the availability of your data.

“Part of our job as a cloud provider is not only to understand our customers’ data needs, but how our model impacts their business and what we can do to align the two,” states Justin Mescher, CTO of IDS.

There is no boilerplate set of questions to ask for every scenario. The main driver should be the organization's business model and how its specific data protection needs compare to what the cloud provider does with the data. If data is replicated, where is it replicated and how is it restored?

Reason 2: Test Development

One of the biggest drivers for cloud initiatives is application development and testing. Some developers find it easier to develop in a hosted environment than to go through change control, request testing resources, and plan validation on the corporate infrastructure.

Companies I have spoken to cite a lack of resources for their test/dev environments as the main motivation for moving to the cloud. While pushing development off to the cloud sounds reasonable, what is potentially lacking is a sound test and validation plan to move an application from design to development to test to production.

John Squeo, Director of Strategic IT Innovation & Solutions Development at Vanguard Health Systems states, “If done properly, with the correct controls, the cloud offers us a real opportunity to quickly develop and test applications. Instead of weeks configuring infrastructure, we have cut that down to days.”

John further commented, “While legacy Healthcare applications don’t port well to the cloud due to work flow and older hardware and OS requirements, most everything else migrates well.”

If the development group is the only group with access to the development data, the organization potentially loses its biggest asset … the intellectual property that put it in business in the first place. As stated above, “if done properly” includes a detailed life-cycle testing plan that defines the test criteria as well as who has access to test applications and data.

Reason 3: Data Security

Most organizations have spent much time developing policies and procedures around information security. When data is moved off site, the controls around data security, confidentiality and integrity become even more critical.

Justin Mescher, CTO of IDS adds, “While we have our own security measures to protect both our assets, as well as our customers, we work hand in hand with our customers to ensure we have the best security footprint for their needs.”

Financial institutions have followed the “know your customer, know your vendor” mentality for some time. Understanding the cloud provider's security model is key to developing a long-lasting relationship. This includes understanding and validating the controls they have in place for hiring support staff, how they manage the infrastructure containing your key systems and data, and whether they can deliver the reporting you require. Failing to perform appropriate vendor oversight can lead to additional exposure and risk.

Whether or not your senior management is planning on using the cloud, I guarantee you this: there are departments in your organization that are. The challenge now is defining an acceptable-use and governance policy. Don't be left on the outside, surprised one day when someone walks away with your data you didn't even know had left in the first place.

Photo credit: erdanziehungskraft via Flickr
