
The DaaS Revolution: It’s Time

By | Cloud Computing, DaaS, Desktop Virtualization, Disaster Recovery, Infrastructure, Networking, Security, Uncategorized, Virtualization | No Comments

When Terminal Services was first released, it revolutionized the way users accessed their applications and data. Employees could tap into their digital resources from essentially anywhere, while businesses could be certain that all data was located in the data center and not on end-user devices. But no revolution is perfect, and the challenge with Terminal Services (renamed RDSH, Remote Desktop Session Host, in recent years) was that it did not give users the customization and experience of their local devices. Read More

Smart Monitoring & Desktop Virtualization Visibility

From Smart Monitoring to Happy End-users

By | Cloud Computing, Data Center, Monitoring, Networking, Reporting, Review, Virtualization | No Comments

Desktop Virtualization Visibility & Peace of Mind

To monitor or not to monitor? That has become the question for many businesses today as they design their virtual desktop environments. How are they answering? In my recent experience, many businesses choose either to implement a poorly assembled monitoring solution or to forgo virtual desktop monitoring altogether. Those are two risky options in a virtual environment where end-user experience (EUX) is of utmost importance and monitoring can be essential to its success. Read More

IDS announces three new hires to expand engineering team

New IDS Hires Announcement: Welcome to the team!

By | Employee News, IDS, Networking, Security | No Comments

We’re incredibly excited to announce the addition of three new hires at IDS who will be instrumental in growing our business around transformational technologies. The strategic expansion will further fuel the growth of the IDS consulting practice by increasing focus surrounding big data, analytics, security and software-defined networking. We have experienced incredible success over the last few years, and this latest addition is one of many that will continue to contribute to the ongoing IDS transformation. Read More

Trending cDOT Transition Considerations

Trending cDOT Transition Considerations: What You Need to Know

By | Networking, Storage, Virtualization | No Comments

When considering a transition from current 7-Mode systems to Clustered Data ONTAP (cDOT), it’s important to understand the limitations, timing and complexity. At IDS, we help our customers navigate and understand how this process impacts their production environment. We understand every customer’s architecture is different, but we have compiled some questions that continue to trend in our conversations. Read More

WAN vs. WAN Optimization

By | How To, Networking, Strategy | No Comments

Last week, I compared Sneakernet vs. WAN. And I didn’t really compare the two with any WAN optimization products—just a conservative compression ratio of around 2x, which can be had with any run-of-the-mill storage replication technology or something as simple as WinZip.

But today, I want to show the benefits of putting a nice piece of technology in between the two locations over the WAN to see how much better our data transfer becomes.

When WAN Opt Is Useful

When choosing between a person's time and technology, I lean toward the tech route. But even if it's faster, how much faster does it need to be to offset the expense, hassle, and opportunity cost of installing a WAN Opt product? The only true way to know is to buy the product, install it, and run your real-world tests; however, I'm one for asking around.

I reached out to my friends over at Silver Peak, and they pointed me to this handy online calculator.

It turns out WAN optimization products aren't useful in every situation. If you have ample bandwidth with very low latency, it might not be worth it. But even marginal latency across any distance, or data that is repetitive (or compresses and deduplicates well), can benefit from WAN optimization. And if the business has defined RPOs and RTOs, you may very well require WAN optimization in between.

An Example

I took the example from last week: the 100Mbit connection, figuring in 7ms of latency to simulate the equivalent of 50% utilization on the line with 2x compression. If you recall, the transfer of 10TB of data that took 10 days can become 370TB of data in the same time frame with a Silver Peak appliance at both ends. Much of that efficiency is due to the way WAN optimization works: data doesn't just get compressed and streamed using multiple streams. The best WAN Opt products also avoid sending duplicate and redundant data. So a transfer that would normally take a week or a day could be completed in as little as 4.5 hours or 40 minutes, respectively.
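For anyone who wants to sanity-check those figures, here is a minimal back-of-the-envelope sketch. The utilization, compression, and effective-reduction factors are assumptions pulled from the example above, not measured or vendor-published numbers.

```python
# Rough transfer-time estimate; the factors below are assumptions taken
# from the example in this post, not measured results.

def transfer_hours(data_tb, line_mbps, utilization=0.5, reduction=1.0):
    """Hours to push data_tb terabytes across a line_mbps circuit.

    utilization: fraction of the line available for this transfer
    reduction:   effective data-reduction factor (compression + dedupe)
    """
    bits_on_wire = data_tb * 1e12 * 8 / reduction   # bits actually sent
    usable_bps = line_mbps * 1e6 * utilization      # usable bits per second
    return bits_on_wire / usable_bps / 3600

# Baseline from last week: 100 Mbit line, ~50% usable, 2x compression.
print(transfer_hours(10, 100, reduction=2.0) / 24)   # roughly 9-10 days for 10TB

# 370TB in the same window implies an effective reduction of roughly 74x
# versus raw data, i.e. about 37x faster than the 2x-compression baseline.
print(transfer_hours(10, 100, reduction=74.0))       # down to a handful of hours
```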

The effort to install, in reality, is not that significant. Silver Peak appliances come in physical and virtual form, with the virtual machines being a lot quicker to spin up and a little cheaper to acquire. Just make sure your routers are on relatively recent IOS code that supports WCCP, and you can quickly deploy the virtual appliance in both locations.

Additional Benefits

Aside from moving data quickly, there are other benefits, such as improved voice calls (UDP packets that arrive out of order can be reassembled in the correct order), faster response times on applications over the wire, and help for pretty much any type of TCP/IP traffic. If it were me, I would simply compare the cost of expanding the performance of the circuit versus adding a WAN Opt product in between. For most locations in the United States, circuits are expensive and bandwidth is limited, so you're likely better off with a Silver Peak at both ends to save both time and cost.

Of course, don’t just take my word for it. Run a POC on any network that you’re having problems with, and you’ll find out soon enough if WAN Optimization is the way to go.

Photo credit via Flickr: Tom Raftery

Sneakernet vs. WAN: When Moving Data With Your Feet Beats Using The Network

By | Disaster Recovery, Networking, Strategy | No Comments

Andrew S. Tanenbaum was quoted in 1981 as saying “Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway.”

The story behind that quote is documented on Wikipedia. It comes from NASA's Deep Space Network: the tracking station at Goldstone, CA and the Jet Propulsion Laboratory about 180 miles away. In a scenario as common today as it was 30 years ago, a backhoe took out the 2400bps circuit between the two locations, and the estimate to fix it was about one full day. So, they loaded a car with 9-track magnetic tapes and drove it 3-4 hours from one location to the other to get the data there six times faster than over the wire.

That got me thinking about IT and business projects that require pre-staging data. Normally, we IT folks get wind of a project weeks or months in advance. With such ample notice, how much data can we pre-stage in that amount of time?

With a simple 100Mbit connection between locations, and using a conservative compression ratio, we can move nearly 1TB of data in a day. That's plenty of capacity for moving source installation files, ISOs, and even large databases ahead of time. Remembering that our most precious resource is time, anything a script or computer can do instead of us doing it manually is worth careful consideration.

Below is a chart listing out common bandwidth options and the time to complete a data transfer.

[Chart 1: common bandwidth options and the time to complete a data transfer]
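Since the chart itself doesn't reproduce well here, the underlying arithmetic is easy to sketch. The link speeds below are illustrative choices, and the 2x compression ratio is the same conservative assumption used above:

```python
# Time to move 1TB across common circuit sizes (illustrative link speeds;
# assumes a fully available line and a conservative 2x compression ratio).

LINKS_MBPS = {
    "T1 (1.5 Mbps)": 1.5,
    "10 Mbps": 10,
    "100 Mbps": 100,
    "1 Gbps": 1000,
    "10 Gbps": 10000,
}

def days_to_move(data_tb, mbps, compression=2.0):
    seconds = data_tb * 1e12 * 8 / compression / (mbps * 1e6)
    return seconds / 86400

for name, mbps in LINKS_MBPS.items():
    print(f"{name:>14}: {days_to_move(1, mbps):8.2f} days per TB")
```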

The above example is not so much about data center RPOs and RTOs as it is about simply moving data from one location to another. For DR objectives, we need to size our circuit so that we never fall below the minimums during critical times.

For example, if we have two data center locations with a circuit in between, and our daily change rate on 100TB of data is 3%, we still need to find the peak data change rate timeframe before we can size the circuit properly.

[Chart 2: sizing the replication circuit for the peak data change rate]

If 50% of the data change rate occurs from 9am to 3pm, then we need a circuit that can sustain 250GB per hour. A dedicated gigabit circuit can handle this traffic, but only if it's a low-latency connection (the locations are relatively close to one another). If there's latency, we will most certainly need a WAN optimization product in between. But in the event of a full re-sync of data, it would take 9-10 days to move all that data over the wire, plus the daily change rate. So unless we have RPOs and RTOs measured in weeks, or unless we have weeks to ramp up to a DR project, we will have a tough time during a full re-sync and wouldn't be able to rely on DR during this time.
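The sizing arithmetic works out as follows; this is just a sketch of the example's assumed figures (100TB protected, 3% daily change, half of it landing in a six-hour window, replicated over a 1 Gbps circuit):

```python
# Sketch of the circuit-sizing arithmetic; all inputs are the example's assumptions.

total_tb     = 100    # protected data set
daily_change = 0.03   # 3% daily change rate
peak_share   = 0.50   # half of the change lands in one window
peak_hours   = 6      # 9am to 3pm

peak_gb_per_hr = total_tb * 1000 * daily_change * peak_share / peak_hours
required_mbps  = peak_gb_per_hr * 1e9 * 8 / 3600 / 1e6
resync_days    = total_tb * 1e12 * 8 / 1e9 / 86400   # full re-sync at 1 Gbps

print(f"Peak change rate:        {peak_gb_per_hr:.0f} GB/hour")   # ~250 GB/hour
print(f"Sustained rate required: {required_mbps:.0f} Mbps")       # ~556 Mbps
print(f"Full re-sync at 1 Gbps:  {resync_days:.1f} days")         # ~9-10 days
```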

So, that might be a case where it makes sense to sneakernet the data from one location to the other.

Photo credits via Flickr: Nora Kuby

Review: Silver Peak Delivers Quick and Simple WAN Optimization for the Cloud

By | Cloud Computing, Networking | No Comments

In our cloud, it seems like there are two things I can never have too much of: compute power and bandwidth. As soon as I stand up a chassis with blades in it, a workload occupies that space. The same goes for bandwidth … as soon as we make more bandwidth available, some form of production or replication traffic scoops it up.

Introduction: Silver Peak

About six months ago, I started to look at Silver Peak and their VX software. I have a lot of experience in the WAN optimization space, and my first concerns were ease of configuration and impact to our production environment: what type of outages would I have to incur to deploy the solution? I did not want to install plug-ins for specific traffic types, and if I made changes, what would that do to my firewall rules?

After reviewing the product guide, I spoke with the local engineer. The technical pitch of the software seemed fairly simple:

1) Solves Capacity (Bandwidth) by Dedupe and Compression

2) Solves Latency by Acceleration

3) Solves Packet Loss by enabling Network Integrity

The key point with Silver Peak is that it is an "out of path" solution: I migrate what I want, when I want. There are two versions of the optimization solution available. The first is the VX software, a virtual appliance that offers multi-gigabit throughput. The second is the NX, a physical appliance that offers up to 5Gbps of throughput. Since we are a cloud provider, it's no stretch to guess that I tried the VX virtual appliance first.

The Deployment

After deciding what we wanted to test first, it was on to the deployment and configuration. Another cloud engineer and I decided to install and configure on our own with no assistance from technical support. I say this because I wanted to get a real sense of how difficult it was to get the virtual appliance up and running.

(By the way, technical support is more than willing to assist and Silver Peak is one of the few vendors I know that will actually give you 24×7 real technical support on their product during the evaluation period—it isn’t a “let’s call the local SE and have him call product support”, but true support if it’s needed.)

As it turned out, we did need a little support assistance after our initial installation, because we didn't appear to be getting the dedupe and compression rates I was expecting. After the OVA file is imported into the virtual environment, I should have also set up a Network Memory disk—this is what allows the appliance to cache data and is what provides the dedupe and compression. Since I didn't have it configured, the virtual server's memory was used instead … my fault. Even with the support call, we had the virtual appliances installed and configured within 1.5 hours. If I take out the support call, I can literally have the appliances downloaded, installed and configured within an hour.

What We Were Optimizing

Scenario 1

We had two scenarios we were looking at with our evaluation. The first scenario was our own backbone replication infrastructure. We replicate everything from SAN frames to backup data and archive sets. Our test encompassed two of our data centers, moving workloads between Chicago and Denver. My first volume was a 32TB NAS volume. Since this solution was "out of path", I simply added a route on our filers at both ends to send traffic to the local Silver Peak appliance. When Silver Peak is first configured, a UDP tunnel is created between both appliances. Why UDP? Because UDP gets through most firewalls without very complex configurations. When the appliance receives traffic destined for the other end, the packets are optimized and forwarded along.

With this test scenario, we saw up to 8x packet reduction on the wire. As a result, we were able to replicate about 32TB of data between filers in just over 19 hours.
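For a sense of scale, the arithmetic behind those figures is straightforward; the 32TB, 19-hour, and 8x numbers come from the test above, and the rest is derived:

```python
# Derived throughput for Scenario 1 (inputs are the figures reported above).

data_tb   = 32    # NAS volume replicated
hours     = 19    # elapsed replication time
reduction = 8     # packet reduction observed on the wire

effective_gbps = data_tb * 1e12 * 8 / (hours * 3600) / 1e9
wire_gbps      = effective_gbps / reduction

print(f"Effective replication rate: {effective_gbps:.2f} Gbps")  # ~3.7 Gbps
print(f"Traffic actually on the wire: {wire_gbps:.2f} Gbps")     # ~0.47 Gbps
```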

Scenario 2

Our second scenario was replication from a remote customer site using a 100Mbps Internet circuit connecting via a site-to-site VPN. Latency on a good day was 30ms with spikes to 100ms.

In trying to replicate the customer's SAN data between their site and ours, stability was a huge issue. Packet loss on the line was very high, high enough that replication would stop on some of the larger datasets.

We installed the VX5000 between sites to optimize the WAN traffic. What we found was that line utilization went to 100% while packet loss dropped to almost 0%, and packet reduction was consistently around 4x. While the customer was initially hesitant to try the appliance installation, they too were surprised at how quick and simple the installation was.

With the Silver Peak appliance, we cut our daily replication window from 15+ hours to less than 2 hours.

Conclusion

Surprises in the data center are never a good thing. However, having been in this space for over 20 years, I have experienced my share of unexpected surprises (not many of them good). In both of our test scenarios, I can honestly say I was surprised by Silver Peak … and in a good way. I was surprised by how easy the installation and configuration were. I was surprised by how well the solution worked. I was surprised by how good their sales and engineering teams really are.

Photo credit: Emma and Kunley via Flickr

Driving Change Through IaaS

By | Cloud Computing, Networking | No Comments

Infrastructure as a Service (IaaS) is transforming the way businesses approach their computing environments. By embracing this paradigm shift in computing, IT executives are improving cost efficiency, improving quality of service and ultimately gaining business agility.

On the road to IaaS, partnering with a provider that can customize a hosted private cloud is key to giving IT decision makers the flexibility to leverage the consumption-based cost models of the cloud. With that flexibility, security, governance, and performance must be maintained alongside the customization that businesses require. That is why having multiple options to choose from matters.

Whether organizations are looking for a fully managed, turn-key solution or an infrastructure where they retain full control, IaaS allows for customization and flexibility.

The Cloud Infrastructure Service from IDS brings Enterprise-class, best-of-breed architecture to the Cloud. It is an ideal solution for companies that want to leverage cloud computing but need reliable, customizable solutions built by a business partner that understands their needs.

Some of the key benefits from utilizing the IDS Cloud Infrastructure Service include:

1. The IDS Cloud Infrastructure is scalable, expanding and retracting as you need it.
2. Demand for new space can be met in minutes instead of weeks or months.
3. Pay for only the space you need instead of investing in Infrastructure you may never use.
4. All IDS Cloud systems are guaranteed and proven to be as much as 99.999% reliable.
5. IDS Cloud can meet key governance standards and regulations on a client by client basis.
6. Data will be protected by 24×7 surveillance, man traps, biometric readers, key cards and multifactor authentication.

IDS Cloud Infrastructure Service is a standardized, highly automated offering, where compute resources, complemented by storage and networking capabilities, are offered to customers on-demand.

Photo by Hugh Llewelyn

Your Go-To Guide For IT Optimization & Cloud Readiness, Part II

By | Cloud Computing, How To, Networking, Storage, Virtualization | No Comments

[Note: This post is the second in a series about the maturity curve of IT as it moves toward cloud readiness. Read the first post here about standardizing and virtualizing.]

I’ve met with many clients over the last several months that have reaped the rewards of standardizing and virtualizing their data center infrastructure. Footprints have shrunk from rows to racks. Power and cooling costs have been significantly reduced, while increasing capacity, uptime and availability.

Organizations that made these improvements made concerted efforts to standardize, as this is the first step toward IT optimization. It's far easier to provision VMs and manage storage and networking from a single platform, and the hypervisor is an awesome tool for doing more with less hardware.

So now that you are standardized and highly virtualized, what’s next?

My thought on the topic is that after you’ve virtualized your Tier 1 applications like e-mail, ERP, and databases, the next step is to work toward building out a converged infrastructure. Much like cloud, convergence is a hyped technology term that means something different to every person who talks about it.

So to me, a converged infrastructure is defined as a technology system where compute, storage, and network resources are provisioned and managed as a single entity.

[Figure: IT optimization pyramid]

Sounds obvious and easy, right?! Well, there are real benefits that can be gained; yet, there are also some issues to be aware of. The benefits I see companies achieving include:

→ Reducing time to market to deploy new applications

  • Improves business unit satisfaction with IT, with the department now proactively serving the business’s leaders, instead of reacting to their needs
  • IT is seen as improving organizational profitability

→ Increased agility to handle mergers, acquisitions, and divestitures

  • Adding capacity for growth can be done in a scalable, modular fashion within the framework of a converged infrastructure
  • When workloads are no longer required (as in a divestiture), the previously required capacity is easily repurposed into a general pool that can be re-provisioned for a new workload

→ Better ability to perform ongoing capacity planning

  • With trending and analytics to understand resource consumption, it's possible to get ahead of capacity shortfalls by understanding when they will occur several months in advance
  • Modular upgrades (no forklift required) afford the ability to add capacity on demand, with little to no downtime

Those are strong advantages when considering convergence as the next step beyond standardizing and virtualizing. However, there are definite issues that can quickly derail a convergence project. Watch out for the following:

→ New thinking is required about traditional roles of server, storage and network systems admins

  • If you're managing your infrastructure as a holistic system, it's redundant to have admins with a singular focus on a particular infrastructure silo
  • This typically means cross-training sysadmins to understand additional technologies beyond their current scope

→ Managing compute, storage, and network together adds new complexities

  • Firmware and patches/updates must be tested for inter-operability across the stack
  • Investment required either in a true converged infrastructure platform (like Vblock or Exadata) or a tool that provides software-defined data center functionality (like vCloud Director)

In part three of IT Optimization and Cloud Readiness, we will examine the OEM and software players in the infrastructure space and explore the benefits and shortcomings of converged infrastructure products, reference architectures, and build-your-own type solutions.

Photo credit: loxea on Flickr
