The Hotel California Dilemma with Hyperscale Cloud

At IDS, we are constantly speaking with IT organizations about our IDS Cloud offering and the marketplace in general. For those of you who don’t know, three years ago we built our own IDS Cloud offering on the latest FlexPod and Vblock technology. Our Cloud is available with roughly 4PB of storage across geographically dispersed data centers. This was a massive investment for IDS, and it has created a valuable ongoing opportunity for our team. Everyone at IDS has learned many lessons over the past year about the value of our offering versus Hyperscale Cloud providers like Amazon and Azure, and those lessons have shaped our ability to deliver a truly valuable cloud offering to our customers.

Finding the Sweet Spot Between Hyperscale and Private Cloud Providers

For IDS, the struggle is how to find the sweet spot in a crowded market. With Hyperscale Cloud providers pouring billions of dollars per month into build-out, we are left with a few burning questions:

  • How can we compete?
  • What is our position?
  • How do we create a valuable offering for our customers?

One area we’ve started to evaluate surrounds providing customers complete data control within a cloud-based approach. Our customers echo one major concern when considering moving towards Hyperscale Cloud providers: How do I get my data back from the cloud if I choose to leave my provider?

The migration process from a Hyperscale provider is usually very time consuming and requires a potentially massive bandwidth investment. Making matters worse, the service platforms offered from one provider to another are not consistent. It’s like going to the Hotel California, “You can check out anytime you like, but you can never leave.”

Is it Possible to Leverage Hyperscale Cloud and Retain Control?

Even with the downsides, our customers are looking to use these Hyperscale cloud providers for burstability during peak workloads, disaster recovery, and dev/test environments. So how do we create value for our customers by providing portability between Hyperscale cloud providers, letting them enjoy the economies of the compute and service catalogue while also providing the control and security around their data that companies require today?

One option that has recently become available is the ability to connect our existing datacenter(s) into the backend of Azure or EC2 via direct connections. This service lets us set up a private direct connection into the Hyperscale cloud providers within very close proximity of their datacenters, reducing latency to around 1ms. With this type of hybrid solution, our customers can leverage the massively scalable Azure or EC2 compute power while keeping their data private and under their own control.
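The hyperscale providers expose these direct connections as AWS Direct Connect and Azure ExpressRoute. As an illustrative sketch only, the commands below show how such a link might be requested with each provider’s CLI; the location codes, resource names, and bandwidth values are placeholders, not our actual configuration.

```shell
# Illustrative sketch only -- names, locations, and bandwidths are placeholders.

# AWS Direct Connect: request a dedicated 1 Gbps port at a colocation
# facility close to the provider's region (e.g., an Equinix site).
aws directconnect create-connection \
    --location "EqDC2" \
    --bandwidth 1Gbps \
    --connection-name "ids-to-aws-private-link"

# Azure ExpressRoute: create a comparable private circuit (bandwidth in Mbps).
az network express-route create \
    --name "ids-to-azure-private-link" \
    --resource-group "ids-cloud-rg" \
    --peering-location "Washington DC" \
    --bandwidth 1000 \
    --provider "Equinix"
```

In both cases the circuit terminates at a peering location near the provider’s datacenters, which is what makes the roughly 1ms latency figure achievable.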

Another interesting aspect of this hybrid design is that we can replicate our customers’ data from our datacenters or theirs using NetApp’s native SnapMirror software and set up different service tiers for our clients. If customers want to run production from our Latisys facility in Chicago, they can replicate that data to our Equinix datacenter in Ashburn, VA, and provide a controlled test/dev environment to their users out of EC2, for instance. This lets the customer completely control their data out of a separate facility and serve it up across the private connection to EC2. If they later choose to migrate to Azure because of cost or service offerings, no data migration is required; they simply point the new servers at their storage in Equinix.
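To make the replication tier concrete, here is a hedged sketch using the clustered Data ONTAP CLI. The SVM and volume names are hypothetical placeholders for a Chicago-to-Ashburn mirror, not our production configuration.

```shell
# Hypothetical SVM/volume names for illustration only.

# Create a data-protection mirror from the production volume in Chicago
# to the destination volume in Ashburn, replicating on an hourly schedule.
snapmirror create -source-path chi_svm:vol_prod \
    -destination-path ash_svm:vol_prod_mirror \
    -type DP -schedule hourly

# Perform the initial baseline transfer.
snapmirror initialize -destination-path ash_svm:vol_prod_mirror

# Check replication health and lag time.
snapmirror show -destination-path ash_svm:vol_prod_mirror
```

Because the mirror copy lives in a facility the customer controls, the EC2 (or later Azure) compute layer can be repointed at it without ever pulling the data back out of a hyperscale provider.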

The end result for the client is CONTROL and the flexibility to adapt as their needs change.