Cloud Computing

Review: Silver Peak Delivers Quick and Simple WAN Optimization for the Cloud


In our cloud, it seems there are two things I can never have too much of … compute power and bandwidth. As soon as I stand up a chassis with blades in it, a workload occupies that space. The same goes for bandwidth … as soon as we make more available, some form of production or replication traffic scoops it up.

Introduction: Silver Peak

About 6 months ago, I started to look at Silver Peak and their VX software. I have a lot of experience in the WAN optimization area, and the first concerns that came to mind were ease of configuration and the impact on our production environment: what outages would I have to incur to deploy the solution? I did not want to install plug-ins for specific traffic, and if I made changes, what would that do to my firewall rules?

After reviewing the product guide, I spoke with the local engineer. The technical pitch of the software seemed fairly easy:

1) Solves Capacity (Bandwidth) by Dedupe and Compression

2) Solves Latency by Acceleration

3) Solves Packet Loss by enabling Network Integrity

The point of Silver Peak is that it is an “out of path” solution: I migrate what I want, when I want. There are two versions of their optimization solution available. The first is the VX software, a virtual appliance that offers multi-gigabit throughput. The second is the NX, a physical appliance that offers up to 5Gbps of throughput. Since we are a cloud provider, it’s no stretch to guess that I tried the VX virtual appliance first.

The Deployment

After deciding what we wanted to test first, it was on to the deployment and configuration. Another cloud engineer and I decided to install and configure on our own with no assistance from technical support. I say this because I wanted to get a real sense of how difficult it was to get the virtual appliance up and running.

(By the way, technical support is more than willing to assist and Silver Peak is one of the few vendors I know that will actually give you 24×7 real technical support on their product during the evaluation period—it isn’t a “let’s call the local SE and have him call product support”, but true support if it’s needed.)

After our initial installation, it turns out we did need a little support assistance, because we didn’t appear to be getting the dedupe and compression rates I was expecting. It turns out that after the OVA file is imported into the virtual environment, I should have also set up a Network Memory disk—the disk cache that enables the dedupe and compression. Since I hadn’t configured it, virtual server memory was used instead … my fault. Even with the support call, we had the virtual appliances installed and configured within 1.5 hours. If I take out the support call, I can literally have the appliances downloaded, installed and configured within an hour.


What We Were Optimizing

Scenario 1

We had two scenarios we were looking at in our evaluation. The first was our own backbone replication infrastructure: we replicate everything from SAN frames to backup data and archive sets. Our test encompassed two of our datacenters, moving workloads between Chicago and Denver. My first volume was a 32TB NAS volume. Since this solution is “out of path”, I simply added a route on our filers at both ends to send traffic to the local Silver Peak appliance. When Silver Peak is first configured, a UDP tunnel is created between the two appliances. Why UDP? Because UDP gets through most firewalls without very complex configurations. When the appliance receives traffic destined for the other end, the packets are optimized and forwarded along.

With this test scenario, we saw up to 8x packet reduction on the wire. As a result, we were able to replicate about 32TB of data between filers in just over 19 hours.
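For context, here is the back-of-the-envelope math on that run (the inputs are the rounded figures above, not precise measurements):

```python
def gbps(terabytes, hours):
    """Average throughput in Gbps for a transfer of `terabytes` over `hours`."""
    return terabytes * 1e12 * 8 / (hours * 3600) / 1e9

effective = gbps(32, 19)   # the rate the filers saw end to end: ~3.74 Gbps
on_wire = effective / 8    # what the WAN carried at 8x packet reduction: ~0.47 Gbps
print(round(effective, 2), round(on_wire, 2))
```

In other words, the filers moved data at nearly four gigabits per second while the circuit itself only had to carry roughly half a gigabit.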

Scenario 2

Our second scenario was replication from a remote customer site using a 100Mbps Internet circuit connecting via a site-to-site VPN. Latency on a good day was 30ms with spikes to 100ms.

In trying to replicate the customer’s SAN data between their site and ours, stability was a huge issue. Packet loss on the line was very high, so high that replication would stop on some of the larger datasets.

We installed the VX5000 between sites to optimize the WAN traffic. What we found was that line utilization went to 100% while packet loss dropped to almost 0%. Our packet reduction was consistently around 4x. While the customer was initially hesitant to try the appliance installation, they too were surprised by how quick and simple it was.


With the Silver Peak appliance, we cut our daily replication window from 15+ hours to less than 2 hours.


Surprises in the datacenter are never a good thing. However, being in this space for over 20 years, I have experienced my share of surprises (not many of them good). In both our test scenarios, I can honestly say I was surprised by Silver Peak … and in a good way. I was surprised how easy the installation and configuration were. I was surprised how well their solution worked. I was surprised how good their sales and engineering teams really are.

Photo credit: Emma and Kunley via Flickr


Driving Change Through IaaS



Infrastructure as a Service (IaaS) is transforming the way businesses approach their computing environments. By embracing this paradigm shift in computing, IT executives are improving cost efficiency, improving quality of service and ultimately gaining business agility.

On the road to IaaS, partnering with a provider that can customize a hosted Private Cloud is key to the flexibility that lets IT decision makers leverage the consumption-based cost models of the Cloud. With that flexibility, security, governance and performance must be maintained alongside the customization that businesses require, which is why it matters that there are multiple options to choose from.

Whether organizations are looking for a turn-key solution that is fully managed or an infrastructure over which they have full control, IaaS allows for customization and flexibility.

The Cloud Infrastructure Service from IDS brings Enterprise-class, best-of-breed architecture to the Cloud. It is an ideal solution for companies that want to leverage cloud computing but need reliable, customizable solutions built by a business partner that understands their needs.

Some of the key benefits from utilizing the IDS Cloud Infrastructure Service include:

1. The IDS Cloud Infrastructure is scalable, expanding and retracting as you need it.
2. Demand for new space can be met in minutes instead of weeks or months.
3. Pay for only the space you need instead of investing in Infrastructure you may never use.
4. All IDS Cloud systems are guaranteed and proven to deliver up to 99.999% availability.
5. IDS Cloud can meet key governance standards and regulations on a client by client basis.
6. Data will be protected by 24×7 surveillance, man traps, biometric readers, key cards and multifactor authentication.
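To put the 99.999% figure in item 4 in perspective, availability percentages translate directly into a yearly downtime budget. A quick sketch of the arithmetic (using a 365-day year; the five-nines number is the list’s claim, not a measurement of mine):

```python
def downtime_minutes_per_year(availability):
    """Allowed downtime per year for a given availability fraction."""
    return (1 - availability) * 365 * 24 * 60

# Five nines allows only about 5.26 minutes of downtime per year
print(round(downtime_minutes_per_year(0.99999), 2))
```

By comparison, 99.9% (three nines) allows nearly nine hours a year, which is why each extra nine matters so much in an SLA.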

IDS Cloud Infrastructure Service is a standardized, highly automated offering, where compute resources, complemented by storage and networking capabilities, are offered to customers on-demand.

Photo by Hugh Llewelyn

3 Reasons Why You Need a Cloud Compliance Policy Now

By | Cloud Computing, Security | No Comments

While the debate continues for most as to what the “Cloud” means, one point can’t be argued: cloud models are already here and growing.

Whether one is talking about a fully hosted cloud model for running systems, networks and applications at a 3rd-party provider, or looking at a hybrid model to address resource overflow or expansion, there are numerous cloud providers offering a myriad of options to choose from. The questions these solutions raise span security, access, monitoring, compliance and SLAs.

As more departments within organizations look at the potential of cloud offerings, the time is here for organizations to address how to control these new resources—the reasons are no small matter.

Reason 1: Office Automation

Organizations have long searched for ways to place standard business applications outside the organization. Document collaboration and email seemed to be a perfect fit. However, for multi-national organizations, there’s a hidden dark side.

Some countries do not allow specific types of data to leave the bounds of the country. For example, if you are a UK-based company, or a US organization with a UK presence, emails and documents containing personal client and employee information may not be replicated to the US. I would argue that understanding the cloud provider’s model and how it moves data is just as important as how it safeguards and offers redundancy within its own infrastructure. If your data is not managed and secured as specified by law, you could have more to answer for than just the availability of your data.

“Part of our job as a cloud provider is not only to understand our customers’ data needs, but how our model impacts their business and what we can do to align the two,” states Justin Mescher, CTO of IDS.

There is not a set boilerplate of questions to ask for every given scenario. The questions should be driven by the business model of the organization and how its specific needs to protect its data compare with what the cloud provider does with that data. If data is replicated, where is it replicated, and how is it restored?

Reason 2: Test Development

One of the biggest drivers for cloud initiatives is development and testing of applications. Some developers have found it easier to develop applications in a hosted environment, rather than proceed through change control or specific documentation requesting testing resources and validation planning of applications on the corporate infrastructure.

Companies I have spoken to cite a lack of resources for their test/dev environments as being the main motivation for moving to the cloud. While this sounds like a reasonable solution to push development off to the cloud, what potentially is lacking is a sound test and validation plan to move an application from design to development to test to production.

John Squeo, Director of Strategic IT Innovation & Solutions Development at Vanguard Health Systems states, “If done properly, with the correct controls, the cloud offers us a real opportunity to quickly develop and test applications. Instead of weeks configuring infrastructure, we have cut that down to days.”

John further commented that, “While legacy Healthcare applications don’t port well to the cloud due to work flow and older hardware and OS requirements, most everything else migrates well.”

If the development group is the only group with access to the development data, the organization potentially loses its biggest asset … the intellectual property that put it in business in the first place. As stated above, “if done properly” includes a detailed life-cycle testing plan that defines the test criteria, as well as who has access to test applications and data.

Reason 3: Data Security

Most organizations have spent much time developing policies and procedures around information security. When data is moved off site, the controls around data security, confidentiality and integrity become even more critical.

Justin Mescher, CTO of IDS adds, “While we have our own security measures to protect both our assets, as well as our customers, we work hand in hand with our customers to ensure we have the best security footprint for their needs.”

Financial institutions have followed the “know your customer, know your vendor” mentality for some time. Understanding the cloud provider’s security model is key to developing a long-lasting relationship. This includes understanding and validating the controls they have in place for hiring support staff, how they manage the infrastructure containing your key systems and data, and whether or not they can deliver your required reporting. The consequences of not performing appropriate vendor oversight can be additional exposure and risk.

Whether or not your senior management is planning on using the cloud, I guarantee you this: there are departments in your organization that are. The challenge now is defining an acceptable usage and governance policy. Don’t be left on the outside, surprised one day when someone walks away with data you never knew had left in the first place.

Photo credit: erdanziehungskraft via Flickr


A Clear(er) Definition of Cloud Computing


What is the Cloud? I get asked this all the time. It is part of many client meetings. My relatives ask me when we get together. My friends ask me. Heck, even my wife asked me at one point. It is probably the most common question I get asked in my life right now. It’s a little disheartening, because it is my job.

I am the Director of Cloud Services, and questions like this sometimes come across like “What exactly do you do?” But the confusion is understandable. The media have taken the term Cloud and made it their latest craze, threatening to bury it in a sea of hype. To make matters worse, there is no strict definition. So, inevitably, my answer varies depending on the question. Here is my attempt to define what the “Cloud” is.

Well, let’s start with our favorite place, Wikipedia.

Cloud computing is the use of computing resources (hardware and software) that are delivered as a service over a network (typically the Internet). The name comes from the common use of a cloud-shaped symbol as an abstraction for the complex infrastructure it contains in system diagrams. Cloud computing entrusts remote services with a user’s data, software and computation.

Okay! Now we are getting somewhere!

But there’s a problem. If we read this, isn’t an Exchange server at your work Cloud Computing? Well, by this definition, yes. It is software that you, as a user, consume as a service delivered over a network, and sometimes the Internet. So wait … everything is the Cloud? Well, yes. Sort of.

What about when I buy a book from Amazon, is that the Cloud? Well, let’s see: you’re not using a computing resource, so it’s not Cloud Computing—the difference here being that second word. Remember, we are looking for the definition of the Cloud and if you remove the word Computing from the definition of Cloud Computing, you get a pretty accurate definition of the Cloud.

Cloud is the use of resources that are delivered as a service over a network (typically the Internet).

So, book buying on Amazon is using the Cloud. It’s a Cloud-based reseller. Gmail is a Cloud email provider. Netflix is a Cloud-based media provider. Progressive is a Cloud-based insurance provider, and your bank has Cloud banking services. Even this blog is a Cloud information source. It’s all Cloud. By strict definition, if you’re using a service and it isn’t running on your PC, like the Word application I am typing in, you’re using the Cloud.

It might be a private Cloud, like your work email, or a Public Cloud, like Gmail, but it is all Cloud. There are even hybrid Clouds where some features are privately owned, and others run on public resources.

Ironically, the next realization is that the Cloud is not a new idea, just a new term. The idea of internet and network-based resources has been around since … the internet and network-based resources. That is important to remember when thinking about leveraging the Cloud for your business. It is not a new idea. It is, in fact, over 25 years old as Public Cloud and 40 or more as Private Cloud. It is almost as old as the PC.

So, apparently the Cloud is not so mysterious after all. It’s an old concept (in computer years, anyway). It’s a common concept, and it is a concept we already readily embrace. Now if only I could get my mother-in-law to read this.

Photo credits: niamor and thekellyscope


Your Go-To Guide For IT Optimization & Cloud Readiness, Part II


[Note: This post is the second in a series about the maturity curve of IT as it moves toward cloud readiness. Read the first post here about standardizing and virtualizing.]

I’ve met with many clients over the last several months who have reaped the rewards of standardizing and virtualizing their data center infrastructure. Footprints have shrunk from rows to racks. Power and cooling costs have been significantly reduced, while capacity, uptime and availability have increased.

Organizations that made these improvements made concerted efforts to standardize, as this is the first step toward IT optimization. It’s far easier to provision VMs and manage storage and networking from a single platform, and the hypervisor is an awesome tool that creates the ability to do more with less hardware.

So now that you are standardized and highly virtualized, what’s next?

My thought on the topic is that after you’ve virtualized your Tier 1 applications like e-mail, ERP, and databases, the next step is to work toward building out a converged infrastructure. Much like cloud, convergence is a hyped technology term that means something different to every person who talks about it.

So to me, a converged infrastructure is defined as a technology system where compute, storage, and network resources are provisioned and managed as a single entity.

[Figure: IT optimization pyramid]

Sounds obvious and easy, right?! Well, there are real benefits that can be gained; yet, there are also some issues to be aware of. The benefits I see companies achieving include:

→ Reducing time to market to deploy new applications

  • Improves business unit satisfaction with IT, with the department now proactively serving the business’s leaders, instead of reacting to their needs
  • IT is seen as improving organizational profitability

→ Increased agility to handle mergers, acquisitions, and divestitures

  • Adding capacity for growth can be done in a scalable, modular fashion within the framework of a converged infrastructure
  • When workloads are no longer required (as in a divestiture), the previously required capacity is easily repurposed into a general pool that can be re-provisioned for a new workload

→ Better ability to perform ongoing capacity planning


  • With trending and analytics to understand resource consumption, it’s possible to get ahead of capacity shortfalls by knowing several months in advance when they will occur
  • Modular upgrades (no forklift required) afford the ability to add capacity on demand, with little to no downtime
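The trending described above can start as simply as fitting a line to recent utilization samples and projecting forward. A minimal sketch with hypothetical monthly storage figures (not data from any client or tool):

```python
def months_until_full(usage_tb, capacity_tb):
    """Fit a least-squares line to monthly usage and estimate months until capacity is hit."""
    n = len(usage_tb)
    xs = list(range(n))
    mx = sum(xs) / n
    my = sum(usage_tb) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, usage_tb)) \
        / sum((x - mx) ** 2 for x in xs)
    if slope <= 0:
        return None  # flat or shrinking usage: no projected shortfall
    return (capacity_tb - usage_tb[-1]) / slope

# Six months of usage growing ~5 TB/month against a 120 TB pool
print(months_until_full([60, 65, 70, 75, 80, 85], 120))  # → 7.0
```

Real capacity-planning tools do far more (seasonality, percentiles, per-workload breakdowns), but even this crude projection turns a surprise outage into a planned purchase.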

Those are strong advantages when considering convergence as the next step beyond standardizing and virtualizing. However, there are definite issues that can quickly derail a convergence project. Watch out for the following:

→ New thinking is required about traditional roles of server, storage and network systems admins

  • If you’re managing your infrastructure as a holistic system, it’s overly redundant to have admins with a singular focus on a particular infrastructure silo
  • This typically means cross-training sys admins to understand additional technologies beyond their current scope

→ Managing compute, storage, and network together adds new complexities

  • Firmware and patches/updates must be tested for inter-operability across the stack
  • Investment is required either in a true converged infrastructure platform (like Vblock or Exadata) or in a tool that provides software-defined data center functionality (such as vCloud Director)

In part three of IT Optimization and Cloud Readiness, we will examine the OEM and software players in the infrastructure space and explore the benefits and shortcomings of converged infrastructure products, reference architectures, and build-your-own type solutions.

Photo credit: loxea on Flickr

Faster and Easier: Cloud-based Disaster Recovery Using Zerto


Is your Disaster Recovery/Business Continuity plan ready for the cloud? Remember the days when implementing DR/BC meant having identical storage infrastructure at the remote site? The capital costs were outrageous! Plus, the products could be complex and time-consuming to set up.

Virtualization has changed the way we view DR/BC. Today, it’s faster and easier than ever to set up. Zerto allows us to implement replication at the hypervisor layer; it is purpose-built for virtual environments. The best part: it’s a software-only solution that is array-agnostic and enterprise-class. What does that mean? Gone are the days of needing identical storage infrastructure at the DR site. Instead, you replicate to your favorite storage—it doesn’t matter what you have. That allows you to reduce hardware costs by leveraging existing or lower-cost storage at the replication site.

[Figure: Zerto replication overview]

How does it work? You install the Zerto Virtual Manager on a Windows server at the primary and remote sites. Once installed, the rest of the configuration is completed through the Zerto tab in VMware vCenter. Simply select the Virtual Machines you want to protect and that’s about it. It supports fully automated failover and failback and the ability to test failover, while still protecting the production environment. Customers are able to achieve RTOs of minutes and RPOs of seconds through continuous replication and journal-based, point-in-time recovery.
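To illustrate the “RPOs of seconds” claim: with journal-based continuous replication, the worst-case data loss at any moment is simply the age of the newest checkpoint in the journal. A toy sketch of that idea (made-up timestamps, not Zerto’s actual API):

```python
def current_rpo_seconds(now, checkpoint_times):
    """Worst-case data loss right now: the age of the newest journal checkpoint."""
    return now - max(checkpoint_times)

# Checkpoints written every ~4 seconds; the newest is 4 seconds old
print(current_rpo_seconds(1000.0, [988.0, 992.0, 996.0]))  # → 4.0
```

Contrast that with nightly backup jobs, where the same calculation can yield an RPO of up to 24 hours.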

Not only does Zerto protect your data, it also provides complete application protection and recovery through virtual protection groups.

Application protection:

  • Fully supports VMware vMotion, Storage vMotion, DRS, and HA
  • Journal-based point-in-time protection
  • Group policy and configuration
  • VSS Support

Don’t have a replication site? No problem. You can easily replicate your VMs to a cloud provider and spin them up in the event of a disaster.

Photo credit: josephacote on Flickr


Your Go To Guide For IT Optimization & Cloud Readiness, Part I


As a Senior IT Engineer, I spend a lot of time in the field talking with current and potential clients. Over the last two years, I began to see a trend in the questions company decision makers were asking, and it revolves around developing and executing the right cloud strategy for their organization.

Across all the companies I’ve worked with, there are three major areas that C-level folks routinely inquire about: reducing cost, improving operations and reducing risk. Over the years I’ve learned that an accurate assessment of the organization is imperative, as it is the key to understanding the current state of the company’s IT infrastructure, people and processes. From discovering these key items, I’ve refined the following framework to help decision makers become cloud ready.

Essentially, IT infrastructure optimization and cloud readiness adhere to the same maturity curve, moving upstream from standardized to virtualized/consolidated and then converged. From there, the remaining journey is about automation and orchestration. Where an organization currently resides within that framework dictates my recommendations for tactical next steps toward more strategic goals.

Standardization is the first topic that needs to be explored, as it is the base of all business operations and directions. The main driver for standardizing is to reduce the number of server and storage platforms in the data center.

The more operating systems and hardware management consoles your administrators need to know, the less efficient they become. There’s little use for Windows Server 2003 expertise in 2013, and it is important to find a way to port the app to your current standard. The fewer standards your organization maintains, the fewer variables exist when troubleshooting issues. Ultimately, fewer standards will allow IT to focus on initiatives essential to the business. Implementing asset life-cycle policies can limit costly maintenance on out-of-warranty equipment and ensure your organization is always taking advantage of advances in technology.

After implementing a higher degree of standardization, organizations are better equipped to take the next step: moving to a highly virtualized state and greatly reducing the amount of physical infrastructure required to serve the business. By now most everyone has leveraged virtualization to at least some degree. The ability to consolidate multiple physical servers onto a single physical host dramatically reduces IT cost, as an organization can provide all required compute resources on far fewer physical servers.

I know this because I’ve worked with several organizations that have experienced consolidation ratios of 20:1 or greater. One client I’ve worked with has extensively reduced their data center footprint, migrating 1,200 physical servers onto 55 virtual hosts. While the virtual hosts tend to be much more robust than the typical physical application server, the cost avoidance is undeniable. The power savings from decommissioning 1,145 servers at their primary data center came to over $1M in the first year alone.

Factor in cooling, plus a 3-year refresh cycle that would have required purchasing 1,100+ servers, and the savings add up quickly. In addition to the hard-dollar cost savings, virtualization produces operational benefits. Business continuity and disaster recovery exposure can be mitigated using the high availability and off-site replication functionality embedded in today’s hypervisors. Agility to the business increases as well: the time required to provision a virtual server on an existing host is typically weeks to months faster than what’s required to purchase, receive, rack, power and configure a physical server.
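That power number is easy to sanity-check. Assuming roughly 400W per decommissioned server, $0.10/kWh, and a PUE of 2.0 to cover cooling and facility overhead (illustrative assumptions on my part, not the client’s actual figures), retiring 1,145 servers lands in the same ballpark as the savings quoted:

```python
def annual_power_cost(servers, watts_each=400, dollars_per_kwh=0.10, pue=2.0):
    """Yearly electricity cost, with PUE accounting for cooling and facility overhead."""
    kwh = servers * watts_each / 1000 * 24 * 365 * pue
    return kwh * dollars_per_kwh

print(round(annual_power_cost(1145)))  # roughly $800k/year at these assumptions
```

Nudge the wattage or utility rate up slightly and the estimate clears $1M, which is why the exact inputs matter less than the order of magnitude.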

Please look for Part II of “Your Guide To IT Optimization & Cloud Readiness” as Mr. Rosenblum breaks down Convergence and Automation.

Photo by reway2007


Before You Sign: Top 5 Questions To Ask Cloud Providers


The trend I’m seeing among organizations is that they no longer want to be in the business of managing infrastructure—they would much rather be managing their business applications. As more businesses make this transformation, more customers are getting trapped by the hype of marketing campaigns, online advertisements and television commercials around cloud services. Even though moving to the cloud can be an easy transition, it is not as simple as swiping a credit card. I caution you: buyer beware!

There are many topics that need to be taken into consideration when making the jump to the cloud. Over the course of my career, I’ve had the privilege of speaking with many customers looking to expand their businesses into this new technology. Through these countless discussions, I began to notice both positive and negative trends in how decision makers prepare for this important conversion.

To help customers through this technological transition, I’ve compiled a list of important questions you should ask before you choose to take your business to the cloud.

Top Five Questions To Ask Your Cloud Services Provider

If you decide to make the move to the cloud, it doesn’t mean you need to sacrifice security or accountability to your organization. The key is asking questions up front. Do your homework (or have someone do it for you) and treat your applications with the same expectations as if they are inside the walls of your organization.

1. What are the SLAs for my infrastructure and are these SLAs negotiable?
    a. Who pays the price if outages occur?

Before issues arise, determine who will front the bill if/when outages occur. It is much easier doing damage control when there is a mutual understanding between you (the customer) and the provider. During times of outage it is a priority to get information back online, as opposed to arguing over who pays for additional costs associated with the problem.

2. What visibility to my service is provided to me?
    a. What are my service response times?
    b. What reports do I receive on a monthly basis?
    c. What is the escalation process if I encounter an issue?

Many providers are quick to take your money; yet, you must create a contract filled with prearranged obligations before the deal is finalized. Getting these details in writing will help set clear expectations and hold the provider accountable throughout your partnership.

3. What technologies do you run your services on?

Even though this seems like a no-brainer, many customers assume that IT providers have professional facilities. But how do you know if you don’t inquire? This is one of the most overlooked and basic questions to ask early in the assessment process. How do you know the provider isn’t running the infrastructure out of their mother’s basement? Ask!

4. What about my data?
    a. Where is my data being stored?
    b. Is my data protected?
    c. Can my data be encrypted?

These several questions will get the provider talking about the specifics of how your data is being handled. It sounds like these should be common; yet, time and time again, customers simply don’t ask these important questions—which sets them up for disaster.

5. Which applications stay and which go?
    a. Do all infrastructures have to be removed?

When embarking on the journey to the cloud, a realistic expectation to set is that some infrastructure might remain on premises. Although applications have become much more portable than before, there are still applications that may not make sense to move.

The reasons could be application-related, cost-related or business-related. The best approach to determining what makes applications “cloud ready” is to engage in a cloud readiness assessment.

There are many companies that do cloud readiness assessments. You should make sure the deliverables are meaningful to the evaluation.

Top priorities of the assessment should include:

Application Requirements

Before taking the leap into the cloud, do your homework to ensure the requirements to run specific applications are met. By asking these five questions up front, you ensure a seamless transition, as well as an enjoyable user experience down the road.

Photo credit: @JesseGardner


Embracing The Cloud To Survive Change


I am deeply involved in technology. I find it fascinating to see how technology rapidly changes our world like some freewheeling bulldozer plowing an uncaring path right through the center of society.

I was sitting in an airport over the Holidays waiting for my plane to arrive and I began thinking about the death of the travel agent.

I will admit that I am old enough to remember when travel involved travel agencies. You could call the airlines for a reservation, but it took forever and the prices you received were usually worse than the prices from a travel agent. Your best bet was to call a travel agent who would coordinate everything for you.

<blockquote>Businesses are beginning to realize that they can save a lot of money, improve flexibility, agility and capability by embracing the cloud.</blockquote>

Ironically, when you called a travel agent, the reason they could get you these great deals was that they had a computer with special access to airline reservations. They could see all the flights and available hotel rooms. They had special pricing based on what they could see and how much they sold. The very thing they were leveraging to make them profitable, in truth to make them exist at all, would be the same thing that would show up ten years later and demolish their business. The one doing the bulldozing is the cloud.

While the term cloud is relatively new, the concept isn’t. Companies like Travelocity and Orbitz are just cloud-based travel agents. Today, these cloud services are what we use to book travel ourselves. We have become our own Travel Agents, and the job of the professional Travel Agent has been bulldozed.

That is not to say Travel Agents don’t exist; they do. But now they are boutique shops servicing special needs like exotic foreign travel. There are also travel agents inside companies to control costs and increase convenience, but those agents largely leverage the same tools you and I have access to. They are large-scale cloud users.


So here I am in Austin Airport, surrounded by people whose lives (former Travel Agents excepted) have been made fundamentally better, more flexible, more agile and more cost effective because of the cloud. This got me thinking about IDS and its cloud offerings. While on the surface they don’t seem related, underneath they are exactly the same: IT is becoming a commodity.

Businesses are beginning to realize that they can save a lot of money, improve flexibility, agility and capability by embracing the cloud. There will always be the special scenarios, yet the majority of IT is not unique. The bulldozer is revving up and it’s coming after traditional IT services.

IDS is already there in the cloud, ready to help companies leverage this new service and move them from becoming victims, in front of the oncoming blade, to being in the driver’s seat, shaping the future.

[Additional reading: “My Personal Journey To The Cloud” written by IDS CTO Justin Mescher.]

Photos by: @ExtraMedium and @Salicia

Why, Oh Why To Do VDI?

By | Cloud Computing, Security, Storage, View, Virtualization, VMware | No Comments

I recently became a Twit on Twitter, and have been tweeting about my IT experiences with several new connections. In doing so, I came across a tweet about a contest to win some free training, specifically VMware View 5 Essentials from @TrainSignal – sweet!

Below is a screen capture of the tweet:


A jump over to the link provided in the tweet explains that one or all of the questions below should be commented on in the blog post in order to win. Instead of commenting on that blog, why not address ALL of the questions in my own blog article at IDS?! Without further ado, let’s jump right into the questions:

Why are Virtual Desktop technologies important nowadays, in your opinion?

Are you kidding me?!

If you are using a desktop computer, a workstation at work or a laptop at home or work, you are well aware that technology moves so fast that updated versions are released as soon as you buy a “new” one. Not to mention that laptops usually come configured with what the vendor or manufacturer thinks you should be using, not what is best, most efficient or fastest. More often than not, you are provided with what someone else thinks is best for the user. The reality is that only you, the user, know what you need, and if no one bothers to ask you, there can be a feeling of being trapped, having no options, or resignation, all of which tend to lead to the dreaded “buyer’s remorse.”

When you get the chance to use a virtual desktop, you finally get a “tuned-in” desktop experience similar to or better than the user experience that you have on the desktop or laptop from Dell, HP, IBM, Lenovo, Gateway, Fujitsu, Acer and so on.

Virtual desktops offer a “tuned” experience because architects design the infrastructure and solution from the operating system in the virtual desktop, be it Windows XP, Windows 7 or, soon, Windows 8, down to the right number of virtual CPUs (vCPUs), the capacity of guest memory, disk IOPS, network IOPS and everything else you wouldn’t want to dive into the details of. A talented VDI Architect will consider every single component when designing a virtual desktop solution because the user experience matters; there is no selling users on the experience “next time.” Chances are that if you have a negative experience the first time, you will never use a virtual desktop again, nor will you have anything good to say when the topic comes up at your neighborhood barbecue or pool party.
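The per-desktop considerations above (vCPUs, guest memory, disk and network IOPS) ultimately roll up into back-of-the-envelope pool sizing. A minimal sketch of that arithmetic, using hypothetical per-desktop figures and overcommit ratios purely for illustration (not vendor guidance):

```python
# Hypothetical back-of-the-envelope VDI pool sizing.
# All defaults are illustrative assumptions, not VMware recommendations.

def size_pool(desktops, vcpus=2, ram_gb=4, iops=25,
              cpu_overcommit=4, ram_overcommit=1.25):
    """Aggregate host resources needed for a pool of virtual desktops."""
    return {
        # Overcommit lets several vCPUs share one physical core.
        "physical_cores": desktops * vcpus / cpu_overcommit,
        # Memory overcommit relies on page sharing and ballooning.
        "ram_gb": desktops * ram_gb / ram_overcommit,
        # Steady-state IOPS; boot and login storms run much higher.
        "storage_iops": desktops * iops,
    }

print(size_pool(500))
```

The point of sketching it this way is that every input is a design decision the architect owns; change one assumption (say, the overcommit ratio) and the whole bill of materials shifts.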

The virtual desktop is imperative because it drives the adoption of heads-up displays (HUDs) in vehicles, at home and in the workplace, as well as slimmer tablet devices. Personally, when I think about the future of VDI, I envision expandable OLED flex screens with touch-based (scratch-resistant) interfaces that connect wirelessly to private or public cloud based virtual desktops. The virtual desktop is the next frontier, leaving behind the antiquated desktop experience that has been dictated to the consumer by vendors and manufacturers and simply does not give us what is needed the first time.

What are the most important features of VDI in your opinion?

Wow, the best features of VDI require a VIP membership in the exclusive VDI community. Seriously though, users and IT support staff are often the last to know about the most important features, yet they are the first to be impacted when a solution is architected, because those two groups are most in lock-step with the desktop user experience.

The most effective way for me to leave a lasting impression is to lay out the most important features in a few bullet points:

  • Build a desktop in under 10 minutes. How about 3 minutes?
  • Save personal settings and recover them immediately after rebuilding a desktop.
  • Add more CPU or RAM to a virtual desktop faster than ever.
  • Recover from malware, spyware, junkware, adware, trojans, viruses, everything-ware; save money by simply rebuilding in less than 10 minutes.
  • Access the desktop from anywhere, securely.
  • It just works, like your car’s windshield!

That last point brings me to the most important part of VDI: when architected, implemented and configured properly, it just works. My mantra in technology is “Technology should just work, so you don’t have to think about technology, freeing you up to just do what you do best!”

What should be improved in VDI technologies that are now on the market?

The best architects, solution providers and companies are the best because they understand the current value of a solution (in this case, VDI) as well as its caveats, and they ask themselves this exact question. VDI has very important and incredibly functional features, but there is a ton of room for improvement.

So, let me answer this one question with two different hats on: one as a VDI Architect, the other as a VDI User. My improvement comments are based on the solution provided by VMware, as I am most familiar with VMware View. In my opinion, no other vendor in the current VDI market can match the functionality, ease of management and speed of the VMware View solution.

As a VDI Architect, I am looking for VMware to improve their VMware View product by addressing the following:

  • Separate VMware View Composer from being on the VMware vCenter Server.
  • Make ALL of the VMware View infrastructure applications, appliances and components 64-bit.
  • Figure out and support Linux-based linked-clones. (The Ubuntu distribution is my preference.)
  • Get rid of the VMware View Client application – this is 2012.
  • Provide a fully functional web-based or even .hta based access to the VMware View virtual desktop that is secure and simple.
  • Build database compatibility with MySQL, so there is a robust FREE alternative to use.
  • Build Ruby-on-Rails access to manage the VMware View solution and database. Flash doesn’t work on my iPad!

As a VDI User, I am looking for VMware to improve:

  • Access to my virtual desktop: I hate installing yet another application that requires “administrator” rights.
  • Fix ThinPrint and peripheral compatibility, or provide a clearer guide to what is supported in USB redirection.
  • Support USB 3.0. I don’t care that my network or Internet connection cannot handle the speed; I want the sticker that says the solution is USB 3.0 compatible and that I could get those speeds with a private cloud based VDI solution.
  • Tell me that you will be supporting the Thunderbolt interface, and follow through within a year.
  • Support web-cams. I don’t want to know why it is difficult; I just want it to work.
  • Support Ubuntu Linux-based virtual desktops.

In summary, you never know what you will find when using social media. The smallest of tweets or the longest of blog articles can spark a thought that provokes either a transformation in process or action in piloting a solution. If you are looking to pilot a VDI solution, look no further: shoot me an email or contact Integrated Data Storage to schedule a time to sit down and talk about how we can make technology “just work” in your datacenter! Trust me when I say your users will love you after you implement a VDI solution.

Photo Credit: colinkinner