Why VDI Is So Hard and What To Do About It


Rapid consumerization, coupled with the availability of powerful, always-connected mobile devices and the capability for anytime, anywhere access to applications and data, is fundamentally transforming the relationship between IT and the end-user community for most of our customers.

IT departments are now faced with the choice to manage an incredible diversity of new devices and access channels, as well as the traditional desktops in the old way, or get out of the device management business and instead deliver IT services to the end-user in a way that aligns with changing expectations. Increasingly, our customers are turning to server-hosted virtual desktop solutions—which provide secure desktop environments accessible from nearly any device—to help simplify the problem. This strategy, coupled with Mobile Device Management tools, helps to enable BYOD and BYOC initiatives, allowing IT to provide a standardized corporate desktop to nearly any device while maintaining control.

However, virtual desktop infrastructure (VDI) projects are not without risk. This seems to be well understood, because it’s been the “year of the virtual desktop” for about four years now (actually, I’ve lost count). But we’ve seen and heard of too many VDI projects that have failed due to an imperfect understanding of the related design considerations or a lack of data-driven, fact-based decision making.

There is really only one reason VDI projects fail: The provided solution fails to meet or exceed end-user expectations. Everything else can be rationalized – for example as an operational expense reduction, capital expense avoidance, or security improvement. But a CIO who fails to meet end user expectations will either have poor adoption, decreased productivity, or an outright mutiny on his/her hands.

Meeting end-user expectations is intimately related to storage performance. That is to say, end user expectations have already been set by the performance of devices they have access to today. That may be a corporate desktop with a relatively slow SATA hard drive or a MacBook Air with an SSD. Both deliver dedicated I/O and consistent application latency. Furthermore, the desktop OS is written with a couple of salient underlying assumptions: that the OS doesn't have to be a "nice neighbor" in terms of access to CPU, memory, or disk, and that the foreground processes should get access to any resources available.

Contrast that with what we’re trying to do in a VDI environment. The goal is to cram as many of these resource-hungry little buggers on a server as you can in order to keep your cost per desktop lower than buying and operating new physical desktops.

Now, in the "traditional" VDI architecture, the physical host must access a shared pool of disk across a storage area network, which adds latency. Furthermore, those VDI sessions are little resource piranhas (credit: Atlantis Computing for the piranha metaphor). VDI workloads will chew up as many IOPS as you throw at them with no regard for their neighbors. This is also why many of our customers choose to purchase a separate array for VDI in order to segregate the workload. This way, VDI workloads don't impact the performance of critical server workloads!

But the real trouble is that most VDI environments we’ve evaluated average a whopping 80% random write at an average block size of 4-8K.

So why is this important? In order to meet end-user expectations, we must provide sufficient I/O bandwidth at sufficiently low latency. But shared storage arrays should not be sized on front-end IOPS requirements alone. They must be sized on backend IOPS, and it's the write portion of the workload that suffers a penalty.

If you’re not a storage administrator, that’s ok. I’ll explain. Due to the way that traditional RAID works, a block of data can be read from any disk on which it resides, whereas for a write to happen, the block of data must be written to one or more disks in order to ensure protection of the data. RAID1, or disk mirroring, suffers a write penalty factor of 2x because the writes have to happen on two disks. RAID5 suffers a write penalty of 4x because for each change to the disk, we must read the data, read the parity information, then write the data and write the parity to complete one operation.

Well, mathematically this all adds up. Let’s say we have a 400 desktop environment, with a relatively low 10 IOPS per desktop at 20% read. So the front-end IOPS at steady state would be:

10 IOPS per desktop x 400 Desktops = 4000 IOPS

If I were using 10k SAS drives at an estimated 125 IOPS per drive, I could get that done with an array of 32 SAS drives. Right?

Wrong. Because the workload is heavy write, the backend IOPS calculation for a RAID5 array would look like this:

(2 read IOPS x 400 desktops) + (8 write IOPS x 400 desktops x 4 RAID5 write penalty)

This is because 20% of the 10 IOPS are read and 80% of the IOPS are write. So the backend IOPS required here is 13,600. On those 125 IOPS drives, we’re now at 110 drives (before hot-spares) instead of 32.

But all of the above is still based on this rather silly concept that our users’ average IOPS is all we need to size for. Hopefully we’ve at least assessed the average IOPS per user rather than taking any of the numerous sizing assumptions in vendor whitepapers, e.g. Power Users all consume 12-18 IOPS “steady state”. (In fairness, most vendors will tell you that your mileage will vary.)

Most of our users are used to at least 75 IOPS (a single SATA drive) dedicated to their desktop workload. Our users essentially expect to have far more than 10 IOPS available to them should they need it, such as when they're launching Outlook. If our goal is a user experience on par with physical, sizing to the averages is just not going to cut it. Using this simple sizing methodology, we need to include at least 30% headroom, which puts us at roughly 140 disks on our array for 400 users with traditional RAID5. This is far more than we would need based on raw capacity.
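
Since the whole sizing argument is arithmetic, it's easy to script. Here is a minimal sketch of the calculation above in Python; the inputs (10 IOPS per desktop, 20% read, a RAID5 write penalty of 4, 125 IOPS per 10k SAS drive, 30% headroom) are the planning assumptions from this example, not universal constants.

```python
import math

def backend_iops(desktops, iops_per_desktop, read_pct, write_penalty):
    """Translate front-end IOPS into backend IOPS for a given RAID write penalty."""
    frontend = desktops * iops_per_desktop
    reads = frontend * read_pct
    writes = frontend * (1 - read_pct)
    return reads + writes * write_penalty

def drives_needed(backend, iops_per_drive, headroom=0.30):
    """Drive count required to service backend IOPS with headroom (before hot-spares)."""
    return math.ceil(backend * (1 + headroom) / iops_per_drive)

total = backend_iops(desktops=400, iops_per_desktop=10,
                     read_pct=0.20, write_penalty=4)   # RAID5 penalty of 4
print(total)                                  # 13600.0 backend IOPS
print(drives_needed(total, iops_per_drive=125))   # 142 drives, roughly the figure above
```

Swapping in RAID1's write penalty of 2 shows how much of the drive count is driven purely by the RAID choice.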

The fact is that VDI workloads are very “peaky.” A single user may average 12-18 IOPS once all applications are open, but opening a single application can consume hundreds or even thousands of IOPS if it’s available. So what happens when a user comes in to the office, logs in, and starts any application that generates a significant write workload—at the same time everyone else is doing the same? There’s a storm of random reads and writes on your backend, your application latency increases as the storage tries to keep up, and bad things start to happen in the world of IT.

So What Do We Do About It?

I hope the preceding discussion gives the reader a sense of respect for the problem we’re trying to solve. Now, let’s get to some ways it might be solved cost-effectively.

There are really two ways to succeed here:

1) Throw a lot of money at the storage problem, sacrifice a goat, and dance in a circle in the pale light of the next full moon [editor's notes: a) IDS does not condone animal sacrifice and b) IDS recommends updating your resume and LinkedIn profile in this case];

2)    Assess, Design, and Deliver Results in a disciplined fashion.

Assess, Don’t Assume

The first step is to Assess. The good news is that we can understand all of the technical factors for VDI success as long as we pay attention to end user as well as administrator experience. And once we have all the data we need, VDI is mostly a math problem.

Making data-driven, fact-based decisions is critical to success. Do not make assumptions if you can avoid doing so. Sizing guidelines outlined in whitepapers, even from the most reputable vendors, are still assumptions if you adopt them without data.

You should always perform an assessment of the current state environment. When we assess the current state from a storage perspective, we are generally looking for at least a few metrics, categorized by a user persona or use case.

  • I/O Requirements (I/O per Second or IOPS)
  • I/O Patterns (Block Size and Read-to-Write Ratio)
  • Throughput
  • Storage Latency
  • Capacity Requirements (GB)
  • Application Usage Profiles

Ideally, this assessment phase involves a large statistical set and runs over a complete business cycle (we recommend at least 30 days). This is important to develop meaningful average and peak numbers.
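
Once the assessment data is exported, turning raw samples into the average and peak numbers is straightforward. Here's a sketch, assuming you can dump per-desktop IOPS samples into a list per persona; the persona name and sample values below are invented for illustration.

```python
import math

def summarize(samples):
    """Average and 95th-percentile (nearest-rank) IOPS for one persona's samples."""
    s = sorted(samples)
    p95 = s[math.ceil(0.95 * len(s)) - 1]   # nearest-rank method: no interpolation
    return {"avg": sum(s) / len(s), "p95": p95}

# Hypothetical task-worker samples; note the login-storm spike at 40
task_worker = [4, 6, 5, 7, 9, 6, 5, 40, 6, 8]
print(summarize(task_worker))   # {'avg': 9.6, 'p95': 40}
```

The spike in the sample list is exactly why the peak matters: the average alone (9.6 IOPS) hides the 40 IOPS burst.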

Design for Success

There’s much more to this than just storage choices and these steps will depend upon your choice of hypervisor and virtual desktop management software, but as I put our fearless VDI implementers up a pretty big tree earlier with the IOPS and latency discussion, let’s resolve some of that.

Given the metrics we've gathered above, we can begin to plan our storage environment. As I pointed out above, this is not as simple as multiplying the number of users by the average I/O. We also cannot size based only on averages; we need at least 30% headroom.

We calculated above the number of disks we'd need to service the backend IOPS requirements with RAID5; now let's look at improved storage capabilities and approaches that reduce the impact of this random write workload.

Solid State Disks

Obviously, Solid State Disks offer more than 10 times the IOPS per disk of spinning disks, at greatly reduced access times, because they have no moving parts. If we took the 400 desktop calculation above and used a 5,000 IOPS SSD as the basis for our array, we'd need very few drives to service the IOPS (13,600 ÷ 5,000 rounds up to just 3, ignoring RAID overhead and capacity).

Promising. But there are both cost and reliability concerns here. The cost per GB on SSDs is much higher, and write endurance on an SSD is finite. (There have been many discussions of MLC, eMLC, and SLC write endurance, so we won't cover that here.)

Auto-Tiering and Caching

Caching technologies can certainly provide many benefits, including reducing the number of spindles needed to service the IOPS requirements and latency reduction.

With read caching, certain "hot" blocks get loaded into an in-memory cache or, more recently, a flash-based tier. When the data is requested, instead of having to seek the data on spindles, which can incur tens of milliseconds of latency, the data is available in memory or on a faster tier of storage. So long as the cache is intelligent enough to cache the right blocks, there can be a large benefit for the read portion of the workload. Read caching is a no-brainer: most storage vendors have options here, and VMware offers a host-based Read Cache.

But VDI workloads are more write intensive. This is where write buffering comes in.

Most storage vendors have write buffers serviced by DRAM or NVRAM. Basically, the storage system acknowledges the write before the write is sent to disk. If the buffer fills up, though, latency increases as the cache attempts to flush data out to the relatively slow spinning disk.

Enter the current champion in this space, EMC's FAST Cache, which alleviates some concerns around both read I/O and write I/O. In this model, Enterprise Flash is used to extend a DRAM cache, so if the spindles are too busy to deal with all the I/O, the extended cache is used. The benefits to us: more content in the read cache and more writes in the buffer waiting to be coalesced and sent to disk. Of course, it's rather more complex than that, but you get the idea.

EMC FAST Cache is ideal in applications in which there is a lot of small block random I/O – like VDI environments – and where there's a high degree of access to the same data. Without FAST Cache, the benefit of the DRAM cache alone is about 20%. So 4 out of every 5 I/Os has to be serviced by a slow spinning disk. With FAST Cache enabled, it's possible to reduce the impact of read and write I/O by as much as 90%. That best case assumes the FAST Cache is dedicated to VDI and all of the workloads are largely the same. Don't assume that this means you can leverage your existing mixed-workload array without significant planning.

OK, so if we're using an EMC VNX2 with FAST Cache dedicated only to VDI, we hope to obtain up to a 90% reduction of back-end write I/O. Call me conservative, but I think we'll dial that back a bit for planning purposes and then test it during our pilot phase to see where we land. We calculated 12,800 backend write IOPS earlier for 400 desktops. Let's say we can halve that. We're now at 7,200 total IOPS (800 read plus 6,400 write) for 400 VDI desktops. Not bad.
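
The planning math here is simple enough to capture in a few lines. This sketch just applies a write-offload factor to the backend write IOPS from the earlier RAID5 example; the 50% factor is our deliberately conservative planning assumption, to be validated during the pilot, not a FAST Cache guarantee.

```python
def cached_backend_iops(read_iops, write_iops, write_offload):
    """Backend IOPS after the cache absorbs a fraction of the write workload."""
    return read_iops + write_iops * (1 - write_offload)

# From the 400-desktop RAID5 example: 800 backend read, 12,800 backend write
raw_read, raw_write = 800, 12800
print(cached_backend_iops(raw_read, raw_write, write_offload=0.50))   # 7200.0
```

Re-running the same function with the vendor's best-case 0.90 offload, and with 0.0, brackets the range you should plan and test against.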

Hybrid and All-Flash Arrays

IDS has been closely monitoring the hybrid-flash and all-flash array space and has selected solutions from established enterprise vendors like EMC and NetApp as well as best-of-breed newer players like Nimble Storage and Pure Storage.

The truly interesting designs recognize that SSDs should not be used as if they were traditional spinning disks. Instead, these designs optimize the data layout for writes. As such, even though they utilize RAID technology, they do not incur a meaningful write penalty, meaning that it's generally pretty simple to size the array based on front-end IOPS. This also reduces some of the concern about write endurance on the SSDs. When combined with techniques which both coalesce writes and compress and de-duplicate data in-line, these options can be attractive on a cost-per-workload basis even though the cost of Flash remains high.

Using a dedicated hybrid or flash-based array would get us to something like a single shelf needed for 400 users. At this point, we’re more sizing for capacity than I/O and latency, a situation that’s more familiar to most datacenter virtualization specialists. But we’re still talking about an approach with a dedicated array at scale.

Host-Based Approaches

A variety of other approaches to solving this problem have sprung up, including the use of host-based SSDs to offload portions of the I/O, expensive Flash memory cards providing hundreds of thousands of I/Os per card, and software approaches such as Atlantis Computing's ILIO virtual appliances, which leverage relatively inexpensive system RAM as a low-latency de-duplicated data store and functionally reduce VDI's impact on existing storage. (Note: IDS is currently testing the Atlantis Computing solution in our Integration Lab.)

Design Conclusion

Using a combination of technology approaches, it is now possible to provide a VDI user experience that exceeds current user expectations at a cost per workload below the acquisition cost of a standard laptop. The server-hosted VDI approach has many benefits in terms of operational expense reduction as well as data security.

Delivering Results

In this article, we’ve covered one design dimension that influences the success of VDI projects, but there’s much more to this than IOPS and latency. A disciplined engineering and delivery methodology is the only way to deliver results reliably for your VDI project. At minimum, IDS recommends testing your VDI environment at scale using tools such as LoginVSI or View Planner as well as piloting your solution with end user champions.

Whether you’re just getting started with your VDI initiative, or you’ve tried and failed before, IDS can help you achieve the outcomes you want to see. Using our vendor-agnostic approach and disciplined methodology, we will help you reduce cost, avoid business risk, and achieve results.

We look forward to helping you.


Photo credit: linademartinez via Flickr

Why, Oh Why To Do VDI?


I recently became a Twit on Twitter, and have been tweeting about my IT experiences with several new connections. In doing so, I came across a tweet about a contest to win some free training, specifically VMware View 5 Essentials from @TrainSignal – sweet!

Below is a screen capture of the tweet:


A jump over to the link provided in the tweet explains that one or all of the questions below should be commented on in the blog post in order to win. Instead of commenting on that blog, why not address ALL of the questions in my own blog article at IDS?! Without further ado, let's jump right in to the questions:

Why are Virtual Desktop technologies important nowadays, in your opinion?

Are you kidding me?!

If you are using a desktop computer or workstation at work, or a laptop at home or work, you are well aware that technology moves so fast that updated versions are released as soon as you buy a "new" one. Not to mention that laptops usually come configured with what the vendor or manufacturer thinks you should be using, not with what is best, most efficient, or fastest. More often than not, you are provided with what someone else thinks is best for the user. The reality is that only you, the user, know what you need, and if no one bothers to ask you, there can be feelings of being trapped, having no options, or resignation, all of which tend to lead to the dreaded "buyer's remorse."

When you get the chance to use a virtual desktop, you finally get a “tuned-in” desktop experience similar to or better than the user experience that you have on the desktop or laptop from Dell, HP, IBM, Lenovo, Gateway, Fujitsu, Acer and so on.

Virtual desktops offer a "tuned" experience because architects design the infrastructure and solution from the operating system in the virtual desktop, be it Windows XP or Windows 7 (soon Windows 8), down to the right number of virtual CPUs (vCPUs), the capacity of guest memory, disk IOPS, network IOPS, and everything else you wouldn't want to dive into the details of. A talented VDI Architect will consider every single component when designing a virtual desktop solution because the user experience matters; there is no selling them on the experience "next time." Chances are if you have a negative experience the first time, you will never use a virtual desktop again, nor will you have anything good to say when the topic comes up at your neighborhood barbecue or pool party.

The virtual desktop is imperative because it drives the adoption of heads-up displays (HUDs) in vehicles, at home, and in the workplace, as well as slimmer-interface tablet devices. Personally, when I think about the future of VDI, I envision expandable OLED flex screens with touch-based (scratch-resistant) interfaces that connect wirelessly to private or public cloud-based virtual desktops. The virtual desktop is the next frontier, leaving behind the antiquated desktop experience that has been dictated to the consumer by vendors and manufacturers and that simply does not give us what is needed the first time.

What are the most important features of VDI in your opinion?

Wow, the best features of VDI require a VIP membership in the exclusive VDI community. Seriously though, users and IT support staff are often the last to be asked about the most important features, yet they are the first to be impacted when a solution is architected, because those two groups are the most in lock-step with the desktop user experience.

The most effective way for me to leave a lasting impression is to lay out the most important features in a couple of bullet statements:

  • Build a desktop in under 10 minutes (how about 3 minutes?).
  • Save personal settings and recover personal desktop settings immediately after rebuilding a desktop.
  • Increased speed by which more CPU or RAM can be added to a virtual desktop.
  • Recovery from malware, spyware, junkware, adware, trojans, viruses, everything-ware: you can save money by just rebuilding in less than 10 minutes.
  • Access to the desktop from anywhere, securely.
  • It just works, like your car's windshield!

That last point brings me to the most important part of VDI, that when architected, implemented and configured properly, it just works. My mantra in technology is “Technology should just work, so you don’t have to think about technology, freeing you up to just do what you do best!”

What should be improved in VDI technologies that are now on the market?

The best architects, solution providers and companies are the best because they understand the current value of a solution, in this case VDI, as well as the caveats and ask themselves this exact question. VDI has very important and incredibly functional features, but there is a ton of room for improvement.

So, let me answer this one question with two different hats on – one hat being a VDI Architect and the other hat being a VDI User. My improvement comments are based on the solution provided by VMware as I am most familiar with VMware View.  In my opinion, there is no other vendor in the current VDI market who can match the functionality, ease of management and speed that VMware has with the VMware View solution.

As a VDI Architect, I am looking for VMware to improve their VMware View product by addressing the below items:

  • Separate VMware View Composer from being on the VMware vCenter Server.
  • Make ALL of the VMware View infrastructure applications, appliances and components 64-bit.
  • Figure out and support Linux-based linked-clones. (The Ubuntu distribution is my preference.)
  • Get rid of the VMware View Client application – this is 2012.
  • Provide a fully functional web-based or even .hta based access to the VMware View virtual desktop that is secure and simple.
  • Build database compatibility with MySQL, so there is a robust FREE alternative to use.
  • Build Ruby-on-Rails access to manage the VMware View solution and database. Flash doesn’t work on my iPad!

As a VDI User, I am looking for VMware to improve:

  • Access to my virtual desktop: I hate installing another application that requires "administrator" rights.
  • Fix ThinPrint and peripheral compatibility, or provide a clearer guide for what is supported in USB redirection.
  • Support USB 3.0. I don't care that my network or Internet connection cannot handle the speed; I want the sticker that says the solution is USB 3.0 compatible and that I could get those speeds with a private cloud-based VDI solution.
  • Tell me that you will be supporting the Thunderbolt interface, and follow through within a year.
  • Support web cams: I don't want to know why it is difficult, I just want it to work.
  • Support Ubuntu Linux-based virtual desktops.

In summary, you never know what you will find when using social media. The smallest of tweets or the longest of blog articles can elicit a thought that will provoke either a transformation in process or action in piloting a solution. If you are looking to pilot a VDI solution, look no further… shoot me an email or contact Integrated Data Storage to schedule a time to sit down and talk about how we can make technology “just work” in your datacenter!  Trust me when I say, your users will love you after you implement a VDI solution.

Photo Credit: colinkinner

Don’t Get Hung Out To Dry With The HCL: There’s OneCommand Manager for VMware vCenter …


Is nothing sacred?

As the professionally paranoid, we know all too well that we cannot take anything for granted when deploying a new solution.

However, one list that has long gone un-scrutinized by the typical IT professional is the published VMware Hardware Compatibility List. A friend of mine in the IT space recently underwent the less-than-pleasant experience of having the beloved HCL fail him, resulting in the worst kind of IT issue: intermittent complete outages of his VMware hosts. He was hung, with no vMotion, the only course of action being to reboot the ESXi host and pray the VMs survive.

With weeks between host outages, the problem was almost impossible to pinpoint. Through detailed troubleshooting, the breadcrumbs eventually led to the 10G Qlogic single-port converged network adapter (CNA). You'll be as surprised as my friend was to find that this particular card is well documented as "supported" on VMware's HCL.

Yes! Betrayed by the HCL! Making matters worse is the fact that the card is also fully supported by HP in his new DL385 G7 servers, as well as by the Cisco Nexus switch into which it was plugged. While Qlogic is a well-established player in the HBA/CNA space, their email-only support did not live up to the Qlogic reputation. My friend and his entire team spent countless hours working on the issue with minimal to no support from Qlogic.

Backed into a corner, they decided to take a chance on Emulex OCe11102-FX converged adapters, another formidable player in the market. Issues did arise again, but not stability issues: CIM functionality issues. Unlike their competition, Emulex stepped up to the plate and served up a home run. They took the time to recreate his issue in their lab and boiled it down to the installation order of the CIM software.

OneCommand Manager for VMware vCenter was then installed. Once the Emulex CIM was installed prior to the HP CIM, my friend finally achieved sustained stability and solid CIM functionality. Some lessons that were learned or reinforced by this experience:

  1. Make certain the hardware you are looking to invest in is on the VMware HCL.
  2. Google the specific hardware for reviews and/or comments on the VMware support forums.
  3. Confirm that the hardware vendor you select offers phone AND email support, not just email support.

Photo Credit: gemtek1

VMware View Client: It’s All Fun And Games Until Someone Can’t Remotely Log On …


…  to their virtual desktop while traveling in Europe.

Why is this issue occurring?

Did you configure the vSphere environment correctly?

Did the View Administrator make a change that you are unaware of?

Where is the documentation binder, if you even have one?

Where should you check first?

Since we are focusing on a remote virtual desktop, let's trace from the client into the virtual environment, similar to following the OSI model from the Physical Layer up to the Application Layer until the problem is found. The exception is that we are following the issue from outside our network in.


In order for our scenario to play out, let’s assume the following:

  1. A virtual machine (VM) has a connection to the virtual network.
  2. A desktop pool has been created with a dedicated desktop for the user.
  3. DNS is functioning properly – forward and reverse in the environment.
  4. SSL is configured correctly.
  5. The user is part of the proper group with appropriate permissions and entitlements.
  6. Networking on the virtual desktop is configured correctly.
  7. There are NO issues with the VM operating system (OS).
  8. Ports are configured properly for your network environment.
  9. PCoIP is configured as the primary remote display protocol and RDP is the secondary.
  10. Both display protocols are functioning properly.
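
Two of the assumptions above, working forward/reverse DNS and correctly configured ports, are easy to sanity-check from any machine with Python before touching View itself. This is a generic sketch: the host name is a placeholder, and the port list (443 for HTTPS, 4172 for PCoIP) is only an example; substitute whatever your deployment actually uses.

```python
import socket

def check_host(fqdn, ports=(443, 4172), timeout=2.0):
    """Resolve fqdn forward and reverse, then try a TCP connect to each port."""
    report = {}
    try:
        ip = socket.gethostbyname(fqdn)
        report["forward"] = ip
        try:
            report["reverse"] = socket.gethostbyaddr(ip)[0]
        except socket.herror:
            report["reverse"] = None          # reverse record missing
    except socket.gaierror:
        report["forward"] = None              # forward lookup failed
        return report
    for port in ports:
        try:
            with socket.create_connection((report["forward"], port), timeout):
                report[port] = "open"
        except OSError:
            report[port] = "closed/filtered"
    return report

# Example: swap in your View Connection Server's FQDN
print(check_host("view.example.com"))
```

Note this only checks TCP reachability; PCoIP also rides UDP 4172, which a simple connect test cannot verify.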

Having confirmed connectivity, ports, protocols and finally that the VMs are operational, based on the above assumptions, where should we check next?

Jump into VMware View Administrator using your specific URL: https://<view-connection-server-FQDN>/admin. Once you log on, open Inventory|Desktops. In the Filter field, enter the name of the user's assigned virtual desktop and determine if the virtual desktop is in use by someone other than the user. If everything checks out, open vCenter using the vSphere Client. Select Home|VMs and Templates. Once again, locate the user's assigned virtual machine desktop. Select the Console tab and determine if you can see the MS Windows desktop or if the screen has been locked by an administrator.

The most overlooked problem for an administrator is forgetting that a console session is viewed as a logged-on user, and therefore must be logged out. When watching a virtual desktop reboot from the console, it is easy to forget that there is a session open. Disconnecting from the console or the VM desktop will eventually lock the current user, thus preventing someone from remotely connecting to their VMware View Client virtual desktop from that trip in Europe.

Certainly after remedying this issue, an experienced administrator would dive into vCenter to check the console for a particular VM desktop – knowing exactly what to look for. What it comes down to is learning how to perform troubleshooting the long way, so that there is a deep understanding of how all the technical components work. This knowledge will lead to more efficient troubleshooting and quicker resolution of issues in the future.

In the end it is about working smarter, not harder.

Photo Credit: VMware

Life’s A Beach With Remote vSphere Management on the iPad


Leaving for Bali? Vacationing in upper northwest Indiana? Just heading to Grandma’s for the weekend? Then this is the blog post for you!

As your virtual travel guide, here are the four things you need to manage your vSphere environment while on vacation (or take a vacation while managing your vSphere environment).

Before leaving the office, a few things need to be in place…

1. Make sure you’ve downloaded the latest vCMA virtual appliance from VMware Labs:
a) Head to:
b) Install into your infrastructure, and give the appliance an IP address.
2. You will need an iPad with 3G capabilities.
3. VPN connectivity to your private network.
a) Cisco AnyConnect Client for the iPad works great, as shown below:

[image title=”Slide1″ size=”small” align=”center” width=”400″ height=”300″][/image]

b) You can also use the native VPN ability of the iPad.
4. vSphere Client for the iPad.

Once you’ve gotten to your destination of choice, follow these steps to gain access to your vSphere environment:

1. Go to iPad Settings >>Apps >>vSphere Client.
a) Set the Web Server to the IP address of the vCMA appliance.
2. Establish VPN connectivity.
3. Launch the vSphere Client and log in to vCenter, as seen on the initial login screen:

[image title=”Slide2″ size=”small” align=”center” width=”400″ height=”300″][/image]

After entering your credentials, you should see the summary screen of your vCenter environment:

[image title=”Slide3″ size=”small” align=”center” width=”400″ height=”300″][/image]

From the summary screen you can drill into your ESX servers and be able to do the following:

• View ESX Server CPU, memory, disk & network load.
• View ESX Server Hardware summary and performance:

[image title=”Slide4″ size=”small” align=”center” width=”400″ height=”300″][/image]

• Inventory of the VMs on the server.
• From this page you can reboot your ESX Server or enter Maintenance Mode.

From the ESX server screen you can drill into the VM:

[image title=”Slide5″ size=”small” align=”center” width=”400″ height=”300″][/image]

Within this screen you will be able to do the following:

• View VM Server CPU, memory, and disk load.
• View VM and the latest VM events.
• View & restore any snapshots associated to the VM.
• You can also Start, Stop, Restart and Suspend the VM.

I’ve only tested this scenario from the beach, but I’m sure it works on the golf course too.

Photo Credit: skylerf

Liquidware Labs ProfileUnity Is #Winning As VMware View’s Profile Management Solution


There has been much fanfare surrounding the current VDI offering from VMware View 4.6, and for good reason. It has been praised by Gartner as being ready for the enterprise and named by eWeek as one of the products of the year, alongside the iPad.

With the release of View 4.5, VMware is making the case that it should be the enterprise desktop and application delivery platform of the future. (Gartner)

However, it continues to be a growing product, and even with 4.6 and VMware’s acquisition of RTO, there is still not a solution from VMware around profile management. VMware has recommended Liquidware Labs ProfileUnity as its profile management solution to customers. VMware is listening to its customer base and critics who have long rallied for licensing costs to be brought down, and is even going so far as to subsidize some of the licensing costs around a ProfileUnity purchase.

I’m recognizing VMware View 4.5 as a “best of” product in 2010, but I expect even more progress in the coming year. Among other factors, license costs need to continue their downward trend to bring overall acquisition costs below those associated with traditional desktop systems. (eWeek)

ProfileUnity is a great step in the right direction for profile management on both physical and virtual desktops. It gives the administrator a central point of management for user profiles at a very granular level. Those of you who have worked with Microsoft's roaming profiles know that moving a user's entire profile around can be time-consuming. ProfileUnity steps in and allows you to move only the bits you need.

With ProfileUnity 4.8 gearing up for general release soon, I can comfortably say that the most anticipated feature is full Office 2010 support—specifically regarding creation of MAPI profiles.

Currently, the only path for Office 2010 users is to migrate existing MAPI profiles from previous versions of Office. With 4.8, users will be able to leverage ProfileUnity's profile creation mechanism instead. It allows the administrator to filter users based on security groups and direct them to their specific Exchange server, then uses variables such as the SAM account name to specify the mailbox name. This is a beautiful way to seamlessly script MAPI profile creation.
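To make the idea concrete, here is a minimal sketch of that kind of rule-based MAPI targeting: pick an Exchange server from the user's security groups and derive the mailbox name from the SAM account name. The group names, server hostnames, and the function itself are hypothetical illustrations of the concept, not ProfileUnity's actual implementation.

```python
# Sketch of rule-based MAPI profile targeting: choose an Exchange server by
# security group and name the mailbox after the SAM account name. All names
# here are hypothetical; this is not ProfileUnity's real mechanism.

EXCHANGE_BY_GROUP = {
    "Sales": "exch-sales.corp.example.com",
    "Engineering": "exch-eng.corp.example.com",
}

def mapi_profile_settings(sam_account_name, security_groups):
    """Resolve the Exchange server and mailbox name for a user."""
    for group in security_groups:
        if group in EXCHANGE_BY_GROUP:
            return {
                "exchange_server": EXCHANGE_BY_GROUP[group],
                # Mailbox named with the SAM account name variable
                "mailbox": sam_account_name,
            }
    raise LookupError("no Exchange mapping found for user's groups")
```

The point is simply that profile creation becomes a deterministic function of directory attributes, which is what makes it scriptable at scale.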

Photo Credit: jeffk

VMware View Client for iPad Finally Released

By | Cloud Computing, View, Virtualization, VMware | No Comments

The VMware View Client for iPad is now available on the app store.

This long-awaited release rides the coattails of View 4.6's recent release. It supports the Apple Bluetooth keyboard and mouse, sold separately from the iPad, as well as the Apple video output dongle. Both make the iPad-plus-View combination an even more viable replacement for a traditional desktop. George Shiffler, a Gartner market research director, stated last week:

“We expect growing consumer enthusiasm for mobile PC alternatives, such as the iPad and other media tablets, to dramatically slow home mobile PC sales, especially in mature markets.”

While there is no word yet on compatibility with the iPad 2, it is reasonable to assume that it will be compatible. Additionally, revisions to the View Client for iPad should come more frequently now that it has been released. Click below for a demo from VMware.

Oh, and the best part: it’s free 🙂

Photo Credit: Fausa

Two Small Steps For VMware View 4.6, One Huge Step Towards iPad Client

By | Cloud Computing, View | No Comments

Last week VMware released its highly anticipated VMware View 4.6, building on View 4.5's user-friendly setup and easy-to-use interface. One important feature finally made the cut in version 4.6: PCoIP desktop connections through the Security Server.

View 4.6 brings this feature to the table and provides what I know many of my customers still need. It is another step toward true device independence and user mobility for end user computing. PCoIP through the Security Server eliminates the dependency on your traditional corporate VPN solution, as well as on VPN clients and the like.

Without PCoIP through the Security Server, remote users are limited to RDP as the display protocol, which does not offer the best user experience, or they must rely on additional infrastructure outside of what VMware View offers as part of its package.

What else does this mean? Drum roll please…the Apple iPad Client!!

I believe this feature is the lead-in to VMware's much-anticipated iPad client. From the information I have garnered, the general public is weeks away from the iPad client's release and availability for download from the App Store.

During PTAB at VMworld last fall, I had the opportunity to play with the iPad client and was truly blown away. Warren Ponder from VMware was also showing it off at the VMworld booth; if you haven't seen it yet, check it out here. I have been anxiously awaiting the iPad client since that day, and at one point I even went as far as trying to join betas to obtain pre-release code.

I have had to keep using Wyse PocketCloud in the meantime, but the VMware version puts the Wyse offering to shame due to Wyse's dependency on RDP. VMware's iPad client will provide PCoIP support and a rich desktop experience no matter where the user is or what device they are on.

While being able to use the iPad opens up many possibilities for end users, the problems that come with going without PCoIP through the Security Server would still be prevalent. Pairing the iPad client with the VMware View Security Server will make life a lot easier for users.

[framed_box width=”583″ height=”207″ bgColor=”#711200″ textColor=”#ffffff” rounded=”true”] An overview of the other features in VMware View 4.6: this is a minor release that includes:

  1. Support for secure PCoIP tunneling
  2. Over 160 bug fixes
  3. Improvements in using Windows 7 SP1 RC as a remote desktop OS
  4. Better keyboard mapping support
  5. Enhanced USB device compatibility

This minor release will be available as a free upgrade to customers with a currently active VMware View support and subscription contract. Please refer to the release notes for vSphere and View compatibility guidelines.

The Upside of Storage Overcommit in VMware View 4.5 (And the Downside of Monitoring It)

By | View, VMware | No Comments

Storage overcommit is one of the strongest selling points of VMware View. Available through vSphere, it allows you to overcommit the storage of linked clones, which in turn allows more desktops to be stored on a datastore allocated to View.

The caveat of this architecture is that the administrator must be diligent in overseeing the datastores that contain the linked-clone desktops. Left unattended, linked clones can grow until they fill the datastore. This leads to downtime for every VM on the affected datastore, which is not good for uptime and definitely not fun to clean up.

Recently, I did a deep dive into the algorithms behind VMware View's storage overcommit, and, even more interesting, the actual behavior of View once the commitment level is reached. How the View Connection Manager and View Composer behave when faced with a desktop pool that violates the predetermined overcommit levels isn't (yet) well documented by VMware.

Let’s do a quick level set, then get into what I found:

1) Storage overcommit applies to Linked Clones.

2) Storage overcommit levels are set during pool creation.

3) There are four levels of overcommit: None, Conservative, Moderate, and Aggressive. These may sound ambiguous, but they are documented by VMware on page 89 of the View 4.5 Administrator's guide. The corresponding multipliers are 1x, 4x, 7x, and 15x, respectively.
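As a rough illustration of what those multipliers mean for sizing, the logical capacity a datastore can advertise at each level is just capacity times multiplier. The multipliers come from the View 4.5 Administrator's guide; the helper functions and the idea of reserving replica space are my own simplification, not a VMware formula.

```python
# Sketch: estimate the logical linked-clone capacity of a View datastore at
# each overcommit level. Multipliers are from the View 4.5 Administrator's
# guide; the sizing logic itself is an illustrative simplification.

OVERCOMMIT_MULTIPLIERS = {
    "none": 1,
    "conservative": 4,
    "moderate": 7,
    "aggressive": 15,
}

def logical_capacity_gb(datastore_gb, level):
    """Logical storage View may allocate on a datastore at a given level."""
    factor = OVERCOMMIT_MULTIPLIERS[level.lower()]
    return datastore_gb * factor

def max_desktops(datastore_gb, level, replica_gb, clone_max_gb):
    """Rough count of linked clones that fit, reserving room for the replica."""
    usable = logical_capacity_gb(datastore_gb, level) - replica_gb
    return max(0, int(usable // clone_max_gb))
```

For example, a 500 GB datastore at the Moderate level advertises 3,500 GB of logical capacity, which is exactly why unattended clone growth is so dangerous.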

So what happens if you create a desktop pool which violates the constraints?

One might think that doing so would raise a red flag in the Desktop Pool Creation Wizard and stop you, but it does not. The View administrator console simply passes instructions to vCenter, which in turn leverages Composer to create the linked-clone desktops, so the console will pass the instructions to provision all of the desktops. Composer will create desktops until roughly 80% of the datastore is consumed and then stop. This generates an error in the View administrator console, visible at the pool level, that states “%Timestamp%: None of the specified datastores is accessible from connected hosts.”
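Since Composer quietly stops provisioning at roughly the 80% mark, it is worth watching that threshold yourself rather than waiting for the cryptic error. Here is a minimal sketch of that check; the 0.80 cutoff reflects the behavior I observed, and the data shape is illustrative (real capacity numbers would come from your monitoring tooling or the vSphere API):

```python
# Sketch: flag View datastores approaching the ~80% fill level at which
# Composer stops provisioning linked clones. Threshold and data shape are
# illustrative; real stats would come from the vSphere API.

COMPOSER_STOP_THRESHOLD = 0.80  # observed behavior, not officially documented

def datastores_at_risk(datastores, threshold=COMPOSER_STOP_THRESHOLD):
    """Return (name, utilization) for datastores at or above the threshold."""
    at_risk = []
    for ds in datastores:
        used = ds["capacity_gb"] - ds["free_gb"]
        utilization = used / ds["capacity_gb"]
        if utilization >= threshold:
            at_risk.append((ds["name"], round(utilization, 2)))
    return at_risk

# Hypothetical datastore stats for two View LUNs
stats = [
    {"name": "view-lun-01", "capacity_gb": 500, "free_gb": 250},
    {"name": "view-lun-02", "capacity_gb": 500, "free_gb": 75},
]
```

Run on a schedule, a check like this gives you warning before Composer silently stops and the pool starts throwing errors.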

I have reported this issue to VMware, and they indicated that it will be clarified in future versions of the documentation, as well as in the behavior of the Administrator console.