On this day two years ago, I wrote an article titled "Prediction: VDI will be ready for wholesale desktop replacement in 2010. Here's how we'll solve the problems to get there." That original article now has 19,000 page views and 48 comments, and a mid-point update I wrote in June 2009 has 6,000 views with 20 comments. So since it's obvious that people are interested in this topic, and since today is the two-year 2010 "deadline" I laid out in the original article, let's see how I did!
In case you don’t remember the original 2008 article, my basic point was that while VDI didn’t make sense for mainstream users in June 2008, the technical limitations would be solved within two years. As such, VDI would then become an option for “mainstream” desktops by June 2010. (As a point of clarification, I didn’t suggest that VDI use was going to “explode” or even become common by June 2010, rather, I suggested that the technical components allowing this to happen would be in place by 2010.)
The 2008 prediction was based on four technical components that needed to be in place:
- Single disk image for many users
- Remote display protocols that are indistinguishable from local
- Local / offline VDI
- Broader compatibility for app virtualization
(For more details about what I meant for each of these, read the original article.) In order to figure out how far along we are towards “wholesale desktop replacement,” let’s go through these four predictions one-by-one and look at where we are today.
1. Single disk image for many users
This was about the ability for many users to share a single disk image. My feeling at the time (and today too) is that except for niche cases, the management cost savings associated with VDI is heavily dependent on the ability for many users to share a single disk image. This essentially means that IT only has to maintain a single image instead of a personal image for each user.
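The mechanism behind all of these products is essentially copy-on-write: every user's desktop reads from one shared, read-only base image, and only that user's writes land in a small private delta. Here's a toy sketch of the idea in Python (the class and block layout are mine, purely illustrative, not any vendor's actual implementation):

```python
class CowDisk:
    """Toy copy-on-write disk: reads fall through to a shared base
    image; writes land in a small per-user delta."""

    def __init__(self, base):
        self.base = base      # shared, read-only image (one copy for everyone)
        self.delta = {}       # this user's private writes only

    def read(self, block):
        # A block this user has modified comes from the delta;
        # everything else comes straight from the shared base.
        return self.delta.get(block, self.base.get(block))

    def write(self, block, data):
        self.delta[block] = data  # the shared base is never touched


# One golden image, many users -- IT only has to patch the base.
base = {0: "bootloader", 1: "windows", 2: "office"}
alice, bob = CowDisk(base), CowDisk(base)
alice.write(2, "office+alice-plugin")

print(alice.read(2))  # alice sees her own change
print(bob.read(2))    # bob still sees the pristine shared block
```

This is why the storage cost of a hundred desktops can approach the cost of one: the base exists once, and each user pays only for their delta.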
Back in 2008 I only knew about Citrix Provisioning Server. Since then I've learned about similar products (Double-Take Flex and Wyse Streaming Manager), and just about every storage vendor now offers some kind of shared-image capability. VMware has introduced thin provisioning and linked clones, which are now tightly integrated into the VMware View product, and Quest has integrated cloning and differential snapshotting into vWorkspace.
We've also seen an explosion of other vendors offering different takes on virtual desktop disk image management. Atlantis Computing has been shipping for over a year, and Unidesk launched their first product this past Monday. MokaFive and Virtual Computer both have layering they make available to their VMs, and Wanova offers a similar capability for bare metal (including offline) client devices.
Based on all this, I can happily say that the ability for a single disk image to be shared by multiple users, as I defined it in 2008, is solved.
2. Remote display protocols that are indistinguishable from local
Remote display protocols are always a compromise between user experience, bandwidth, and processing power. When I wished for improvements to remote protocols in 2008, I wasn't suggesting that we do the impossible with perfect remoting over slow connections. I just meant that the protocol itself should not be a limiting factor.
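To see why the compromise is unavoidable, run the raw numbers: shipping an uncompressed desktop over the wire blows past even a fast LAN, which is why every protocol leans on compression, caching, and clever encoding, and why those tricks cost CPU and fidelity. A back-of-the-envelope calculation (the screen size and link speed are my own typical-for-2010 assumptions):

```python
# Uncompressed bandwidth for a single 1680x1050 desktop at 24-bit
# color, refreshed 30 times per second -- no compression at all.
width, height = 1680, 1050
bytes_per_pixel = 3           # 24-bit color
frames_per_second = 30

bits_per_second = width * height * bytes_per_pixel * 8 * frames_per_second
print(f"{bits_per_second / 1e9:.2f} Gbps raw")

# Versus what a branch-office WAN link might give one user in 2010:
wan_bps = 2e6                 # 2 Mbps (assumed)
print(f"compression needed: ~{bits_per_second / wan_bps:.0f}x")
```

Roughly 1.27 Gbps raw versus a 2 Mbps link: the protocol has to claw back a factor of several hundred, and every technique it uses to do that trades away either image quality or processing power somewhere.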
And here too we've made great progress since June 2008. Back then the only really high quality protocols were Qumranet Spice, HP RGS, and the hardware-bound PCoIP. RDP and ICA were there, but each ran into severe limitations when used as everyday full desktop replacements.
But what a difference two years makes! Citrix has made hundreds of improvements and tweaks to ICA (now called HDX) specifically geared towards full desktop users. VMware has released a software-based implementation of PCoIP that's built into their View product. Microsoft has released RDP 7, which supports multiple displays and Aero Glass (though not at the same time), and RemoteFX is around the corner. Smaller companies have joined in too, with Wyse releasing VDA and updating TCX, and Quest releasing an "Xtream" (their word) version of EOP.
Can we say that remote protocols are indistinguishable from local? That might be a stretch, but I think it is fair to say that in 2010, the remoting protocols are no longer show-stoppers for users. Given the right connection, most people would be perfectly happy working via a remote protocol day-in, day-out. (I've personally done this off-and-on over the past few years, often working exclusively via a remote protocol for months at a time.) I think we can call this limitation (again as I defined it in 2008) "solved."
3. Local / offline VDI
This was the ability to run a virtual machine locally on a client device, thereby removing the need for a remote display protocol altogether and "solving" the offline problem.
A solution to this didn't really exist in 2008 (save for maybe a manually-managed VMware ACE or Workstation implementation). In 2010 we have a completely different situation though.
Virtual Computer is out there now with their client hypervisor. MokaFive has a Type 2 client-based solution that's been available for a few years, and at BriForum they announced a bare metal-like solution that hides a thin Linux kernel under their VM. Virtual Bridges also has a similar Type 1-like client environment. And Wanova offers an offline / local solution that runs on bare metal and doesn't require a hypervisor at all.
Even though we're still waiting for bare metal solutions from Citrix and VMware, there are plenty of folks who run their managed desktops via one of these solutions today, and I think we can also call this solved.
4. Broader compatibility for app virtualization
In 2008 I wrote that in order for the whole "shared image" concept to work (as outlined in Point #1 above), we needed 100% application virtualization compatibility so that we could use ANY application in our managed desktops.
When I wrote my mid-point update article last year, I shared that my views on app virtualization compatibility had changed since the original article in 2008. In 2008 I was focused on compatibility levels. (i.e. if an app virtualization solution supported 96% of the world's apps in 2008, I wanted to see it support 100% of the world's apps by 2010.) But now I realize that's not realistic. Instead we have to figure out how we can offer a blend of application delivery solutions that will support 100% of our apps in shared disk image environments. We'll achieve 100% through a combination of app virtualization, native app installation into master disk images, seamless app delivery from terminal servers and VDI-hosted apps, and good old fashioned on-demand MSI installs.
I've also realized that in order to cover every use case with desktop virtualization, we need a way to handle user-installed apps. Back in 2008 and 2009 I was really focused on how the various app virtualization or user environment management products would handle user-installed apps, but since then I've realized we can also handle this today with techniques such as "side-by-side" VMs. (One personal and one shared.)
Can we call this solved now? Even though today's app virtualization products don't offer 100% compatibility, we have figured out other ways to get that compatibility. So yes, this is solved.
So we're 4 out of 4. But are we successful?
If you look at the specific questions I asked in 2008, and if you agree with my logic in today's article, then yes, we can classify all four of these issues as "solved." But the bigger statement I was defending was that VDI would be ready for "wholesale desktop replacement" by June 23, 2010. Do you think we're there today? No way!
So how is it possible that we can say "yes" to 4 out of 4 but still fail on the central question? I guess now we see the importance of asking the right question!
In this case the whole is less than the sum of its parts. Each of the components that make up a "wholesale desktop replacement" solution exists, but there's no way (today) to combine them all into a single perfect solution. That's not to say all is lost; it just means that (for the time being) we're going to have to continue to buy point products that solve specific pains. But let's save the date of June 23, 2011 to check back and see where we are. (By the way, if you want to look a few years out, I recently published my 2015 desktop vision. I also did a session with Chetan Venkatesh at BriForum last week about his 2016 vision, which I wrote about yesterday.)
What have I learned since 2008?
I mentioned that I didn't really ask the right questions back in 2008. (Well, at the time they were fine, but looking back now it's clear that I've learned a lot in the past two years.) Specific issues that I didn't think about back then include:
- Now I view "VDI" as just datacenter-hosted virtual desktops. Once you start bringing in client-based VMs, I'm now calling the larger technology "desktop virtualization."
- Client-based VMs don't have to wait until Type 1. We can do great things now with Type 2 (and with Type 2 environments that feel like Type 1).
- Remoting protocols are great, but there's a fundamental challenge around high-bandwidth peripherals (USB video cameras, etc.) that will never be solved.
- The mechanics of multiple users sharing a single disk image are easy. The difficulty lies in the logistical balancing of admin changes and user changes.
- For client-based VMs, synchronization between the client and the master image is critical.
- It's cool that the twenty links (or whatever) in this article mostly point back to stuff we've actually written about over the past two years. It's great to be able to have such an active role in the development of this whole concept!
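That point about the mechanics being easy but the logistics being hard is worth a concrete illustration. When the admin patches the base layer and the user has changed the same thing in their layer, somebody has to decide who wins, and a naive policy quietly loses data. A toy sketch (layer names and the merge policy are mine, invented for illustration):

```python
def compose(base, user_layer):
    """Compose a desktop from an admin-managed base plus a user
    layer. Naive policy: the user's changes always win on conflict."""
    desktop = dict(base)
    desktop.update(user_layer)   # user layer overwrites base entries
    return desktop

base_v1 = {"os": "win7-sp0", "app": "reader-9.0"}
user = {"app": "reader-9.1-beta", "wallpaper": "cats.jpg"}

# The admin ships a patched base image...
base_v2 = dict(base_v1, os="win7-sp1", app="reader-9.2")

# ...but the naive "user wins" merge silently keeps the user's
# stale app, undoing the admin's security update. Merging the two
# layers is trivial; deciding what SHOULD win is the hard part.
print(compose(base_v2, user))
```

The OS patch comes through, but the admin's app update is lost to the user's older copy. Every layering vendor has to pick a conflict policy here, and that policy (not the copy-on-write plumbing) is where these products really differ.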
The bottom line I guess is that I was wrong on June 23, 2008. But here's to counting down until June 23, 2011!