My June 2010 VDI prediction deadline is here. Did it come true?

On this day two years ago, I wrote an article titled "Prediction: VDI will be ready for wholesale desktop replacement in 2010. Here's how we'll solve the problems to get there." That original article now has 19,000 page views and 48 comments, and a mid-point update I wrote in June 2009 has 6,000 views with 20 comments. So since it's obvious that people are interested in this topic, and since today is the two-year 2010 "deadline" I laid out in the original article, let's see how I did!

In case you don’t remember the original 2008 article, my basic point was that while VDI didn’t make sense for mainstream users in June 2008, the technical limitations would be solved within two years. As such, VDI would then become an option for “mainstream” desktops by June 2010. (As a point of clarification, I didn’t suggest that VDI use was going to “explode” or even become common by June 2010; rather, I suggested that the technical components allowing this to happen would be in place by 2010.)

The 2008 prediction was based on four technical components that needed to be in place:

  • Single disk image for many users
  • Remote display protocols that are indistinguishable from local
  • Local / offline VDI
  • Broader compatibility for app virtualization

(For more details about what I meant for each of these, read the original article.) In order to figure out how far along we are towards “wholesale desktop replacement,” let’s go through these four predictions one-by-one and look at where we are today.

1. Single disk image for many users

This was about the ability for many users to share a single disk image. My feeling at the time (and today too) is that except for niche cases, the management cost savings associated with VDI are heavily dependent on many users being able to share a single disk image, which essentially means that IT only has to maintain one image instead of a personal image for each user.
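To make this concrete, here's a minimal copy-on-write sketch in Python. It's purely illustrative (the block layout, the sizes, and the dict-based storage are invented and don't represent any particular vendor's product), but it shows why one golden image plus small per-user deltas is so much cheaper to maintain than a full personal image per user:

    # A minimal, hypothetical sketch (not any vendor's product): many users share
    # one read-only "golden" image, and each user's writes go to a private delta.
    GOLDEN_IMAGE = {0: b"bootloader", 1: b"windows-system32", 2: b"office-suite"}

    class UserDisk:
        """Reads fall through to the shared golden image unless this user has
        overwritten the block, in which case the per-user delta wins."""
        def __init__(self):
            self.delta = {}                       # block number -> user-specific data

        def read(self, block):
            return self.delta.get(block, GOLDEN_IMAGE.get(block, b""))

        def write(self, block, data):
            self.delta[block] = data              # copy-on-write: the golden image never changes

    # A thousand users cost one golden image plus a thousand small deltas, and
    # patching the golden image updates everyone at next boot.
    alice, bob = UserDisk(), UserDisk()
    alice.write(2, b"alice's cached wallpaper")
    assert bob.read(2) == b"office-suite"         # Bob still sees the shared block
    assert alice.read(1) == b"windows-system32"   # untouched blocks come from the base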

Back in 2008 I only knew about Citrix Provisioning Server. Since then I've learned about similar products (DoubleTake Flex and Wyse Streaming Manager), and just about every storage vendor now has some kind of shared image capability. VMware has introduced thin provisioning and linked clones, which are now tightly integrated into the VMware View product, and Quest has integrated cloning and differential snapshotting into vWorkspace.

We've also seen an explosion of other vendors offering different takes on virtual desktop disk image management. Atlantis Computing has been shipping for over a year, and Unidesk launched their first product this past Monday. MokaFive and Virtual Computer both offer layering for their VMs, and Wanova offers a similar capability for bare-metal (including offline) client devices.

Based on all this, I can happily say that the ability for a single disk image to be shared by multiple users, as I defined it in 2008, is solved.

2. Remote display protocols that are indistinguishable from local

Remote display protocols are always a compromise between user experience, bandwidth, and processing power. When I wished for improvements to remote protocols in 2008, I wasn't suggesting that we do the impossible with perfect remoting over slow connections. I just meant that the protocol itself should not be a limiting factor.
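For a sense of scale, here's some back-of-the-envelope arithmetic (the display size, frame rate, and WAN budget are my own assumed numbers) showing why no protocol can simply ship raw pixels:

    # Back-of-the-envelope arithmetic with assumed numbers: raw pixels for one
    # full-HD display vs. a plausible per-user WAN budget.
    width, height = 1920, 1080
    bits_per_pixel = 24
    frames_per_second = 30

    raw_bps = width * height * bits_per_pixel * frames_per_second
    print("Uncompressed: %.0f Mbps" % (raw_bps / 1e6))                # ~1,493 Mbps

    wan_budget_mbps = 10                                              # assumed per-user link
    print("Reduction needed: ~%.0f:1" % (raw_bps / (wan_budget_mbps * 1e6)))

Hence every protocol leans on compression, caching, and only sending what changed; the question was never whether to compromise, but whether users notice the compromise.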

And here too we've made great progress since June 2008. Back then the only really high quality protocols were Qumranet Spice, HP RGS, and the hardware-bound PCoIP. RDP and ICA were there, but each ran into severe limitations when used as everyday full desktop replacements.

But what a difference two years makes! Citrix has made hundreds of improvements and tweaks to ICA (now called HDX) specifically geared towards full desktop users. VMware has released a software-based implementation of PCoIP which is built into their View product. Microsoft has released RDP 7, which supports multiple displays and Aero Glass (though not at the same time), and RemoteFX is around the corner. Smaller companies have joined in too, with Wyse releasing VDA and updating TCX, and Quest releasing an "Xtream" (their word) version of EOP.

Can we say that remote protocols are indistinguishable from local? That might be a stretch, but I think it is fair to say that in 2010, the remoting protocols are no longer show-stoppers for users. Given the right connection, most people would be perfectly happy working via a remote protocol day-in, day-out. (I've personally done this off-and-on over the past few years, often working exclusively via a remote protocol for months at a time.) I think we can call this limitation (again as I defined it in 2008) "solved."

3. Local / offline VDI

This was the ability to run a virtual machine locally on a client device, thereby removing the need for a remote display protocol altogether and "solving" the offline problem.

A solution to this didn't really exist in 2008 (save for maybe a manually-managed VMware ACE or Workstation implementation). In 2010 we have a completely different situation though.

Virtual Computer is out there now with their client hypervisor. MokaFive has a Type 2 client-based solution that's been available for a few years, and at BriForum they announced a bare metal-like solution that hides a thin Linux kernel under their VM. Virtual Bridges also has a similar Type 1-like client environment. And Wanova has an offline / local solution that runs on bare metal and doesn't require a hypervisor.

Even though we're still waiting for bare metal solutions from Citrix and VMware, there are plenty of folks who run their managed desktops via one of these solutions today, and I think we can also call this solved.

4. Broader compatibility for app virtualization

In 2008 I wrote that in order for the whole "shared image" concept to work (as outlined in Point #1 above), we needed 100% application virtualization compatibility so that we could use ANY application in our managed desktops.

When I wrote my mid-point update article last year, I shared that my views on app virtualization compatibility had changed since the original article in 2008. In 2008 I was focused on compatibility levels. (i.e. if an app virtualization solution supported 96% of the world's apps in 2008, I wanted to see it support 100% of the world's apps by 2010.) But now I realize that's not realistic. Instead we have to figure out how we can offer a blend of application delivery solutions that will support 100% of our apps in shared disk image environments. We'll achieve 100% through a combination of app virtualization, native app installation into master disk images, seamless app delivery from terminal servers and VDI-hosted apps, and good old fashioned on-demand MSI installs.
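To illustrate the blend, here's a hypothetical sketch of that per-app decision logic. The app attributes and the rules are invented for illustration; the point is just that no single delivery mechanism has to carry 100% of the apps:

    # Hypothetical per-app decision logic for the "blend" approach. The app
    # attributes and the ordering of the rules are invented for illustration.
    def pick_delivery(app):
        if app.get("virtualizes_cleanly"):
            return "app virtualization package"
        if app.get("needs_kernel_driver") or app.get("in_base_build"):
            return "install natively into the master disk image"
        if app.get("runs_on_terminal_server"):
            return "seamless app from a terminal server / hosted app"
        return "on-demand MSI install"

    apps = [
        {"name": "Office suite", "virtualizes_cleanly": True},
        {"name": "AV agent", "needs_kernel_driver": True},
        {"name": "ERP client", "runs_on_terminal_server": True},
        {"name": "Odd departmental tool"},
    ]
    for app in apps:
        print("%-22s -> %s" % (app["name"], pick_delivery(app)))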

I've also realized that in order to cover every use case with desktop virtualization, we need a way to handle user-installed apps. Back in 2008 and 2009 I was really focused on how the various app virtualization or user environment management products would handle user-installed apps, but since then I've realized we can also handle this today with techniques such as "side-by-side" VMs. (One personal and one shared.)

Can we call this solved now? Even though today's app virtualization products don't offer 100% compatibility, we have figured out other ways to get that compatibility. So yes, this is solved.

So we're 4 out of 4. But are we successful?

If you look at the specific questions I asked in 2008, and if you agree with my logic in today's article, then yes, we can classify all four of these issues as "solved." But the bigger statement I was defending was that VDI would be ready for "wholesale desktop replacement" by June 23, 2010. Do you think we're there today? No way!

So how is it possible that we can say "yes" to 4 out of 4 but still fail on the central question? I guess now we see the importance of asking the right question!

In this case we have a situation where the sum is less than all the parts. Each of the components that make up a "wholesale desktop replacement" solution exists, but there's no way (today) to combine them all into a single perfect solution. That's not to say that all is lost; it just means that (for the time being) we're going to have to continue to buy point products that solve specific pains. But let's save the date of June 23, 2011 to check back and see where we are. (By the way, if you want to look a few years out, I recently published my 2015 desktop vision. I also did a session with Chetan Venkatesh at BriForum last week about his 2016 vision, which I wrote about yesterday.)

What have I learned since 2008?

I mentioned that I didn't really ask the right questions back in 2008. (Well, at the time they were fine, but looking back now it's clear that I've learned a lot in the past two years.) Specific issues that I didn't think about back then include:

  • I now view "VDI" as just datacenter-hosted virtual desktops. Once you start bringing in client-based VMs, I call the larger technology "desktop virtualization."
  • Client-based VMs don't have to wait until Type 1. We can do great things now with Type 2 (and with Type 2 environments that feel like Type 1).
  • Remoting protocols are great, but there's a fundamental challenge around high-bandwidth peripherals (USB video cameras, etc.) that will never be solved.
  • The mechanics of multiple users sharing a single disk image are easy. The difficulty lies in the logistical balancing of admin changes and user changes.
  • For client-based VMs, synchronization between the client and the master image is critical.
  • It's cool that the twenty links or whatever in this article mostly point back to stuff we've actually written about over the past two years. It's fun to be able to have such an active role in the development of this whole concept!

The bottom line I guess is that I was wrong on June 23, 2008. But here's to counting down until June 23, 2011!

Join the conversation

12 comments

Well personally I would not say these 4 are completely solved. They are addressed for sure by several vendors/solutions. There is a huge difference between 'addressed' and 'solved'. For example offline access like XenClient is not ready for prime time and they still have a LONG way to go. Typical example of something that is 'addressed' but not completely 'solved'.


And as you mentioned, a VDI 'solution' as of today is really a bunch of different products/solutions put together in order to achieve some sort of 'usable' state (what most people do not realize may also put them in an 'unsupported' situation).


There are still issues with pretty much every single point you mention Brian.


VDI will get there I am certain but it will take way more than all these vendors together to make it happen. Until Microsoft decides to step up and change its Desktop offerings to this new reality, I do think things will keep moving at a much slower pace.


Let's see what we get by 2015. :-)



Great article Brian.  It is always fun to look back at what we think will occur, and really gratifying when we get it right!  I'll toss in one thing though: shared single disk images for multiple users were already being done at the point you brought it up.


Here is a demo of the technology used to do it, posted a few months after it shook up VMworld 2007.  


www.youtube.com/watch


It is now known as Single File FlexClone and was known by its engineering moniker 'sis-clone' at the time.


Great post - fun to read!



Claudio - I completely agree. Cobbling together a bunch of components doesn't necessarily make for a good user or admin experience.


I also wanted to clarify Brian's comment on where Wanova fits in this discussion. After talking with many attendees at BriForum last week, it seems there's a perception that Wanova is either an offline VDI solution or else client-side, hypervisor-dependent desktop virtualization. We are neither. Instead, we're addressing desktop management from within the OS. Interestingly, we solve (or negate the need for) every item on Brian's list, as well as some things he doesn't mention that may curtail VDI adoption, such as an organization's willingness to fundamentally change its infrastructure or ante up the $$$ associated with such a change. (Blog post at wanova.com/blog/ if you're interested.)



I feel like a broken record here, but what the heck.


10% of my enterprise is on VDI running persistent desktops.  We can afford to pay for lots of disk for performance and storage due to security requirements.


The rest are running laptops and desktops.  We use a combination of SCCM and another leading vendor to manage our laptops and desktops, in conjunction with a tool to better manage profiles and environmental settings. Oh, and some app virt.  Everything is locked down, so we can't target improving efficiencies with new tools to manage desktops.  Our helpdesk tickets for non-hardware-related failures are less than 1%.  It would cost us more to rip out our existing SCCM deployment, roll out a new solution, and then train staff to manage new tools than we could possibly save anyway.


In brief, there is only one problem we have: we want to deploy more VDI, but the other 60% of the users who are candidates don't pose a security risk, aren't high-end developers, and aren't senior execs who don't care how much we spend on a solution for them.


In my perfect world VDI will be ready for prime time when:


1. AV, patching, and Helpdesk tools work the same way as they do for laptops and desktops.


2. It costs less to run persistent images.  It's hard to compare a $40 HDD to a 300GB FC disk carved up between 4 users.



Agree with Watson.  The elephant in the room hindering mainstream adoption comes down to cost.  If one has a specialized-needs case, perhaps it can be justified. But for mainstream adoption, it just costs too much vs. PCs.


I would argue that even if the software from all the mentioned vendors such as Citrix, VMware, etc. were free (and it is far from it), it would still cost too much for mainstream adoption of a Microsoft-based VDI solution due to their annual licensing and SA requirements.  The complexity and deployment costs in time and services just add to this.


I believe that the bigger developments over the past two years, beyond those mentioned in the article, that will enable widespread adoption have been in the areas of Linux desktops, shared computing/zero clients, iPhone/iPad adoption, and other such technologies that are driving down the cost and changing the concept of the desktop.



storagepro - Can you explain how single flex clone is going to reduce storage costs?



As far as predictions go, there are a couple of things about this that are pretty encouraging:


(A) I think many would agree that the major requirements you cited all remain critical stepping stones to broad scale adoption of desktop virtualization. (Not often the case with forward-looking predictions like this.)


(B) Given that a major curveball was thrown a few months after your initial predictions--namely the economy taking a nose dive and hitting the VDI trailblazers in the financial services vertical hardest of all--you (and the industry) didn't miss by much!


I have no doubt at all that between further advances by larger vendors like Citrix, as well as smaller vendors like us innovating and striking the right partnerships to cover more than our respective "sweet spots," by this time next year there will be several unified solutions in the market that very capably span all of these critical areas.


In the meantime, I don't think that the technology options being a bit fractured is necessarily a terrible thing.  I think part of what has held back adoption is that vendors have overcomplicated matters by forcing too dramatic of a change (and too big of a price tag) too fast.  I believe that organizations would be much better served by picking a solution that can address a real need today while at the same time having confidence that their vendor of choice has both a vision for something bigger and a track record of delivering against their promises.


I think this kind of maps to what Watson and Edgar are saying.  Rather than looking at desktop virtualization as a new world order that will solve a universal set of problems, let's look at it as a toolbox that can address different areas of need and then serve it up in small enough pieces where folks can try it at small scale and prove that it is a material improvement over current tools and practices.


Doug Lane


Virtual Computer



1. Single disk image for many users


Yeah, but that's Terminal Server.


2. Remote display protocols that are indistinguishable from local


Agree to a point. A lot has been done lately, but there are still sore points. I'm thinking about true peripheral support.


3. Local / offline VDI


Not there. But I do agree with the distinction between "VDI" and the broader "Desktop Virtualization."


4. Broader compatibility for app virtualization


Things have evolved, but nothing major.


5. - ?


I largely agree with the bunch of comments here.


Happy midsummer all



@watson Single File FlexClone is a NetApp technology that deploys a number of desktop images from a single file (a VMDK, not a Provisioning Server vDisk). The only data that gets written is the difference in user data.  Take your example of a 300GB FC disk: instead of carving it up for 4 users, you can squeeze 40 users out of it.
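A rough back-of-the-envelope version of that arithmetic (the image and delta sizes below are assumptions for illustration, not NetApp figures):

    # Assumed sizes for illustration only: full per-user copies vs. one shared
    # base image plus small per-user deltas on the same 300GB disk.
    disk_gb = 300
    image_gb = 75       # assumed size of a full desktop image (300GB / 4 users)
    delta_gb = 5.5      # assumed per-user difference data (profile, temp, swap)

    full_copies = disk_gb // image_gb                  # 4 users with full copies
    cloned = int((disk_gb - image_gb) // delta_gb)     # ~40 users sharing one base
    print("Full copies: %d users, cloned from one base: %d users" % (full_copies, cloned))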


Realistically though, in order for it to scale and be effective you have to lock down the environment, use folder/profile redirection, etc., so at that point you might as well just consider TS.


Personally, it's a nice parlor trick to show that you can deploy 2,000 desktops in 5 minutes, but it's not really easy to manage.



@Tony - thanks that's what I thought.  I tried similar technology and it worked okay too.  The problem I had is it did nothing for IOPS reduction. So my disk requirements were unchanged.



I agree with Brian that the technical issues are largely resolved for VDI to become mainstream as a desktop replacement solution. I also agree with Watson that the thing holding VDI back is purely cost. Right now it is just too expensive, and the business cases are too squishy, full of soft savings and IT cost-takeout requirements. When (and I believe this to be a "when") persistent-desktop VDI is capex-neutral with the acquisition of a PC, aka $400-500, or supplied out of some sort of desktop cloud at similar price points, VDI will be adopted at large scale.



All comments here are spot on in respect of the available technology and also the up-front investment required. From experience (purely in the UK), companies seem far more hesitant to deploy a desktop virtualisation strategy, purely because the project touches every end user's personal space and the access device they need for their day-to-day role. Even with the most solid of deployments and the right technology selection, something as simple as changes in login screens or desktop personalisation can cause headaches through increased service desk calls, especially for those planning to do it by the thousand (few and far between... yet).


Server & datacentre consolidation is transparent to the end user, who has no visibility into IT doing smart things consolidating and virtualising the flashing lights at the other end of the network; desktop virtualisation, however, is the complete opposite.


Technically every tick in the box is now there, including a somewhat fluffy ROI. However, I think building a defined process to adopt the technology is needed to move it out of the "noise" stage and into mainstream adoption, which is getting closer by the day...


Would be great to get your 2012 prediction Brian...!


