My 2008 prediction about VDI being ready in 2010: a mid-point status update

It's been exactly one year since I wrote “Prediction: VDI will be ready for wholesale desktop replacement in 2010. Here's how we'll solve the problems to get there.” Since that whole article was me making predictions about how things would be in two years, I thought it would be cool to do a half-way point “check up” to see whether I have any chance of being right when the June 2010 deadline hits.

In case you don’t remember last year’s article, my basic point was that while VDI didn’t make sense for mainstream users in June 2008, the technical limitations would be solved within two years. As such, VDI would be an option for “mainstream” desktops by June 2010. (As a point of clarification, I didn’t suggest that VDI use was going to “explode” or even become mainstream by June 2010. I just suggested that all the technical components allowing this to happen would be in place by 2010.)

Last year’s prediction was based on four requirements

When I wrote that VDI would be ready in two years, I centered my prediction around four key technical capabilities that were needed:

  1. Single disk image for many users
  2. Remote display protocols that are indistinguishable from local
  3. Local / offline VDI
  4. Broader compatibility for app virtualization

(For more details about what I meant for each of these, read the original article.)

Where are these requirements today?

Now in order to figure out how far along we are towards “wholesale desktop replacement,” let’s go through these four predictions one-by-one and look at where we are today.

1. Single disk image for many users

This was about the ability for many users to share a single disk image. As of last year, I think the only option for doing this was Citrix Provisioning Server, although it didn't work offline.

What’s changed since June 2008?

There have been a lot of changes in the past year around the concept of the single disk image. First of all, VMware released their Scalable Virtual Images (SVI) as “linked clones” and hooked them into View via View Composer. Citrix Provisioning Server now works offline. We also have storage companies (like NetApp) who can do flex cloning on the volume, folder, or file level. And let’s not forget Atlantis Computing whose new ILIO product looks really strong here.

Are we still on track for this to be “solved” by June 2010?


2. Remote display protocols that are indistinguishable from local

What’s changed since June 2008?

Remote display protocols are always a compromise between user experience and bandwidth. When I called for improvements in remote protocols last year, I wasn't suggesting that we do the impossible with perfect remoting over slow connections. I just meant that the protocol itself should not be a limiting factor.

And here too we've made great progress since June 2008. Back then the only really high-quality protocols were PC-over-IP and Qumranet's Spice. RDP and ICA were there, but they each had a lot of problems with "every day" apps that users would encounter when connecting to full desktops. (No ability to support Flash video in a way that was actually useful, for example.)

What a huge difference a year makes! Citrix has added all sorts of capabilities to ICA when running on XenDesktop (including Flash remoting). Microsoft's RDP 7 looks like it's going to be hugely popular and promises some pretty amazing performance, complete with out-of-the-box Aero glass remoting and multimedia redirection support. Teradici released a version of PC-over-IP that will work on the WAN, and they're working with VMware to create a software-only implementation.

Even smaller companies are getting involved. Wyse updated TCX and Quest updated EOP. All-in-all it was a great year for remote display protocols!

Are we still on track for this to be “solved” by June 2010?


3. Local / offline VDI

This was the ability to run a virtual machine locally on a client device, thereby removing the need for a remote display protocol altogether and "solving" the offline problem.

What’s changed since June 2008?

Again, quite a bit has changed here! The biggest change is that the term “client hypervisor” wasn’t really even known last year, yet as of June 2009 two are shipping today (Virtual Computer and Neocleus), with VMware and Citrix promising to ship theirs by the end of 2009. Each of these client hypervisor environments will handle client-to-server synchronization, encryption, etc.

Are we still on track for this to be “solved” by June 2010?


4. Broader compatibility for app virtualization

Last year I wrote that in order for the whole "shared image" concept to work, we needed 100% application virtualization compatibility so that we could use ANY application in our managed desktops.

What’s changed since June 2008?

The biggest change here is that I have shifted my focus on how I think about application virtualization. In 2008, I was focused on compatibility levels. (i.e. if an app virtualization solution supported 96% of all the world's apps in 2008, I wanted to see it support 98% of all the world's apps in 2009.) However, I'm starting to realize this might not be realistic.

That said, I also realized that the app virtualization tools need to evolve how they're used. Instead of simply focusing on admins distributing apps to users, we also need to be able to support the concept of "user-installed apps." And I think that might actually be a bit more important than broader app compatibility. Fortunately there’s a lot of progress being made. Just yesterday Mokafive announced a v2 product that supports both user-installed apps and disk image "layering." On top of that, a start-up company called Viewfinity is doing interesting things here. And Atlantis Computing. And AppSense. And InstallFree. So really there's a lot going on here, albeit in a slightly different way than I thought.

Are we still on track for this to be “solved” by June 2010?

Yes. (Via a combination of “user installed apps” and “layering”)

Bottom line: We’re 100% on track

Taking all four of these technical capabilities into consideration, I think we're in pretty good shape for a June 2010 landing. The only overall comment I have is that I think all four of these technical capabilities working together will be the true "win" that makes this all possible. So instead of checking off our list, 1-2-3-4, imagine what could happen if we combined each of these things in different ways. For example:

  • Increasing application compatibility is really a combination of #1 (disk image management with layering) and #4 (user-installed apps)
  • Remote display protocols (#2) will work well via the LAN, but WAN environments may need the application running locally, say, in a client hypervisor (#3)
  • Client hypervisors (#3) can be used to hide / deliver apps that cannot easily be virtualized (#4)

Evolution of VDI → “Desktop Virtualization”

There’s one more glaring thing I got wrong in last year’s article, namely, I referred to everything as “VDI.” At the time I used the term generically to describe the overall concept, but now I'm much more careful to use the term "desktop virtualization" to cover the whole space. "VDI," on the other hand, is only the specific type of desktop virtualization where you use a remote display protocol to connect to a Windows desktop OS running in a VM. Other flavors of desktop virtualization include client hypervisors, terminal services, OS streaming, etc. So if I were writing this today, I’d probably not quite refer to things like I did last year.

So what do you think? Is everything going to be ready next June? What capabilities are we missing here?

Join the conversation



The two things which are really missing are a sensible licensing model from both Microsoft and the application vendors, and official vendor support for all the different delivery methods.


You can also add profile/personality/data management, in order to leverage the multi-layer system and retrieve our "own" system across multiple solutions...


user installed apps?

and Scense.....

Beginning of this year we launched Scense 6.0 with Easy Delivery.

Check the first movie at

This will show you how users install their own apps within a controlled environment.


I highly doubt this will be ready for mass adoption by 2010. More like 2011 at best. Here's why I think so:

The economy. Nobody today is implementing VDI for upfront cost savings, and the capital budget is not there to support it. Even when it is, the cost of SAN storage (the greatest hype being told to us by the vendors) will make it very hard to sell.

All the emerging approaches like layering, etc., will take a long time to mature and for people to understand and adopt. Take a simple example. If you really believe in layering, ask yourself this question: if you could provision faster, would you reclone your current PCs, say, every quarter in a large enterprise? I am willing to bet that most people will find their organizations would balk at such a suggestion, and would do really poorly at implementation. Why? The truth is most people have no clue what their inventory is, and really just deal with things on a day-to-day basis. I call it lazy admin syndrome. Therefore I have no confidence in people's ability to adopt this technology for a while.

So if we go back to the SAN model, no small shop is going to do anything that complex; it's too expensive, and stupid to use SAN, period. Sure, these folks therefore may start to consider cloud-based offerings. Makes sense, but I don't buy that by 2010 we will get all the trust issues with the cloud and private data resolved. So where does that leave us? Local disk implementations of hosted desktops, with static system management tools, to enable centralized management and session mobility. That's the value for now that makes sense for some people.

In time, as management matures, more people will come. Add offline, client-side stuff, etc., and there are niche use cases. Moka 5 has the most complete offering that can be used NOW; check out 2.0, it's cool. Forget Type 1 for a while, nobody is going to do this for some time. It's too hardware dependent, and Neocleus is nothing more than a secure VM shell with no ability to layer inside the guest. Virtual Computer is doing some cool stuff inside the VM but is married to Xen. I see no reason for Moka 5, which also supports the Mac today, not to do exactly the same thing better and to address handsets, etc.

The protocols are also becoming too confusing; there is no standard. Choose one versus the other and you get locked in to vendor X, due to the other great VDI myth, the connection broker (why do we need it, really?). ICA is still clearly the leader here to address the broadest set of use cases. I'd like to say RDP is the industry standard, but the nature of MS will never let that happen. Do I really want a user calling the helpdesk, and the helpdesk trying to figure out which protocol the user is using to troubleshoot some weird display issue that varies with location due to protocol switching? And PC-over-IP, what's the future of ESX, period, given that Hyper-V will kill them with the Windows OS?

So all that said, guess what, I am still a huge fan of VDI in all its forms and do this for a living. I just don't buy the biases of the vendors and argue that things will take time to shake out past 2010. Win 7 will help people evaluate, but mass adoption with the current mindset of the vendor offerings, I think marginal adoption. This is more of a 2011-2013 thing to me.


Brian - the context of a desktop as software installed and bound to a machine has shifted to the desktop as a loose set of IT services bound to a user. The early market gets this, and there is widespread adoption around virtual desktop technologies that incorporate state separation and compositing. The management paradigm for this new class of desktop is orthogonal to the way physical desktops have been designed, imaged, provisioned and managed. The new management paradigm is about orchestrating a set of dynamic services for a user (storage/IO/compute + OS/apps/personality) on whatever device the user is currently using, while the old paradigm was about installing bits to a hard drive somewhere. Seen from this perspective, the broker and protocols are really just one model for consuming these services, while Type 1 client hypervisors are another. Centralized storage, streamlined imaging, high performance and better management are the key customer motivations in the short term. Till 2011 we are going to see customers design and deploy desktop virtualization along these two models (and they will be mutually exclusive for some time to come). Possibly in the 2011 to 2015 timeframe we're going to see a move towards the idea of a blended desktop, where a dynamic mix of storage/compute/IO will allow OS/apps/user personality to transport between data center and device seamlessly and run wherever latency makes sense.

In all this there are two things I would not bet against: (1) Moore's law improving the performance of server and client devices, and (2) bandwidth getting broader and cheaper. Latency will be the biggest challenge. Light will always travel at 299,792,458 m/s in this universe, and that means the problems worth solving are really around getting IO closer to compute (storage/IO virtualization), then closer to consumption (protocols), and making the IO more intelligent (self/content aware and dynamically organizing) so that imaging gets solved as a result. Stuff that just rides Moore's law or the bandwidth explosion will be free eventually.

Winners - intelligent adaptive storage (physical or virtual) that uses smart IO semantics and caching, imaging, orchestration, remote protocols.

Losers - hypervisors (server/client), brokers, block-based copy-on-write, storage protocols (iSCSI/NFS/PVS).

Need to adapt/innovate/redefine - app virt, OS-instance-based filter driver redirection technologies employed for file/registry copy-on-write.

Chetan Venkatesh


Was going to type up a long response to this when I realized that two people already nailed it.  I'm with App Detective on pretty much everything he said and I too have a similar thought about timeframes for this to come to fruition.  Secondly, I'm right there with Chetan in my beliefs that all of these technologies are great, but it all comes down to latency and in order for all of these technologies to work well there needs to be some technology in there that determines where best something should execute to overcome / cope with the latency issues.  Well said, both of you.



Appdetective, kudos for the thorough analysis of the state of the environment as it is today. The single biggest issue we are seeing in the enterprise today is getting the funds necessary to realize the potential of desktop virtualization as a whole. It is like constantly fighting against the tide and it gets exhausting.

I think I have a different perspective on the topic, coming from the enterprise and having to deal with a lot of decisions that existed in this space before I got here (broker choice, hardware, etc.).

In the perfect world, we would simply take the best of breed from the 4 generic areas that Brian references in his article, ensure interoperability within and without each area, document the solution thoroughly, stand up the seed environment and, as the saying goes, "they will come".

The problem is that pain in the neck "reality" thing gets in the way. With the economy as it stands today, the money simply isn't there to really do the right thing to ensure it is the best solution that could be developed. This is especially true in the financial industry (read TARP).

I do see virtualization continuing to progress as Brian points to, but at a much slower pace. By then, say around 2012-2014, the question will be, for the enterprise at least, whether or not to develop and implement a solution in-house or go with the 3rd party provider in an ec2 cloud configuration or whatever is there at the time.

It will all come down to money; if the argument can be made that going in a particular direction makes financial sense for a business, they will eventually go in that direction. I have seen this time and time again. I'm just worried that with the economy slowing everything down, will the (in-house) virtual desktop bubble be passed by for the next iteration of computing (the cloud)?

Paul Sisk



I'd argue that the desktop is a choice to implement with private or public cloud. In fact I think this is where we are most likely to see traction first, as other areas have a lot more changes to deal with. Since you are in the financial industry, why not use the public cloud to burst out for disaster recovery, as opposed to maintaining many PCs all over the place? Also, I am willing to bet your disaster recovery PCs are not up to date, and hosted VDI is one way to solve for that, granted with its current limitations.


Two reasons why the finance industry will lag behind others into the cloud: SEC rules and lack of funds (as it relates to any large technology project, not just VDI or cloud; without those funds to do the research, no decision will be made, period (ask Chetan)). In talking with the security engineering team about the idea, they came up with no fewer than 10 deal-breakers that have to be rectified before entering into a 3rd-party cloud.

For the larger enterprises with huge, multi-national internal networks, I can see a discussion on the viability of an internally managed cloud scenario.

Insofar as our disaster recovery goes, the business units that have a requirement, namely traders and the like, do have robust DR plans thanks, in large part, to Softricity Softgrid (I know, MS App-V, I'm working on the acceptance). But I think we are digressing from the article.


I am in the financial industry and I never let rigid security folks push me around. I simply treat them as stakeholders and let them have an opinion, and then ask them to defend it. I see the cloud as very valid for many progressive companies. Security folks have nothing better to do than say no to justify their existence; we have the choice to tell them where to go when their paranoia is stupid, as it is in 99.9% of cases.


@Paul - LOL - yes - spot on - getting mainstream traction on Wall St. and getting them to pay for anything in this economy (let alone pie-in-the-sky VDI or DaaS) is challenging. It used to be that Wall St. (and I mean the investment banks) adopted early to get a competitive advantage, and in the process helped promising technologies get into enterprise shape. With the transformation of Wall St. away from investment banking toward more traditional banking with a lot more scrutiny, this has changed.

The challenge for VDI and DaaS players will be to look for early adopters outside Wall St., and that is not easy.


"Citrix Provisioning Server now works offline."  Is this true?  I know about offline database support in 5.1, are there additional offline features?


Chetan - let me know if you find that company outside the financials, me and AppDetective may need to know ;).



I am seeing a lot of traction starting up in Government, and even more specifically Healthcare. With the H1N1 crisis evolving and EHRS (Electronic Health Record Solutions) slowly creeping in, Desktop Virtualization may be seen as a tool to drastically improve collaboration while minimizing costs.

Sure there is a global meltdown in economies around the world, but nobody should be surprised that the government will continue to throw money at IT. I see this all too often in different departments of the Federal Government.

Once June 2010 hits, and if there is a viable option for more than just niche solutions (which is necessary in order to enable mass adoption), I see a lot of potential for increased adoption. Mass adoption is not what Brian is talking about, and it wouldn't be practical to assume at this time.


Actually Brian IS talking about mass adoption (note the "wholesale desktop adoption").  Which is why my dates are out in the 2012-2015 timeframe, not 2010.


@Icelus, if all you want is an access solution for DR, then I think the TCO of a Terminal Server solution presenting desktops is hard to beat. In Citrix speak, there's no reason for XenApp not to be used as the cheapest way to provide a desktop use case for the masses. I see XenDesktop adding incremental value for the use cases where people need greater personalization for day-to-day stuff. This is why I think XA=XD=XC as a single product longer term to address many desktop use cases, with the added value of application presentation/delivery, which Citrix clearly needs to do something more with vs. emerging solutions like Quest, MS, etc.



I hate being this nit picky but my interpretation of Brian's message is this: "READY for wholesale desktop adoption".

I may be over analyzing it, but I think it just means that by June 2010 Virtual Desktop technologies will be at a point that it will enable mass adoption, not that there will be mass adoption.


Excellent point, and I am not disputing that XA can be used for essentially the same services as XD. I am just saying that, given that the fat desktop can be eliminated, centralized management of an entire desktop image (with centralized management of apps via XA included) offers more choice that management might want to grasp in order to obtain the complete service set they need to meet the business's demand.

Personally, I would determine if their requirement can be met by XA first before even thinking about XD, but I can only hope that the combination of the two will eventually be my preference once virtual desktops take hold.

Just my 2 cents.


100% agreed with Shaun and others. 2010 is a utopia for VDI for sure. More likely the 2012-2015 timeframe. Too many things to list here as to why it is not ready, not to mention the bottom-line cost per user, especially with the great economy we have nowadays.

Just posted about this on my blog last week, explaining why I think Brian is WAY off in this prediction.



@boaff. I have the same question: "Citrix Provisioning Server now works offline." Did I fall asleep and totally miss something huge?




I think Brian might be a little ahead of himself and release schedule here.

True that offline DB is now a feature, but for _now_, no offline streamed OS.