2013 is the year that two of the biggest blocks to VDI adoption are finally solved. Here's how.

I've made a lot of predictions in the past that have been wrong, most notably that by 2010 all the technical components would be in place for wholesale VDI adoption. (Though in my defense in those days we used the term "VDI" to describe all types of desktop virtualization, including client-based VMs.)

2010 came and went, and VDI isn't the runaway success everyone thought it would be. After all, back in 2009, Gartner predicted that by 2013, VDI would be a $65 billion industry with 49 million users. We spent a lot of time in our book "The VDI Delusion" digging into the reasons why VDI hasn't taken the world by storm, so I won't rehash everything here.

Instead I want to point out that it looks like we now have the technology to address two of the major showstopper issues for VDI. Does that mean that 2013 is the "Year of VDI?" Hardly! But it does mean that two more of the barriers that previously prevented people from going to VDI are being addressed, so the total VDI addressable market is bigger now than it was six months ago.

So what are these two technological advancements?

1. We now have the storage technology to support persistent (1-to-1) disk images

I've written again and again that for VDI to really succeed, we have to be able to deliver persistent desktops. While that's been theoretically possible since 2006, it's always been very expensive. Just about all VDI marketing over the past seven years has been about "image sharing" or "non-persistent" images. (And most of that is the tail wagging the dog. Vendors push image sharing not because it's better but because it's cheaper. And while it is cheaper, it's not easy to implement, which is one of the core reasons VDI didn't take off.)

But fortunately all that is changing! We're seeing tons of new storage vendors enter the space who can fully support persistent disk images. (I liked Tegile and DataCore early on for this. My current favorite is Atlantis Computing, who can do it all in software. But really there are probably 20 companies who can do this now: GreenBytes, Virsto (recently bought by VMware), Nexenta, and a bunch more I'm forgetting.)

Long story short: In 2013, if you want to do VDI with persistent desktops, you can. (And for a price that's not too crazy.)

2. GPU-based graphics improvements mean we can now support most apps

The other big change to VDI now is that we have support for GPUs in our VDI host server. This works in two ways:

First, our individual desktop VMs can now access shared physical GPUs in the VDI servers, finally allowing users to run applications that require GPUs. That's a huge win.

Second, and potentially more important, we now have hypervisors and remoting protocols that leverage the GPUs in VDI servers to do hardware-based encoding of the graphics streams for the remoting protocols. This means that we can have higher quality graphics over lower bandwidth connections, and that doing so doesn't put a huge load on the server since it's handled by the GPUs and not the CPUs. (This is all based on NVIDIA's "VGX" technology which was announced last year and is just now making its way into VDI products like XenDesktop and Horizon View.)

In 2013, VDI solutions will continue to fall in price in terms of dollars per user. I still don't believe that VDI is cheaper than traditional desktops and laptops or that it's easy to manage when you compare apples-to-apples, but regardless of that the price is coming down.

To be clear, I also still don't believe that we should replace all the corporate desktops and laptops with VDI. VDI makes sense in some cases, but so does RDSH, client VMs, and well-managed traditional clients.

So like I said I'm not going to actually call this the year of VDI. There are still a lot of scenarios where VDI doesn't make sense. But it's definitely true that in 2013, more people have the option of VDI than ever before. And that's not a bad thing.

Join the conversation

From my perspective, with most of my customers, Microsoft licensing has been and continues to be the number one barrier to wide-scale "Full VDI." Technical issues are secondary.


Break MS down. I have heard of $2/year VDA, but that could just be a myth, or maybe it's because they got so much money from us already... An org with 360,000 users on Outlook would make any MS rep happy.


Very interested in what you/others see as the 'New Top 2'?

I'll agree with Eric and give 1 of my 2 to MS licensing. The 2nd I see is the battle to achieve favorable OpEx benefits on thick endpoints. True zero clients for fixed-location use cases are great, but what about devices with a Win/Linux OS? That is coming up in almost every customer conversation I have...


Yeah, it's funny that there will always be a "Top 2" no matter what's solved. Really there were (are?) probably 5-10 big reasons people couldn't move to VDI, and it just so happened that the Top 2 were solved. But that doesn't mean we're out of the woods, and of course the old reasons #3-10 have now basically moved up to be the new #1-8.


There will always be a "top 2", but at some point they are small enough challenges that we reach an inflection point where a new tech (in this case VDI) is compelling and gets wide adoption. Are we there yet? Maybe.

The vast majority of desktop virt is still done with Terminal Server/Session Virtualization, because it's still significantly cheaper and licensing is much more favorable, and it does what most businesses need - it delivers Windows apps onto any device, anywhere. The 3 biggest arguments in favor of VDI used to be: device support, app compat and user personalization. Device support has improved a lot in WS 2012, and app compat and user personalization can be solved with 3rd party tools.

So the only reasons I see persistent VDI growing instead of TS/SV are that: (a) persistent VDI is easier up-front (but comes with most of the same desktop mgt challenges after deployment), (b) VMware has given View away in Enterprise Agreements.

I still like pooled/non-persistent VDI in concept, not least because you can do away with expensive storage altogether and it is infinitely scalable. However, it requires the same change in user mindset to "this is not your desktop, you don't own it" that you had with the move to TS/SV.


I am curious why no one seems to see Remote PC as a major help in getting past some of the VDI stigma. While supporting remote users at a previous employer we found, to the utter dismay of management, that 70% of remote users preferred their hardware to their corporate-issued laptops. With Remote PC you don't have a Windows license issue, you don't have to build out a six- to seven-figure back-end investment, the server team isn't being forced to become the desktop team, and the same support apparatus stays in place. This is far more seamless than existing VDI endeavors as you are just loading a VDA on existing hardware.

Not saying Remote PC is the perfect choice but I am shocked that it does not get more ink.  

Use existing storage, existing licenses, existing helpdesk, just add a DDC and a Netscaler and you are ready to go.


I continue to see "quality issues" on the server side of well-known SBC/VDI solutions. Things that prospective customers don't think of when blindsided by trade show glamour, but these things show up later and kill eval projects.

Being a software engineer I sometimes say "Arrggh how come they didn't test THAT".

Cutting a long story short, I believe SBC/VDI vendors need to invest more in software quality and make sure advertised features actually WORK.

[Before somebody asks: No, I am not a competitor of anybody here; my company, stratodesk.com, is agnostic to the different SBC/VDI technologies, but we depend on working server products :-)]


How about DaaS? Desktone? What happens if Amazon (think AWS) gets into this space?

Each is a solution for specific use cases, but almost every time the costs continue to be very high.


I disagree with some of the trivializing of persistent vs. non-persistent. Look, non-persistent would be great if it were easy to do and meant I truly eliminated my need for PCLM. The problem as I see it is we rarely get away with using one way of doing things. So if we're going to have a heterogeneous environment, then this means I now need to have two ways of doing things: one PCLM model using SCCM for OSD and ESD, and a separate ILCM (image lifecycle management... should trademark that puppy), which is a complete and utter duplication of the same PCLM management challenges overlaid on top of yet another technology. It's complete nonsense. The solution? 1) Fix your persistent storage issues with one of the many technologies Brian has already discussed. 2) Get really good at PCLM with a tool like SCCM (yes, I'm talking medium/large enterprise here). 3) Revel in the glory of having one simple way to manage things. 4) Get a drink and be happy.