NOTE: This article was written over a year ago. Since posting the original, I've also posted a halfway-point follow-up in June 2009 to see how my predictions are coming along so far.
Today, VDI is a niche solution. What's keeping VDI from taking over all enterprise desktops? There are four basic technical capabilities that are required for it to become mainstream:
- Single disk image for many users
- Remote display protocols that are indistinguishable from local
- Local / offline VDI
- Broader compatibility for app virtualization
All four of these are coming very soon. In fact, they'll all be in-place and mature in the next 24 months. Let's take an in-depth look at each of these technical capabilities that are still required:
Requirement #1. Single disk image for many users
VDI is fundamentally about regaining control of enterprise desktops. In traditional desktop environments, each desktop has its own hard drive and therefore its own unique disk image. With VDI, you pull all of these enterprise desktops into your datacenter. Do you really want to manage one disk image for each user? Of course not. From a financial standpoint (in terms of SAN storage and Patch Tuesday efforts), it doesn't even start to make sense.
But what if you only had to manage one single "gold" master image that all users would share?
All users sharing a single master image? Sound impossible? Sure, it sounds impossible the first time you hear it. But remember that this is exactly how Terminal Server works today. Every single user loads the same generic desktop when they connect to a server. Then we use app streaming and roaming profiles and login scripts to customize that generic desktop for the user. The same will apply in the VDI space.
Why this will be solved by 2010
Citrix Provisioning Server / Ardence already solves this problem today (as it has for years). So we don't need to wait for 2010 for this technology to be real.
Beyond Citrix, VMware demoed some technology called "Scalable Virtual Images" (or "SVI") at VMworld Cannes this past January. SVI is VMware's version of this concept, where a single disk image is shared by multiple VMs at the same time. (This differs from VMware's current shipping approach, where a full "clone" must be made of the master, so each VM is one-to-one with its respective clone.)
In addition to these technologies handling the actual mounting / mapping of disk images, they also handle the mechanics and logistics needed for multiple running machines to share the same disk image. At a minimum, they must take care of things like the fact that each machine needs its own computer name and SID. Ardence handles this by intercepting disk calls and pulling requests for registry keys containing SID and computer name details from a database instead of the actual shared image. VMware's current solution is to sysprep the master, but this leads to a somewhat involved process for the creation of each clone (as sysprep was designed for deploying permanent physical PCs instead of VM disk images).
What we'll see moving forward is something akin to "fast sysprep," where the VDI companies identify which disk blocks in the disk image file contain the key information (again, SID and computer name, for example). Then when a new VM needs to boot from the master image, the "fast sysprep" (or whatever you want to call it) will simply and almost instantly "pre-create" a disk delta file, just a few kilobytes in size, from a central database that holds the initial customizations that machine needs to boot from the master.
To make this 100% transparent to the guest VM, this would need to still be done at the disk block-level. This would mean that these delta files would be invalidated every time the master image was updated, but since they can be created instantly and on-demand, this is not a problem.
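To make the "fast sysprep" idea concrete, here's a minimal sketch of the block-level approach described above. All the class and function names are my own illustrations, not any vendor's actual API: a read-only master image is shared by every VM, and each VM gets a tiny per-VM delta overlay that overrides only the blocks holding its identity (computer name, SID), created instantly from a database of per-machine customizations.

```python
# Hypothetical sketch of per-VM copy-on-write deltas over a shared master
# image. Names are illustrative; real products work at the hypervisor or
# network-boot layer, not in Python.

class MasterImage:
    """Read-only gold image shared by every VM."""
    def __init__(self, blocks):
        self.blocks = blocks          # list of bytes, one entry per block

    def read(self, index):
        return self.blocks[index]

class DeltaOverlay:
    """Per-VM copy-on-write layer: only customized blocks live here."""
    def __init__(self, master):
        self.master = master
        self.changed = {}             # block index -> bytes

    def read(self, index):
        # The delta wins; fall through to the shared master otherwise.
        return self.changed.get(index, self.master.read(index))

    def write(self, index, data):
        self.changed[index] = data    # the master is never modified

def fast_sysprep(master, identity_blocks):
    """Pre-create a few-KB delta holding one VM's unique identity.

    identity_blocks: {block index: customized bytes} -- e.g. the blocks
    that would contain the computer name and SID, pulled from a central
    database rather than from a full sysprep run.
    """
    delta = DeltaOverlay(master)
    for index, data in identity_blocks.items():
        delta.write(index, data)
    return delta

master = MasterImage([b"BOOT", b"NAME=GOLD", b"APPS"])
vm1 = fast_sysprep(master, {1: b"NAME=VM1"})
vm2 = fast_sysprep(master, {1: b"NAME=VM2"})

print(vm1.read(1))   # b'NAME=VM1'  (from vm1's delta)
print(vm2.read(0))   # b'BOOT'      (shared master block)
```

Note that updating the master invalidates every delta, but since each delta is just a handful of entries built on demand, recreating them is essentially free, which is exactly why the block-level approach works.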
Requirement #2. Remote display protocols that are indistinguishable from local
I've also written in-depth on this in the past. The short version is that right now, we have two ways of delivering applications: (1) Terminal Server / Citrix XenApp, and (2) local / the "old school" way. Why aren't we delivering 100% of our apps via Terminal Server or SBC? One of the reasons is that today's mainstream display remoting protocols just aren't there yet. Some apps are so graphically intense that they just won't work via ICA or RDP.
By 2010, all of the VDI products will have remote display protocols that are indistinguishable from local computing.
Why this will be solved by 2010
Qumranet's Spice protocol is 100% real and available today (albeit only as part of their Solid ICE VDI product). Teradici's PC-over-IP chip-based hardware is real and 100% available today. Both of these protocols support all types of apps with performance characteristics that are indistinguishable from local computing (given enough bandwidth).
There are two more promising protocols on the horizon. One would think (hope?) that Microsoft's acquisition of Calista would produce a baseline RDP product with some phenomenal capabilities that are real within the next 24 months. We also have VESA's Net2Display. While that's been delayed several times, hopefully that's also real in some form in the next two years.
The bottom line with regard to protocols is that with what's real today and what's coming soon, this should be a general capability that's available to whoever needs it in June 2010.
Requirement #3. Local / offline VDI
Today's VDI solutions are server-based computing (SBC) solutions. Sure, they're connecting to Windows XP instead of Terminal Server, but fundamentally they're still SBC.
But what if we can run a hypervisor or VMM locally on a client device? What if we can run our Windows XP VM locally? This does two great things for us:
- We don't have to worry about the protocol problem as outlined in Requirement #2.
- We can potentially run the VM offline, removing the single biggest downside of SBC.
Remember, SBC has many advantages: central management, instant access from any client, great performance for three-tier apps, and "eyes-only" security. Running a Windows XP VM locally is not SBC and is not appropriate for all scenarios, but where SBC solutions don't work, being able to extend an existing SBC-based VDI solution into the local / offline world will be huge.
Why this will be solved by 2010
VMware has had their ACE product for years that was a basic version of this. At VMworld Cannes earlier this year, VMware demonstrated what they're calling "OVDI," or "offline VDI." Think of OVDI as what happens when VDI and ACE have a baby. You can right click and "take offline" a remote VDI instance. You can run it locally, offline, reboot it, etc. When you're back in the office, you can right click and "take online," syncing your disk image deltas up to the server.
This OVDI concept is not pie-in-the-sky "someday" technology. This is actual prototype stuff that we saw running live at VMworld.
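The interesting mechanical bit of "take online" is that you don't re-upload the whole disk image; you push back only the disk blocks the offline session actually dirtied. Here's a rough sketch of that idea, with all names being my own illustration rather than VMware's implementation:

```python
# Illustrative sketch (not VMware's actual OVDI code) of the "take online"
# step: only the block indexes changed while offline are pushed back to
# the server-side copy of the VM's disk image.

def sync_deltas(server_image, local_changes):
    """Apply the offline session's dirty blocks to the server image.

    server_image:  dict of block index -> bytes (server-side copy)
    local_changes: dict of block index -> bytes (blocks dirtied offline)
    Returns the number of blocks transferred.
    """
    for index, data in local_changes.items():
        server_image[index] = data
    return len(local_changes)

server = {0: b"OS", 1: b"APPS", 2: b"USERDATA"}
offline_writes = {2: b"USERDATA-v2"}     # only block 2 changed offline

sent = sync_deltas(server, offline_writes)
print(sent)   # 1 -- one block moved, not the whole image
```

The payoff is that a week of offline work might sync back in seconds, since user activity typically touches a tiny fraction of a multi-gigabyte image.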
Another positive factor we have in this space is the fact that Microsoft bought Kidaro this past March. Kidaro was a management wrapper for Microsoft Virtual PC that gave it a lot of ACE-like abilities. At this point there's nothing to synchronize Kidaro with on the backend, but I'm sure Redmond is up to something.
Qumranet announced "Splice" last week at BriForum, which is technology meant to help move VDI instances closer to users in WAN environments.
Even though all of these are just basic sets of functionality or just prototypes, there's enough going on in this space now to know that this will be solved in a big way by June 2010.
Requirement #4. Broader compatibility for app virtualization
One of the real benefits of local PCs today is that power users can install whatever apps they want. This is not possible in a Terminal Server environment, since a single installed app would be available to everyone and could really screw up the system. Sure, admins can use remote application delivery (seamless apps delivered via ICA from XenApp) or application streaming (SoftGrid / Symantec SVS / Citrix Streaming / VMware ThinApp / etc.), but there are two problems with these technologies today that are preventing widescale VDI replacement of physical PCs:
- Not all applications are compatible with the app virtualization / streaming products of today.
- Today, only admins can package apps for virtualization. There is no "user self-packaging" option.
Solving both of these app virtualization problems will enable Requirement #1 listed above because we'll be able to truly operate the desktop as a "layered stack," with the OS layer provided via VDI, and then the apps and user environment layered on top of that.
Why these will be solved by 2010
In terms of broader app compatibility with app virtualization technologies, that's just a slow march towards an ultimate goal, with more and more apps becoming compatible day-by-day, month-by-month.
With regards to "user self-packaging" of apps, one of the downsides of today's app virtualization products is that only admins can package, prepare, and/or approve the apps ahead of time. If we want to truly give power users the power to control their own environment, we need to let them install their own apps. Unfortunately the whole "sharing a single master disk image" thing is fundamentally not compatible with users being able to install their own apps.
But what if the user environment management products were smarter? What if the app virtualization products were smarter?
The way that many applications are packaged for virtualization or streaming or isolation environments today is that an admin goes to a "clean" machine, clicks "record" in the packaging software, installs the application, and clicks "done" in the packaging software. Then the packager bundles up all the registry changes and files that were added into the package that's to be distributed.
But what if the user environment product could put the entire user's session in "record" or "package" mode? Then the user could install some random application whose settings could be abstracted out into a "personal applications" layer of the stack.
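The "record mode" workflow described above boils down to a state diff: snapshot the files and registry before the install, snapshot again after, and bundle the difference into a personal app layer. Here's a minimal sketch of that idea; the snapshot format and every name here are hypothetical, not any shipping product's API:

```python
# Hypothetical sketch of "user self-packaging": diff the session state
# before and after an install, and keep only the changes as a layer.
# Session state is modeled as a flat dict of file paths / registry keys.

def snapshot_diff(before, after):
    """Return entries added or modified between two state snapshots."""
    return {key: value for key, value in after.items()
            if before.get(key) != value}

def record_install(state, install):
    """Run an install while 'recording' the changes it makes."""
    before = dict(state)                  # snapshot files + registry
    install(state)                        # user installs their app
    return snapshot_diff(before, state)   # the personal app layer

# Simulated session: path or registry key -> contents
session = {r"C:\Windows\system32\kernel32.dll": "os",
           r"HKLM\Software\Existing": "1"}

def install_my_app(state):
    state[r"C:\Program Files\MyApp\app.exe"] = "binary"
    state[r"HKCU\Software\MyApp\Version"] = "1.0"

package = record_install(session, install_my_app)
print(sorted(package))   # only MyApp's files and keys land in the layer
```

This is conceptually the same before/after capture that admin-side packagers do today on a clean machine; the leap is running it transparently inside a live user session and stacking the result above the shared OS layer.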
I don't know of any products that do this today, but many of these things are getting close. I don't think this would be too far of a jump.
(For what it's worth, this might not be a hard requirement if you believe in the employee-owned PC concept, as in those cases you could limit the corporate VM to centrally-managed apps.)
Why June 2010? Why not June 2009 or December 2008?
We have four key technical capabilities that must be in-place before companies can start the wholesale replacement of "old school" desktops and laptops with VDI-based desktops and laptops. Many of these technical capabilities are available in one form or another today, and many others will be available a lot sooner than June 2010. So why am I predicting that this will take 24 months to shake out? Several reasons:
First, VDI is bleeding edge today. Sure, there are some interesting and specific use cases that make sense. But no one is really going to VDI for general desktop computing across-the-board. So let's say that Citrix or VMware or Microsoft enables one of these key technical capabilities in the next few months. Do you want to be the first to implement this and see what happens? Really, there's no hurry. Are your current desktops burning a hole in your pocket? Is there any real reason to replace everything you have now?
This space is going to change so much over the next two years. If VMware releases some cool feature, you know Citrix will one-up them, then VMware will respond, etc. Plus, all of these technical capabilities that come out over the next 6-9 months are going to be v1 things. We've made it this far with the dual-technology approach (old-school local apps + TS-based Citrix apps). Why not wait a few more months or a year?
There is no pressure to be bleeding edge. Don't be tempted to jump on the VDI train right now (unless of course you have a specific tactical reason to use VDI today). Save your money. Take a year off.
Second, most people are waiting for Windows 7. Even in June 2008 (18 months after Vista), people just aren't deploying Vista in a big way. At this point, people are happy enough with Windows XP. I can't tell you how many conversations I've had with companies over the past year where they basically say, "We're skipping Vista and waiting for Windows 7. And when we do Windows 7, we're not going to do it in the same way that we've done things all these years."
In June 2010, Windows 7 will be out. The four major VDI problems will be solved. Everything will be in place to do VDI in a big way.
June 2010 - June 2013
Beyond June 2010? In the second half of 2010 and into 2011, VDI seats surpass SBC seats. By 2012 / 2013, VDI seats surpass the number of "old" seats in enterprise environments: 300 million VDI clients by 2013.
A quick note about Terminal Server versus VDI
Once this VDI thing takes off in a few years, we probably won't see many published desktops in TS environments, because the advantages that you get with TS over VDI will largely be gone.
However, using TS as a basis for XenApp seamless apps delivered via SBC is a huge use case. VDI is about desktops. XenApp is about apps. This mainstream VDI thing will largely replace managed desktops. But many of those desktops will receive their apps (or links to apps) via traditional SBC. (And by the way, the better quality remoting protocols will just help TS-based app delivery be that much stronger.)