I was speaking with some folks yesterday about VDI and the future of our industry, and they asked me, "Why do you make such a big deal about offline VDI? Do you really think it's that important?" And actually, people wrote the same type of questions in the comments to yesterday's post, essentially saying, "look, just because a product doesn't do offline doesn't mean it's no good!"
Two good points. Here are my thoughts about offline, and what's important and what's not:
First of all, my dream goal for desktop management in a company of any size is that the desktops will be as easy to manage as the servers. I want IT folks to be able to take a modular approach: They build a Windows disk image, they package their applications, and they use some kind of management tool to assign which users get which applications. (And of course I want this solution to work for all use cases.)
In terms of building solutions today, terminal server-based application and desktop delivery is pretty close. We can support a lot of applications for a lot of users. Sure, it doesn't work offline, but it does support what Mike DiPetrillo calls the "work from work" scenario, which is the most important. But if that's the case, then why isn't 80% of the world using terminal server-based SBC today? One of the big reasons is that not all apps work when delivered via terminal server. Sure, VDI can solve some app compatibility issues, but the bigger issue is the fact that SBC means the apps are delivered remotely via a remote display protocol like ICA or RDP. And as of November 2008, the remote display protocols from the mainstream vendors are not good enough to work perfectly with all apps. There are two ways to fix that problem:
- Make a super display protocol that is compatible with 100% of applications.
- Move the application execution from the remote host to the local client device.
Lots of smart people are working on the display protocol problem. We're not there yet, but hopefully within the next few years we'll get to the point where the display protocols are good enough to work with all apps.
As far as the local execution of applications, there are a few ways we can do that today:
- We can "stream" the app with App-V / XenApp / ThinApp / whatever down to the client device.
- We can build the app into a Windows XP / Vista disk image and send that whole disk image down to the client.
Each of these solutions seems great at first, and these are the ways the vendors are telling you it should be done. But there are complications that are usually glossed over, such as:
- Streaming an app to a local OS requires a local OS. (duh) How are you managing this local OS? How are you maintaining and patching it? Anti-virus? Fixing it when the user breaks it? Managing Windows is hard work, and while streaming apps to a local device fixes the application management issue, it doesn't fix the Windows management issue.
- Building the app into the Windows disk image is nice (or streaming it into that image), because now you're managing Windows centrally. However, how do you get that copy of Windows from your datacenter down to your device? If you use Citrix Provisioning Server then you truly only have to manage one OS instance for your users, BUT, you'll have to make a new / modified disk image for every different type of client device you have.
- If you want to manage only one Windows disk image, you can deploy it into a VM running on the client (ACE, MED-V, etc.), but today, all of these client VMMs require an existing OS on which they run, meaning you're now managing the Windows OS in your VM disk image AND whatever OS is natively on your client device.
Bummer. Bummer. Bummer.
The goal has to be to create a single Windows disk image that can run anywhere. If the use case allows or requires it to run in the datacenter where users connect via a remote display protocol, fine. And if the user needs local execution (perhaps for app compat reasons) then let's stream that disk image down to a VM on their client, but we want that VM to be the lowest-level OS (apart from the hypervisor).
Thus all roads lead to the need for a bare-metal client hypervisor. VMware knows this and has previewed this kind of technology. Citrix responded by saying they're doing something here too (although they didn't tell us what). And there are a lot of rumors about Microsoft working on this as well.
But the important take-away is that true desktop management bliss will require a client-side hypervisor BECAUSE we want to be able to run a single disk image on many different kinds of devices, and we don't want to have to manage a second client OS in addition to our main disk image.
Once we have client hypervisors, the "offline" use case will probably be something we get automatically. (Heck, you could even argue that offline is the easy part, since VMware offers an experimental version of that technology today; it's the bare-metal client hypervisor that's still missing.)
The bottom line is that while I've been talking about offline quite a bit, I'm really most interested in client hypervisors. Offline is just a bonus.
The future of app streaming
One final note... Some people reading this might think I don't like app streaming. This is not true. I love app streaming and I think it's going to be very important moving forward.
What I don't like is app streaming to unmanaged Windows clients. (Again, maybe there are specific use cases where that makes sense for some customers today, and that's great.) But in general, the real value of app streaming in the future is that it will enable multiple users to share the same base Windows build (whether it's multi-user terminal server or single-user shared VDI). The app streaming will come in to dynamically customize and populate that Windows instance with the apps and data that each user needs.