Is offline VDI really that important? Yes! (But not for the reasons you think.)

I was speaking with some folks yesterday about VDI and the future of our industry, and they asked me, "Why do you make such a big deal about offline VDI? Do you really think it's that important?" And actually, people wrote the same type of question in the comments to yesterday's post, essentially saying, "look, just because a product doesn't do offline doesn't mean it's no good!"

Two good points. Here are my thoughts about offline, and what's important and what's not:

First of all, my dream goal for desktop management in a company of any size is that the desktops will be as easy to manage as the servers. I want IT folks to be able to take a modular approach: They build a Windows disk image, they package their applications, and they use some kind of management tool to assign which users get which applications. (And of course I want this solution to work for all use cases.)
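To make that modular idea concrete, here's a minimal sketch (in Python, with entirely hypothetical names, not any real product's API) of the three pieces: a base Windows image, a catalog of packaged apps, and an assignment table that says which users get which applications.

```python
# Minimal sketch of modular desktop management: one base image,
# packaged apps, and a per-user assignment table.
# All names are hypothetical illustrations.

BASE_IMAGE = "win-xp-sp3-base.vhd"

# Packaged applications (App-V / ThinApp-style packages).
APP_PACKAGES = {
    "office": "office-2007.pkg",
    "cad": "autocad-2008.pkg",
    "erp": "sap-gui.pkg",
}

# Which users get which applications.
ASSIGNMENTS = {
    "alice": ["office", "erp"],
    "bob": ["office", "cad"],
}

def build_desktop(user):
    """Resolve a user's desktop: the shared base image plus assigned packages."""
    packages = [APP_PACKAGES[app] for app in ASSIGNMENTS.get(user, [])]
    return {"image": BASE_IMAGE, "packages": packages}

print(build_desktop("alice"))
```

The point of the model is that the image, the packages, and the assignments are managed independently; the desktop a user sees is just the resolved combination.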

In terms of building solutions today, terminal server-based application and desktop delivery is pretty close. We can support a lot of applications for a lot of users. Sure, it doesn't work offline, but it does support what Mike DiPetrillo calls the "work from work" scenario, which is the most important. But if that's the case, then why isn't 80% of the world using terminal server-based SBC today? One of the big reasons is that not all apps work when delivered via terminal server. Sure, VDI can solve some app compatibility issues, but the bigger issue is the fact that SBC means the apps are delivered remotely via a remote display protocol like ICA or RDP. And as of November 2008, the remote display protocols from the mainstream vendors are not good enough to work perfectly with all apps. There are two ways to fix that problem:

  • Make a super display protocol that is compatible with 100% of applications.
  • Move the application execution from the remote host to the local client device.

Lots of smart people are working on the display protocol problem. We're not there yet, but hopefully within the next few years we'll get to the point where the display protocols are good enough to work with all apps.

As far as the local execution of applications, there are a few ways we can do that today:

  • We can "stream" the app with App-V / XenApp / ThinApp / whatever down to the client device.
  • We can build the app into a Windows XP / Vista disk image and send that whole disk image down to the client.

Each of these solutions seems great at first, and these are the ways the vendors are telling you it should be done. But there are complications that are usually glossed over, such as:

  • Streaming an app to a local OS requires a local OS. (duh) How are you managing this local OS? How are you maintaining and patching it? Anti-virus? Fixing it when the user breaks it? Managing Windows is hard work, and while streaming apps to a local device fixes the application management issue, it doesn't fix the Windows management issue.
  • Building the app into the Windows disk image is nice (or streaming it into that image), because now you're managing Windows centrally. However, how do you get that copy of Windows from your datacenter down to your device? If you use Citrix Provisioning Server then you truly only have to manage one OS instance for your users, BUT, you'll have to make a new / modified disk image for every different type of client device you have.
  • If you want to manage only one Windows disk image, you can deploy it into a VM running on the client (ACE, MED-V, etc.), but today, all of these client VMMs require an existing OS on which they run, meaning you're now managing the Windows OS in your VM disk image AND whatever OS is natively on your client device.

Bummer. Bummer. Bummer.

The goal has to be to create a single Windows disk image that can run anywhere. If the use case allows or requires it to run in the datacenter where users connect via a remote display protocol, fine. And if the user needs local execution (perhaps for app compat reasons) then let's stream that disk image down to a VM on their client, but we want that VM to be the lowest-level OS (apart from the hypervisor).
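The "single image, run anywhere" idea boils down to a placement decision: the same disk image either executes in the datacenter (with the user connecting over a display protocol) or is streamed to a VM on the client. A minimal sketch, with hypothetical use-case names:

```python
# Sketch of the "single disk image, run anywhere" decision.
# The image itself never changes; only where it executes does.
# All names here are hypothetical.

def place_desktop(image, use_case):
    """Decide where a single golden disk image should execute."""
    if use_case in ("work-from-work", "thin-client"):
        # Execute centrally; the user connects via a remote display protocol.
        return {"image": image, "run_at": "datacenter", "access": "ICA/RDP"}
    # App-compat or offline needs: stream the same image down to a VM
    # on the client, ideally running on a bare-metal hypervisor.
    return {"image": image, "run_at": "client-vm", "access": "local"}

print(place_desktop("corp-win.vhd", "offline-laptop"))
```

Note that in both branches the `image` value is identical; that is exactly the property that makes central management of one image possible.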

Thus all roads lead to the need for a bare-metal client hypervisor. VMware knows this and has previewed this kind of technology. Citrix responded by saying they were doing something here too (although they didn't tell us what). And there are a lot of rumors about Microsoft working on this too.

But the important take-away is that true desktop management bliss will require a client-side hypervisor BECAUSE we want to be able to run a single disk image on many different kinds of devices, and we don't want to have to manage a second client OS in addition to our main disk image.

Once we have client hypervisors, the "offline" use case will probably be something we get automatically. (Heck, you could even argue that offline is easier, since VMware offers an experimental version of that technology today, but the bare-metal client hypervisor is still missing.)

The bottom line is that while I've been talking about offline quite a bit, what I'm really most interested in is client hypervisors. Offline is just a bonus.

The future of app streaming

One final note... Some people reading this might think I don't like app streaming. This is not true. I love app streaming and I think it's going to be very important moving forward.

What I don't like is app streaming to unmanaged Windows clients. (Again, maybe there are some specific use cases for customers today, and that's great.) But in general, the real value of app streaming in the future is that it will enable multiple users to share the same base Windows build (whether it's multi-user terminal server or single-user shared VDI). The app streaming will come in to dynamically customize and populate that Windows instance with the apps and data that each user needs.
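That "shared base plus per-user streaming" model can be sketched as simple composition: every user gets the identical golden Windows build, and streaming layers in their apps and data at logon. All names below are hypothetical:

```python
# Sketch: one shared base build, customized per user at logon by
# streaming apps and attaching user data. Hypothetical names throughout.

SHARED_BASE = "golden-win-build"

USER_APPS = {
    "alice": ["office", "erp"],
    "bob": ["office"],
}

def compose_session(user):
    """Assemble a user's session from the shared base at logon."""
    return {
        "base": SHARED_BASE,               # identical for every user
        "apps": USER_APPS.get(user, []),   # streamed in dynamically
        "profile": f"\\\\fileserver\\profiles\\{user}",  # per-user data
    }

print(compose_session("bob"))
```

Because the base is shared and the apps and profile are composed in dynamically, patching means updating one build instead of thousands of divergent desktops.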

Join the conversation

"Why isn't 80% of the world using terminal server-based SBC today?" I don't believe app support to be that big an issue, REALLY. This might sound pompous, but I believe most of the world is just not as enlightened as many of us who are regular readers of this site. The world doesn't use SBC because it doesn't know how to do it successfully.

That said, I too love the idea of client hypervisors. Here is a related blog from Gordon Payne, the GM of Citrix' Delivery Systems Division. Look at the post "Virtual Desktops, Mobile VDI and Client Hypervisors - Oh My!"


Yeah, I certainly agree about the whole world not being enlightened to the coolness that is TS. Another factor could be cost: if people's current systems are working, then why bother changing them out? (Even if TS offers a theoretically lower TCO.)


I've been looking at moka5 recently and they seem to have a lot of what you mention above, especially with their bare-metal option. Have you checked them out?


Re: dot point three about MED-V and ACE.

If you didn't want to use Linux (like moka5 etc.) as the base, you could potentially use XP Fundamentals if you have SA, and then lock its state via Deep Freeze / SteadyState. You would then need to configure the app install and VHDs to be on another partition, so the config can be persistent after reboot.

But all that mucking around makes you wish for a bare-metal hypervisor.



You and I seem to agree (but for different reasons) on the need for vendors to deliver a good bare-metal hypervisor for desktop machines that can work "off network" once they're out the door.

I don't believe vendors are very far off, technically.  But they haven't really pulled it all together yet.  I remain hopeful for the day when I have a desktop hypervisor so I can grumble about the lack of good central management tools!  It would be progress.


Brian -

Reading this, I can't help thinking:

Academic Theory vs. Pragmatic Practice.

There are a handful of very interesting Windows-centric offline virtual desktop solutions that don't require a bare-metal client hypervisor and don't require a second / guest copy of Windows.

Think next-generation application virtualization OR lightweight desktop virtualization.

Waiting or blindly betting on an industry-standard bare-metal client hypervisor to materialize in the next 2-3 years will just mean you're sitting on the sidelines. In the meantime, mature, practical, and cost-effective solutions for desktop virtualization will get deployed.

Don't get me wrong, there will continue to be a couple of bare-metal hypervisors shipped based on Linux, but the hardware support problems will cripple their adoption until both Microsoft and Intel agree on a standard. Microsoft will agree on a standard the day they ship Midori, which is Windows v8 at best.

Does anyone remember 10 years ago, when the security experts and academics of the world said PKI (digital certs) was the future and the best way to do secure authentication? Everyone who listened to the academics waited for it to mature and materialize and just sat on the sidelines, while the pragmatic world, which got a solution working a decade sooner, went with tokens (two-factor auth via RSA SecurID, Secure Computing, etc.)

The parallel: bare-metal client hypervisors are the new PKI. From a technical / academic viewpoint they are great, but in the next 2-5 years another industry standard will emerge that will just run on Windows and work really well, similar to what happened with PKI vs. tokens.

One architecture was academically more elegant and potentially more secure but, at the time, impractical, with a very long road to maturity. The other approach was less elegant architecturally but far more practical, and delivered plenty of benefits with a far shorter road to maturity.

PKI is on its way back (10 years after its hype) because its management is finally mature and the explosion of mobile handheld devices sees its benefit. However, in the enterprise, tokens are far and away the market leader and de facto standard for user-based strong authentication.

Academic Theory vs. Pragmatic Practice.

Good news: this is all playing out now in desktop virtualization. The question is, whose team are you on? Theory or Practice?




Client hypervisors could be a great thing, but I'm concerned that we're where secured execution environments for VPNs were a few years ago. We just can't count on a user environment that will support a bare-metal hypervisor any time soon.

I'm also concerned that the data loss risks will continue to be too high for loss-aware environments. Once the data's on the client, however secure we might want it to be, it's too hard to create a sturdy container.

If we can constrain the client environment, we can get there.  It seems like this is at odds with ubiquitous access, though.

At the end of the day, better remote viewers seem like the solution that we can get to sooner.  Bandwidth is going to be free soon anyway, right? Or was that last century?

- Eric