I've been writing about the future of the Windows desktop quite a bit recently. My belief is that as the world moves to devices and tablets and web apps and cloud whatever, the lowly Windows desktop application will be reduced to middleware, only used to remotely deliver old-style Windows applications that, for whatever reason, can't be migrated to native or web apps. When that happens we won't manage Windows desktop OSes on our endpoints. Instead we'll just deliver Windows apps as a service, most likely with a remoting protocol like HDX, PCoIP, or RemoteFX.
For years I just assumed the best way to do that was via RDSH / Terminal Server. (In fact way back in 2010 I wrote the article "The inverse bell curve of Terminal Server / SBC: This stuff is going to be huge again!") I figured we'd want to deliver these apps with RDSH because it would be cheaper than with VDI. But now I wonder—will that always be the case? Or in the future, will it be easier and cheaper for us to use single-app VDI instances to deliver our legacy Windows desktop applications?
How this will work
Just to make sure we're all on the same page, let's look at what I'm talking about.
My idea is that if you have a traditional Windows desktop application you'd like to deploy to your users without worrying about what kind of client device they have or about managing their local desktop, your only option is to run that application in the datacenter and to deliver it remotely via a remoting protocol. The actual platform the application runs on will have to be Windows (ignore WINE since it's not relevant here). So I'm making the assumption that the two choices are (1) a single session on a multi-session RDSH / Terminal Server host, or (2) a dedicated desktop (Windows 7/8) VM for the user that just runs that one application.
If you're going with the latter option, I'm assuming that you build a Windows 7/8 master image for each application you'd like to deploy, and then when a user connects the session broker spawns a new instance of the VM for that user. That VM only exists for that one user's session and is destroyed when he or she logs off. (While I'm a huge advocate of persistent desktops for scenarios when you're delivering the entire desktop to a user from the datacenter, if you're just delivering a single app then non-persistent desktops should work fine.)
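To make the lifecycle concrete, here's a minimal sketch of what that broker logic looks like conceptually: one master image per application, a fresh VM cloned per connection, and the VM destroyed at logoff. All the class names, image names, and hypervisor methods here are illustrative stand-ins—real brokers from Citrix, VMware, etc. expose very different APIs—so treat this as a thought experiment, not an implementation.

```python
import uuid

class Hypervisor:
    """Stand-in for whatever virtualization API the broker would call."""
    def clone_from_master(self, master_image: str) -> str:
        # Spin up a fresh VM from the read-only master image.
        vm_id = f"{master_image}-{uuid.uuid4().hex[:8]}"
        print(f"cloned {vm_id} from {master_image}")
        return vm_id

    def destroy(self, vm_id: str) -> None:
        print(f"destroyed {vm_id}")

class SingleAppBroker:
    """One master image per application; one throwaway VM per user session."""
    def __init__(self, hypervisor: Hypervisor):
        self.hv = hypervisor
        self.sessions: dict[str, str] = {}  # user -> vm_id

    def connect(self, user: str, app: str) -> str:
        # Each app gets its own Windows 7/8 master image; a new instance
        # is spawned for this one user's session.
        vm_id = self.hv.clone_from_master(f"win8-{app}-master")
        self.sessions[user] = vm_id
        return vm_id

    def logoff(self, user: str) -> None:
        # Non-persistent: the VM is destroyed at logoff, never recycled.
        self.hv.destroy(self.sessions.pop(user))

broker = SingleAppBroker(Hypervisor())
broker.connect("alice", "legacy-erp")
broker.logoff("alice")
```

The key design point the sketch captures is that nothing about the VM outlives the session, which is why non-persistence is acceptable here even if you'd want persistent desktops for full-desktop delivery.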
Using a single instance desktop OS instead of RDSH / Terminal Server is also nice because it means that our remotely-delivered Windows applications are running on the same desktop OS that we're familiar with. And running a single application per VM image means that we don't have to worry about conflicts. So all the pieces are there.
I'm not trying to ignite the whole "VDI versus Terminal Services" debate. (Though it's been a few years since I talked about that. Maybe it's time to update it?) But in terms of delivering pure Windows applications from the datacenter, I feel like a few years ago it was a no-brainer to do it via RDSH / Terminal Services. But now? I'm not so sure.
What do you think?