Introducing “Madden’s Paradox”: the gotcha of the VDI versus TS debate

I’ve tried (unsuccessfully) over the years to introduce new industry terms. (xASP, xDI, farmlet, VDI+... The list goes on.) Even though I’ve failed repeatedly, I’d like to try again by introducing “Madden’s Paradox,” which describes the scenario where you choose VDI over TS, but then, when you try to make your VDI environment manageable, you end up removing the very functionality that made you pick it over TS in the first place!

Let’s explore this. First, our assumptions:

  • “VDI” means connecting via a remote display protocol to single-user desktop OS instances running as VMs on a remote host.
  • “TS” means terminal server (or what Microsoft is now calling “session-based remote desktops”), where a client uses a remote display protocol to run a desktop as a session on a multi-user Windows server.
  • “Server-based computing” is the concept of using a remote display protocol, and describes the method of access for both VDI and TS.

Okay, so VDI and TS are both server-based computing. This means that as you’re designing your environment, you first decide which applications and/or users need server-based computing, and then you decide which flavor of server-based computing you want: TS or VDI. Since TS is cheaper, it makes sense that you’d always choose TS except in the cases where you have a specific business requirement that can only be solved via VDI. I’ve written ad nauseam about why you’d choose VDI over TS, but one of the reasons that VDI lovers keep on talking about is the fact that users run a real Windows desktop OS that they can personalize in ways that can’t be done with Terminal Server. Fair enough.
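If it helps to see that decision rule spelled out, here’s a deliberately trivial Python sketch. (The requirement names are mine, invented for illustration; they’re not a real checklist.)

    # Toy decision rule: default to TS because it's cheaper, and pick VDI
    # only when some requirement genuinely can't be met on Terminal Server.
    # The requirement strings below are illustrative placeholders.

    VDI_ONLY_REQUIREMENTS = {
        "users install their own apps",
        "full per-user desktop OS personalization",
    }

    def pick_sbc_flavor(requirements):
        """Return 'VDI' only if a VDI-only requirement is present; else 'TS'."""
        if VDI_ONLY_REQUIREMENTS & set(requirements):
            return "VDI"
        return "TS"

    print(pick_sbc_flavor(["standard office apps"]))          # -> TS
    print(pick_sbc_flavor(["users install their own apps"]))  # -> VDI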

However, the very next slide in most pro-VDI presentations talks about how you need some kind of storage management system that lets many VDI instances share a single master image. (Citrix Provisioning Server, VMware View Composer with Linked Clones, NetApp FlexClone, etc.) The problem with using these products in VDI environments is that you essentially turn your VDI back into a Terminal Server (in terms of all users sharing the same image), killing the reason you were using VDI in the first place! Every single dynamic personalization technique that exists for VDI also works on Terminal Server. (User Profile Manager, User Data Disks, Group Policy Folder Redirection, third-party tools from AppSense, RTO, RES, triCerat, Scense, etc.)
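To make the paradox concrete, here’s a toy Python model of the shared-master-image approach. (This is my own sketch of the general copy-on-write idea, not how View Composer or Provisioning Server is actually implemented.)

    # Every clone boots from the same read-only master; only its writes land
    # in a per-VM delta disk. Routine maintenance (recomposing against a
    # patched master) throws the delta away.

    class MasterImage:
        """The golden image shared by every linked clone."""
        def __init__(self, apps):
            self.apps = frozenset(apps)  # baked in, identical for all users

    class LinkedClone:
        """A per-user VM: the shared master plus a writable delta disk."""
        def __init__(self, master):
            self.master = master
            self.delta = set()  # user-installed apps live only here

        def install_app(self, app):
            self.delta.add(app)

        def visible_apps(self):
            return self.master.apps | self.delta

        def recompose(self):
            # Patching the master means discarding every clone's delta,
            # so the user's "personal" installs evaporate.
            self.delta.clear()

    master = MasterImage({"Office", "Acrobat"})
    vm = LinkedClone(master)
    vm.install_app("MyNicheApp")
    print(vm.visible_apps())  # master apps plus MyNicheApp
    vm.recompose()
    print(vm.visible_apps())  # back to the shared image, same as a TS session

The durable part of every desktop is the single master image; everything a user adds lives in a disposable delta, which is exactly the uniformity you get from a TS session.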

So that’s the paradox: You only choose VDI because you want server-based computing but TS can’t cut it, and then to manage your VDI environment, you implement a shared master image system that makes your VDI behave just like Terminal Server.


Of course, this paradox doesn’t apply in every single case. There are other reasons people choose VDI, although I still argue that many of its advantages are lost now that Terminal Server has per-session IP addresses, a multi-session installer service, and CPU management. Oh, and don’t forget that App-V (or whatever app virtualization package you want to use) works just as well on Terminal Server as it does on VDI, so that’s not a reason to go to VDI either. My point is that this paradox only applies if you’re choosing VDI because it lets users install their own apps and own their own environment.

With that, let’s continue our ultimate democratic peer review process. Please share your thoughts about Madden’s Paradox. Am I right on, or am I missing the point?


