In VDI environments, your users run their desktops as VMs in the datacenter. This creates a strange juxtaposition: We’re accustomed to running servers in our datacenters as super-reliable, controlled environments. But we’re accustomed to running the users’ desktops on non-redundant hardware that’s breakable. So if VDI is “desktops in the datacenter,” which philosophy wins out? Do we treat them like servers or desktops?
My sense is that in most cases, the “treat them like servers” argument wins out, so suddenly our desktops have five nines and all sorts of change control procedures once we move to VDI.
But does this make sense? Do we really need to have a single “datacenter” OS that includes desktop and server workloads, or do those two workloads have different enough requirements that it doesn’t make sense to treat them both the same?
Again, my sense is that desktop VMs running in datacenters are generally run like server VMs running in datacenters because no one ever really sat back and said, “Hey, do we really need all this crap for our lowly desktops?” (Of course every environment is different, and certainly there are environments where availability and redundancy are driving factors that led to VDI. But in general, do you think anyone cares about live migration of desktop VMs?)
You know who does like this, though? VMware! Right now they have the strongest virtualization platform, and I think a lot of their VDI business is coming from people who are already strong believers in VMware who want to extend their VMware-based infrastructure out to their desktops. So VMware is really pushing the whole “we have the best platform” thing. They want people to just have a single datacenter OS that spans desktop and server workloads.
But how realistic is this? Even for customers whose only virtualization vendor is VMware, are they really running their desktops and servers in the same infrastructure, or do they have what amounts to side-by-side environments that both just happen to be based on VMware software? (Do you know anyone who runs desktop and server VMs on the same host? Is there anyone who truly has the “generic” host, spinning up extra capacity to cope with spikes in demand, that’s flexible enough to work anywhere?)
The problem this causes for VMware, of course, is that once you start down that path of “separate but equal” desktop and server virtualization environments, you’re just a short hop away from ditching ESX altogether for desktops and going with Xen or Hyper-V. After all, if we think desktop users don’t need all the fancy bells and whistles of our servers, why pay for a hypervisor at all?
At this point someone usually adds a comment along the lines of “Your datacenter platform for VDI is still important, because a server failure affects dozens of users at once.” This is true. However, I’m not suggesting that we treat our users like trash and run them on throw-away white-box hardware. We’re still talking about the “basics,” such as real servers with RAID and multiple power supplies and stuff. But even though ESX might win some performance benchmarks, Xen and Hyper-V are still running plenty of enterprise-class production environments. Even if ESX is the best platform for VDI, Xen and Hyper-V are certainly “good enough.” (And “good enough” is how Microsoft entered just about every market it dominates now.)
The bottom line is that I propose we evaluate our desktop needs and truly ask ourselves what’s important. Is vSphere a great platform? Sure! (And it will be even better when View supports it. ;) But does vSphere’s greatness mean that you have to extend your high-end infrastructure to your desktops? Absolutely not. There’s nothing wrong with re-evaluating the platform for your desktops and building it differently in your datacenter.