Desktop users can be "second-class citizens" in our datacenters

In VDI environments, your users run their desktops as VMs in the datacenter. This creates a strange juxtaposition: We’re accustomed to running servers in our datacenters as super-reliable, controlled environments. But we’re accustomed to running the users’ desktops on non-redundant hardware that’s breakable. So if VDI is “desktops in the datacenter,” which philosophy wins out? Do we treat them like servers or desktops?

My sense is that in most cases, the "treat them like servers" argument wins out, so suddenly our desktops get five-nines availability targets and all sorts of change control procedures once we move to VDI.

But does this make sense? Do we really need a single "datacenter" OS that includes desktop and server workloads, or do those two workloads have different enough requirements that it doesn't make sense to treat them both the same?

Again, my sense is that desktop VMs running in datacenters are generally run like server VMs running in datacenters because no one ever really sat back and said, "Hey, do we really need all this crap for our lowly desktops?" (Of course every environment is different, and certainly there are environments where availability and redundancy are driving factors that led to VDI. But in general, do you think anyone cares about live migration of desktop VMs?)

You know who does like this, though? VMware! Right now they have the strongest virtualization platform, and I think a lot of their VDI business is coming from people who are already strong believers in VMware who want to extend their VMware-based infrastructure out to their desktops. So VMware is really pushing the whole "we have the best platform" thing. They want people to have a single datacenter OS that spans desktop and server workloads.

But how realistic is this? Even for customers whose only virtualization vendor is VMware, are they really running their desktops and servers in the same infrastructure, or do they have what amounts to side-by-side environments that both just happen to be based on VMware software? (Do you know anyone who runs desktop and server VMs on the same host? Is there anyone who truly has a "generic" host pool, spinning up extra capacity to cope with spikes in demand, flexible enough to run any workload?)

The problem this causes for VMware, of course, is that once you start down that path of "separate but equal" desktop and server virtualization environments, you're just a short hop away from ditching ESX altogether for desktops and going with Xen or Hyper-V. After all, if we think desktop users don't need all the fancy bells and whistles of our servers, why pay for a hypervisor at all?

At this point someone usually adds a comment along the lines of, "Your datacenter platform for VDI is still important, because a server failure affects dozens of users at once." This is true. However, I'm not suggesting that we treat our users as trash and run them on throwaway white-box hardware. We're still talking about the "basics," such as real servers with RAID and multiple power supplies. But even though ESX might win some performance benchmarks, Xen and Hyper-V are still running plenty of enterprise-class production environments. Even if ESX is the best platform for VDI, Xen and Hyper-V are certainly "good enough." (And "good enough" is how Microsoft entered just about every market it dominates now.)

The bottom line is that I propose we evaluate our desktop needs and truly ask ourselves what's important. Is vSphere a great platform? Sure! (And it will be even better when View supports it. ;) But does vSphere's greatness mean that you have to extend your high-end infrastructure to desktops? Absolutely not. There's nothing wrong with re-evaluating the platform for your desktops and building it differently in your datacenter.

Join the conversation

15 comments


Good points Brian. Being in the "VDI industry", this question has come up quite a bit. VDI gives you the opportunity to do fancy stuff like HA and such, but it most definitely does not require you to do so.


In fact, we've spoken to customers on several occasions and said - hold on to your hats - why don't you use local storage for your virtual desktops ... GASP! :-).


Deploying TSes with local storage never used to be a problem. They are expendable, just like virtual desktops (depending on what you want them to be).


Anyway, I guess this is my way of saying: good article. :-)



I think that to a certain extent you have to invest in high-end components for VDI deployments in order to achieve the consolidation ratios that make this kind of desktop delivery strategy viable. For example, I would say that blades are almost a necessity if you are deploying 1000-10000 desktops to an enterprise. And if you have blades, what is the extra cost of providing redundancy for the disk and network components, especially when you consider the additional uptime these relatively low-cost components can provide?
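To put rough numbers on that redundancy trade-off, here is a back-of-the-envelope sketch in Python. All failure rates, repair times, and the 50-users-per-host density are illustrative assumptions, not vendor figures:

```python
# Back-of-the-envelope downtime math for the redundancy argument above.
# Every rate and repair time here is an assumed, illustrative number.

def expected_downtime(failures_per_year, repair_hours):
    """Expected annual downtime in hours for one non-redundant component."""
    return failures_per_year * repair_hours

# Assume a host without dual PSUs or RAID loses service on any PSU or disk failure.
psu_hours = expected_downtime(failures_per_year=0.05, repair_hours=8)
disk_hours = expected_downtime(failures_per_year=0.03, repair_hours=12)
total = psu_hours + disk_hours
print(f"Non-redundant host: ~{total:.2f} h/year of outage")   # ~0.76

# With redundant PSUs and RAID, a single component failure causes no outage,
# so the marginal hardware cost buys back nearly all of that downtime --
# multiplied by every user on the host.
users_per_host = 50
print(f"User-hours saved: ~{total * users_per_host:.0f}/year per host")  # ~38
```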


You could get away with using local storage, but you lose some of the associated benefits of being able to leverage storage technologies such as thin provisioning or de-duping, or being able to build a DR plan based on the capabilities of shared storage. In a large deployment, these factors will be key.


As Michel Roth intimates, if you use a common pooled desktop for the majority of your users, then local storage could be a great solution for small/medium-sized deployments. It provides the same availability we are used to with Terminal Services right now: you lose a server and you lose X users, but they just log back in and get access via an alternate server. As long as the user data is safe and applications can be re-layered appropriately when the user reconnects, this will meet the requirements of a majority of customers.


As with all technologies that we are familiar with, we will end up with deployments that leverage many different delivery infrastructures. Some of these will provide ultra-high redundancy for critical users/applications, others will take the no-redundancy/lowest-cost route, and many (as I am seeing today) will be a mixture of the two. Whatever the solution, it should be based on a comprehensive review of requirements for the specific project at hand, and very few of these will be identical.



This is a worthwhile topic for IT orgs to debate BEFORE rolling out VDI. Of course, the verdict will depend on how siloed the IT org is, who carries the responsibility for end-user downtime (server guys or desktop guys), and how direct the feedback loop is between end users and the CIO :-)


In regard to mixing or segregating server and desktop workloads, this comes down to organizational structure and the domain of control for the desktop and datacenter folks. I know of customers that have the server group hand over two clusters of ESX hosts with VirtualCenter, View, and View Composer to the desktop group, and they run with it from there. These decisions come down to two acronyms that have nothing to do with technology: CYA and SLA. From my interactions with customers, the choice of hypervisor between server and desktop workloads isn't about a feature-by-feature comparison; it's simply the choice of a hypervisor that they know and trust. I haven't heard of a company saying "for desktops we'll use a tier-2 hypervisor because they are just desktops"; I'm not sure that's realistic.


My opinion about whether the hypervisor matters is simply that anytime you centralize, you try to choose the best platform, because more is riding on it and you can more directly control it. It's no different with servers: a couple of years ago, if you were a well-run Citrix shop, you knew about the importance of battery-backed write cache for HP blades; if you were a casual Citrix shop, maybe you didn't even notice or care. A company that is serious about cost and manageability and scaling to hundreds or thousands of desktops is going to see the difference and care.



This topic has had me in a quandary and internal debate for quite some time now.


For me, there is no yes or no answer to that question. My answer is, it “depends”.


And…in my experience, "depends" directly relates to what you are using VDI for. I speak mostly from an enterprise mindset with some SMB reality Kool-Aid thrown in.


I agree with Brian that in a lot of cases we treat virtual desktops like virtual servers with respect to availability and in some cases, usability as well (sarcasm).


Looking at VDI from an availability perspective depends on your use case and availability requirements. In the case of virtualization, once we speak FT or HA, the technology itself binds us to the discussion of storage and its associated complexities and costs. But this concept is no different in the TS/CTX world. No admin worth their salt would implement a single TS server for a mission-critical, highly available application/published desktop solution. Instead, he/she would factor in a second, possibly third, server that can "hopefully" handle the load if one server were down, and use some type of hardware load balancer/session directory, or Citrix load balancing.


In my experience with what we did for offshoring, creating VDI islands with DAS has posed a significant risk and impact to us. We did have a memory board completely fail, taking down 50+ virtual desktops. (Yes, we could have swapped drives or done some other IT voodoo, but given our regulated environment, we are not afforded that luxury.)


Mind you, we are using "individual desktops," or "one-to-one," so the ability to simply bring up a new VM quickly is not at our disposal, since we have a very customized and controlled build process. In this case, DAS should not have been used. But I do not question the usage of VDI for this use case: it quickly provisioned our mature desktop environment to developers in a remote location, maximizing efficiency and decreasing internal staffing without deploying physical assets to the edge. (We are also a VMware ESX/VDM shop.)


Now, if I look at another use case, kiosks, I may want to run a very lean XP build on Hyper-V. In that case, there is no need for FT/HA, and the capital expenditure would be quite minimal. However, in a service-oriented model such as "Global Outsourcing," it may cost me more to support than VMware, since I now have to request a custom offering of a skillset that might not be readily available. The enterprise support model is much different from the SMB model, where there is a jack-of-all-trades IT hero running around like a nut installing every product he can just to save a buck.


I think a lot of the "second class" debate around VDI can always be brought back to the debate around whether to do VDI in the first place. It does have its place. However, once you embark on stripping down the architecture and the delivered VM itself, you may want to question why you are even doing VDI, since your requirements are clearly telling you that you might just be better off with a TS/CTX implementation.


I constantly hear from my peers, "We don't do it on the physical desktop, so why do we care about change control, storage, etc.?" Although I can identify with a lot of those points, you have to look at this much the way we looked at TS a few years ago. In the TSE 4.0 days, would you ever think about installing every HP print driver a user requested? No way; you wouldn't even entertain it. Hopefully, you would have had some type of change control and integration process to qualify the functionality of that driver. Or you just rolled the dice, and when the server blue-screened, you used your myriad of disks and tricks to bring it back.


I know that example may not be 100% applicable to modern-day VDI, but the concept is the same. When you start to deploy many-to-one technologies, you must at least define their availability requirements, support costs, and associated risks. I know for me, I do not want to be the guy on the other end of the phone from 50+ down VDI (or TS) users.
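The "blast radius" of that many-to-one consolidation is easy to quantify. A minimal sketch, where the densities, recovery times, and failure rates are all assumed for illustration:

```python
# Minimal "blast radius" sketch: consolidating desktops means one host
# failure takes out many users at once. All inputs are assumptions.

def outage_user_hours(users_per_host, recovery_hours, host_failures_per_year):
    """Expected user-hours lost per host per year to host-level failures."""
    return users_per_host * recovery_hours * host_failures_per_year

# Physical desktop: one user per "host," quick swap from spares.
print(outage_user_hours(1, recovery_hours=4, host_failures_per_year=0.1))   # 0.4

# VDI on DAS: 50 users per host, slower recovery in a regulated shop.
print(outage_user_hours(50, recovery_hours=6, host_failures_per_year=0.1))  # 30.0
```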



@Help4ctx Not sure why you say blades are "almost a necessity." I wouldn't have thought the hardware footprint (blade/rack) has anything to do with the level of density of VDI.


Everyone keeps saying "the hypervisor" is a commodity and the real play is in the management... So why introduce VMM to manage a different hypervisor when you already have Virtual Center (or the other way around)?


At the end of the day, it will come down to $$. How many $$ are you willing to part with to have a single hypervisor management point? I would suggest the answer is "Not as many as VMware is asking me to."


BTW: I don't buy the whole "Manage your ESX with VMM" argument.



Firstly, this was a good addition to the discussion, much of which I agree with.


community.citrix.com/.../Desktop+Virtualization+is+not+Server+Virtualization


I don't understand why we keep insisting VDI has to be done on shared storage. Use freaking local disk; it works and gives you the isolation you are looking for. It's stupid to put a zillion desktops on a single box and increase concentration risk. If you want low cost, use TS, period. VDI is not about lower cost in many, many use cases, so get over it; as the link above alludes to, it's about agility.


@shanetech. With respect to your point about reliability concerns with TS, doesn't the same problem occur with brokers? I've said it before: brokers are not needed in a high-availability environment. One should just be able to connect without the broker. For example, I don't need a broker to make an RDP connection. Why can't I just buy ICA/HDX or the like to simply let my high-tier users connect? Let the brokers evolve over time with advanced management to help me lower costs for other classes of users.


@brian. Yes, TS is getting better, but isolation is not the only reason for VDI. Licensing on a desktop OS is also an issue (note the a-holes at Bloomberg, despite the fact they use TS with ICA/HDX internally!!!!!!). Additionally, delivering all my desktops on a desktop OS makes them easier to manage. That said, I don't see TS going away. For apps, TS is a huge benefit for presenting apps from a remote distance, where the data stays away from my desktop. This is cheaper than spinning up another desktop. And even if I needed another desktop in a remote location, TS is a great option.


So we need tiers of desktops that allow you to deliver different service levels in the real world using different methods. Not just one way.



@shanetech Reading your post again, and stealing a quote from Citrix, 'it's a desktop,' I disagree with your assertion that TS failover should be like desktop failover. Today, if a PC fails, there is no magic failover: they either have a spare, there is a break/fix event, or there is a BCP event. Why are people trying to overcomplicate this thing? 'It's a desktop,' right?


If you want more than just a desktop, then that is not going to be free, and it will take time to evolve and come with a set of new problems. So KISP (keep it simple, people) is what I think this is all about. Too much overengineering is confusing the heck out of the industry.



@appdetective


It is always a pleasure to read your responses.


I think we will have to agree to disagree here with respect to high availability for the broker. In my experience (and in this case I relate it to VMware VDM only), we did have a complete broker failure. We had close to 250 virtual desktop users down. We were not running a replica, nor was the architecture designed with a DNS alias and some type of hardware load balancer in front of the broker(s). So, as I have illustrated before, we used DAS (no ESX host redundancy) and one VDM broker (no broker redundancy). We had a host failure and a broker software failure. Bottom line, we incurred multiple outages to a lot of users because we went the route of "it is just good enough; hey, after all, it is only a desktop!" These outages translated into lost revenue, bad morale, and an unrecoverable loss of confidence from the user community, further breeding the already existing mantra of "IT SUCKS!" Bottom line: one desktop blows up, big deal; you have one pissed-off user who was probably pissed off at something else anyway. 100+ go down and you now have a lynch mob with a purpose, and that purpose is to blame IT for everything from Lincoln's assassination to the space shuttle Columbia disaster.


I agree, one should be able to simply say, "Use RDP for now"; however, when you scale a solution like this, contacting every user to tell them to use a different connection mechanism is confusing and costly to manage. Please keep in mind that I base my opinions on numbers.


Lastly, I disagree (in our use case of VDI) with the quote from Citrix, "it is just a desktop." In the case I cited in the previous post, it is not a desktop but an application delivery mechanism, a tool if you will. A tool that needs to be looked at much like TS/Citrix delivery. The appeal of TS/Citrix is the redundancy, flexibility, and security. What we did was use this tool to overcome a limitation in TS/Citrix that was specific to our environment. An environment, mind you, that encompasses over 6000 already-packaged applications for Windows XP, not 2K3 Terminal Services. Therefore, enabling applications on TS/Citrix, for us, can become costly and time-consuming (there are many reasons why, mostly because we operate in a regulated environment). VDI, in this case, is a turnkey application delivery mechanism to a mass of users where deploying physical assets poses a challenge for many political, economic, and security reasons. However, we have not done VDI to save money; we knew the costs. Like my dad used to say, "If you have to ask the price, you can't afford it!"


Clearly there are two schools of thought. One is that VDI is a "desktop replacement"; the other is that it is simply an application delivery augmentation mechanism for the physical desktop. Until the VDI space matures (I have to disagree with Brian on this; I think it will be more like Q4 2011 to 2012), the virtual desktop is still very distinguishable from the physical one. In my experience, based on what I have seen and done, VDI has been used as an application delivery augmentation solution installed in parallel to an already delivered physical asset.



@shanetech. Good discussion. The reason you had issues was that you broke the 'it's a desktop' thing by introducing a single point of failure into your infrastructure. If you had instead kept a single user connection to a desktop, 1:1, and implemented a connection architecture that preserved the same risk boundary, you would not have impacted as many users. The fact that you stuck a broker in the middle means it's NOT a desktop anymore, so I agree that in that case you need to build in a lot of complexity to make it scale and be reliable. Do you see the subtle difference in what I am saying here, and why I totally agree 'it's a desktop'? The rest is just vendor single-image BS for a long time, until they prove a real TCO story, which will take time to mature and scale.


This same logic also applies to why sticking 80 VMs on a server is just stupid. The exposure is too great for many use cases.


This is also why you need the ability to connect directly to the host, without the broker, without the user knowing the difference. Just like a desktop: allow me to CONNECT.


If you want application delivery, then design for that use case and don't confuse it with delivering a desktop. Applications can be delivered in many ways, including ESD, and for your ESD I am sure you don't allow a single point of failure.


So 'it's a desktop' is a very insightful statement, IMO. It's exactly this type of mix-and-match of use cases that is leading to poor implementations.


BTW, nothing personal here, just keeping it real. I ALWAYS say what I think to help, and sometimes I may offend, so apologies in advance.


@Brian, a suggestion: there should be more information and coverage on your site about how people are successfully implementing this stuff. It's all very nice reading about this technology and some geek toolsets, but I really think understanding how to implement, why, which use cases, etc. would make for great reading and education.



@appdetective


Agreed, great discussion. I see your point, and I thank you for challenging my ideas; by no means am I offended. I also say what I think and try to base what I post on what I have seen and done, regardless of whether it's right or wrong in your eyes or anyone else's.


One last thing: I agree, the broker concept is another added technology that complicates things, unlike the physical desktop that just allows the user to "CONNECT." However... and we could do this dance all day... your assertions about our use cases are myopic, since you do not have our full set of user requirements, nor do you know the business. I would be more than happy to fill you in offline.


I see the subtle difference. I really do, and here is why:


About three years ago, we built 50 Compaq workstations, enabled RDC, and gave out one-to-one mappings to the desktops. It worked. It was also a *** to manage who had access to what VM, and MACs (not the platform) were a nightmare. Not to mention the datacenter owner had had enough of us polluting the floor with desktops. Then came ESX 2.5, so we were able to get the machines off the floor and into a rack. Great, but we still had the one-to-one MSTSC mapping thing going on, and now we were close to 250 VMs. The nightmare got worse: entitlement was impossible, users were connecting to each other's VMs, and because our build is very heavily tied to the user profile and one-time personalization, none of their apps worked. Then came offshoring and the requirement to have minimal NAT routes and minimal ports opened. Well, that blew the old MSTSC one-to-one thing out of the water, so someone decided to publish MSTSC on the Citrix Web Interface and give them that. OK, so we solved some NAT and port complexities, but we still had to manage what the user was entering. Then came VMware VDM 2.0 and, in parallel, a security mandate that said all traffic to offshore partners must be over SSL. Oh great! Moral of the story… a broker was needed in this use case; however, we should not have built it with a single point of failure. Like I said earlier, it was the mindset of "It's just a desktop; who cares about redundancy."


I post this stuff to illustrate what has happened to us. It’s not right or wrong, it just “is”.


I agree, sticking 80 VMs on one server is stupid, especially when those VMs are on DAS. We rolled the dice and lost. My whole point in this was the "second class" question, specifically around implementations that require HA. What we are doing now is 76 VMs per cluster. Each cluster consists of 2 servers, each with 4 quad-core CPUs, 128GB of RAM, 4 quad-port NICs, and 300GB LUNs (no DAS). Expensive? YES! Could it have been done better/differently? YES! Could I have done my life better to this point? YES!! ;-)
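A quick sanity check on those cluster numbers (the N+1 assumption is mine; the comment doesn't say whether one host must be able to carry the full load):

```python
# Per-VM RAM arithmetic for the quoted cluster: 76 VMs, 2 hosts, 128 GB each.
vms, hosts, ram_per_host_gb = 76, 2, 128

print(ram_per_host_gb * hosts / vms)        # ~3.4 GB/VM with both hosts up
print(ram_per_host_gb * (hosts - 1) / vms)  # ~1.7 GB/VM if one host fails (assumed N+1)
```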



@shanetech. As long as the broker is a critical piece that prevents you from making a connection to a desktop that is already running in the datacenter, it breaks a key design principle in my book. If the broker dies, I should still be able to connect to my last known good desktop. This would make it much more like a desktop :-) and it means that while the brokers need failover for the numerous reasons you bring up, it does not have to be a disaster if one goes down. The desktop dial tone must be there.
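A minimal sketch of that "desktop dial tone" idea in Python: cache the user's last known good desktop so the client can still connect if the broker is down. The broker lookup function is hypothetical; only the control flow is the point.

```python
# Sketch: broker-optional connection flow. broker_lookup is a hypothetical
# callable that asks a broker which desktop a user owns.
import json
import subprocess
from pathlib import Path

CACHE = Path.home() / ".last_good_desktop.json"

def resolve_desktop(broker_lookup, user):
    """Ask the broker first; fall back to the cached last-known-good desktop."""
    try:
        host = broker_lookup(user)                  # hypothetical broker call
        CACHE.write_text(json.dumps({"host": host}))  # remember the assignment
        return host
    except Exception:
        if CACHE.exists():                          # broker down: dial tone anyway
            return json.loads(CACHE.read_text())["host"]
        raise

def connect(user, broker_lookup):
    host = resolve_desktop(broker_lookup, user)
    # mstsc is the standard Windows RDP client; /v: targets the host
    # directly, with no broker in the connection path.
    subprocess.run(["mstsc", f"/v:{host}"])
```

With this flow, a broker outage only blocks new or reassigned users; everyone with a cached 1:1 assignment can still reach their desktop.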


The 1:1 management overhead is a valid concern if you don't have a good way to deal with it. I have, but I get why many folks don't. So a non-critical broker should help us get to the "it's a desktop" world.



@appdetective - This is one of the key reasons why I'm so pissed off that the ICA/CGP listeners are not active out of the gate. The whole notion that the broker has to instruct the XenDesktop session to listen on ICA is absurd. I know Citrix is moving further and further away from a PN-like client, but BOY would it be nice to just open an ICA client, type in a PC name, and BANG, be ICA-connected on a 1:1 mapping to a preassigned VDI machine. I recognize this isn't part of their whole Agile/Provisioned/Brokered mechanism, but I sure bet it would greatly simplify the mess that is today's Virtual Desktop Agent.


Shawn



Great post and very interesting comments. We are looking for more feedback regarding HDX/ICA-only connection scenarios. This thread has identified some desired production use cases; we are also interested in demo/trial scenarios to get HDX into the hands of more IT pros to put to work and get familiar with XenDesktop.


Let us know your thoughts & votes.


community.citrix.com/.../toS8B



@Chris Fleck, please make Shawn Bass happy :-) Seriously, this would be a key feature to add to your XD product. It gives a new way to adopt the product for a tier of users while the brokers mature. Good to see that you guys are listening.



My 10 Cents would be to give you this to think about...


Servers = many users = predictable behaviour
Desktops = single user = unpredictable behaviour

Servers = slow peaks = time to adjust resources (e.g. vMotion)
Desktops = instant peaks = no time to adjust resources = users wait

So basically, combining these two different types of load behaviour could actually increase the overall density of VMs, leveraging their opposite profiles to mutual benefit.

Really.
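A toy Python sketch of that argument, with made-up load numbers: if server and desktop peaks don't coincide, a mixed host needs less total headroom than two dedicated hosts each sized for its own peak.

```python
# Toy model: % CPU load over six time slots. Servers drift, desktops spike.
# All numbers are invented purely to illustrate the statistical-multiplexing point.
server_load  = [70, 60, 50, 55, 65, 70]   # gradual, predictable
desktop_load = [20, 80, 90, 85, 30, 20]   # bursty logon/app spikes

# Dedicated hosts: each must be sized for its own workload's peak.
dedicated_capacity = max(server_load) + max(desktop_load)              # 160

# Mixed host: sized for the peak of the *combined* load.
mixed_capacity = max(s + d for s, d in zip(server_load, desktop_load))  # 140

print(dedicated_capacity, mixed_capacity)  # mixing shaves off headroom
```

Of course, this only helps if the peaks really are anti-correlated; a 9 a.m. logon storm hitting alongside a server batch job erases the benefit.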


