The hidden costs of VDI

Last week I gave a breakout session at Citrix Synergy’s Virtualization Congress called "The REAL cost of VDI." This was not about the cost of losing your job if you built a bad VDI environment; rather, it was about the hidden costs of VDI that many people don’t consider until they’re like, “Oh shit” during the middle of their deployments.

Before we jump into this, I want to point out once again that I like VDI where it makes sense. (Watch this video of me presenting “TS versus VDI” to understand where each makes most sense.) I should also point out that for this entire article, I’m talking about the VDI flavor of desktop virtualization, which is server-based computing with users connecting to single-user VMs running in a datacenter. I’m sure there are hidden costs in the other flavors of desktop virtualization too, it’s just that most of those are too new for us to understand the full cost structures yet.

A quick note about cost models

The purpose of this article is to discuss some of the unexpected expenses that crop up in VDI projects. It’s NOT about how to perform your own TCO or ROI analysis. While you can factor some of these things into your own models, it's really easy to lie or mislead with cost models. I'm not saying that all cost models are bad; I'm just saying that you can make them show whatever you want. (There’s a great book by Gerald Everett Jones called “How to Lie with Charts.” I’d love to write one some day called “How to Lie with Cost Models.”) [December 2009 update: I did write this article! :)]

As those of you who’ve been reading this site for awhile know, VDI is just a flavor of server-based computing (SBC), just like Terminal Server. So when we’re thinking about the hidden costs of VDI, we can actually break those costs into two categories:

  • The hidden costs you find in any type of server-based computing
  • The hidden costs you find only in the VDI-type of server-based computing

Let’s first take a look at the hidden costs that we find in all flavors of server-based computing. This is the stuff that’s well known to old-school Terminal Server or Citrix engineers.

The hidden costs of server-based computing

Not being able to get rid of legacy systems

A lot of people implement server-based computing to save money. If you're thinking about doing this, it's important that you figure out if your new SBC system will entirely replace an existing system or if it will be in addition to an existing system.

For example, if you can remove every single fat client from your environment and go 100% SBC, then I think yes, there are huge savings there.

But if your server-based computing system can only replace 80% of your apps, then that means you still have to maintain your old system for the other 20%. That means you need your old patching system, app deployment system, etc. And in that case, even though the new server-based computing system is easier to manage than your old system, it actually ends up giving you negative ROI because it's a whole system you have to implement in addition to your old system. It's just that much more stuff to break.

Changes to user paradigms

Server-based computing offers a lot of advantages over traditional computing. Unfortunately a lot of the stuff that's really cool to us as administrators can sometimes confuse the users. And confused users lead to more helpdesk calls which cost money.

A great example is Citrix's SmartAccess capabilities that are integrated in their Advanced Access Control product. This set of technologies is amazing, and I've written about how awesome they are on several occasions.

Unfortunately they're also a brilliant way to confuse the heck out of your users.

For example, these technologies have the ability to scan a client machine and then change the way an application behaves (or even hide entire applications) based on the results of that scan. Good for us admins! But imagine this from our poor users' standpoint, where sometimes cut-and-paste works and sometimes it doesn't, or sometimes they see an app and sometimes they don't. The cost of supporting these users and their new technology is very real. And the problem is getting worse as the SBC vendors do more and more to “hide” the fact that users are using remote apps.

Thinking things will scale linearly

Since all SBC solutions (TS and VDI) consolidate user execution in the data center, we need to have a good understanding of hardware requirements and scalability before we buy hardware or even get approval for the whole environment. This leads to one of the oldest hidden costs there is: we do a test and find that we can support x users per server, so we assume we can automatically support nx users on n servers.

And anyone who’s ever deployed Citrix knows it doesn’t quite work that way. ;) It seems like there’s always some bottleneck that we don’t find until we’re deep into the project, and it has to be addressed. So when we’re modeling this stuff, we need to think about all the “other” stuff that we need to scale up as we add users. Think about networking, backup capacity, disk bandwidth, domain controllers, etc.
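As a toy illustration (every number here is invented for the example, not a benchmark), here's roughly how the naive linear math diverges from a model where each server loses a slice of its pilot capacity to shared bottlenecks:

```python
import math

def naive_servers(total_users, users_per_server):
    """The tempting pilot math: x users per server, so nx users on n servers."""
    return math.ceil(total_users / users_per_server)

def adjusted_servers(total_users, users_per_server, efficiency_loss=0.10):
    """Assume each server loses ~10% of its pilot capacity once the whole
    farm shares bottlenecks (disk bandwidth, network, logon storms, DCs)."""
    effective = int(users_per_server * (1 - efficiency_loss))
    return math.ceil(total_users / effective)

print(naive_servers(500, 50))     # pilot math says 10 servers
print(adjusted_servers(500, 50))  # bottleneck-adjusted: 12 servers
```

The 10% figure is a placeholder; the point is just that whatever the real loss is, you only discover it mid-project unless you budget for it up front.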

The hidden costs of VDI

Even though the VDI version of SBC is a lot newer than the TS version of SBC, we still have a pretty good understanding of where the big hidden costs are.

Hidden cost #1: Using VDI

If you choose to use server-based computing, you need to understand that VDI is more expensive than Terminal Server. Period. This is something that everyone agrees on, including Citrix, Microsoft, VMware, and Brian.

By the way, in case you’re wondering “Why would anyone use VDI if it’s more expensive than TS?”, people choose VDI because it has features that they can’t get in TS. But these features come at a price; namely, money. :) VDI is more expensive than TS. And that’s ok, because IT is not about implementing the cheapest solution—it’s about implementing the cheapest solution that meets a business’s needs.

The reason I list this as a "hidden" cost is because a lot of people end up using VDI in scenarios where Terminal Server would work fine. This usually happens because they only compare VDI to their traditional environment, and they don’t even consider TS-based solutions.


Hidden cost #2: Storage

Think about how much disk space all those copies of Windows running on user desktops consume in your environment. That's got to be what, 20GB per user? Now imagine if you implement VDI. You take 20GB per user and move that from cheap throw-away storage on your desktops to expensive SAN-type storage in your datacenter. That's a crazy cost!
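A back-of-the-envelope sketch of that delta (the per-gigabyte prices below are made-up assumptions purely for illustration, not real quotes):

```python
users = 1000
gb_per_user = 20

local_cost_per_gb = 0.10   # assumed throw-away desktop disk, $/GB
san_cost_per_gb = 10.00    # assumed fully-burdened SAN storage, $/GB

local_total = users * gb_per_user * local_cost_per_gb
san_total = users * gb_per_user * san_cost_per_gb

print(f"local: ${local_total:,.0f}")  # local: $2,000
print(f"SAN:   ${san_total:,.0f}")    # SAN:   $200,000
```

Even if your real SAN price per gig is a fraction of that assumption, the multiplier against 1,000 users' worth of Windows images is what blows up the budget.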

Of course in today's environment, most people don't actually have all those copies of Windows stored in the datacenter. Today’s VDI deployments typically have data deduping in the SAN, or they use one of the “thin provisioning” solutions like Citrix Provisioning Server or VMware View Composer with Linked Clones.

The weird irony of these thin provisioning solutions is that they only make sense when all your users will use the same disk image. But if all your users can share a similar disk image, then why are you using VDI in the first place? Isn’t that what TS is for? And if your users each need their own totally custom disk, then you still have to manage, store, and back all that stuff up.

Another aspect of storage is the storage bandwidth between the VDI VM hosts and the storage locations. If you’re using Citrix Provisioning Server, you’d better ask around and find out what the server-to-VDI-user ratio is. Same for VMware’s View Composer with Linked Clones: ask how many VMs you can get per LUN.

Windows Licensing (VECD)

Remember that every VDI environment needs a VECD license, and that’s going to cost you $23 per device per year in addition to your SA license fees. (VECD jumps to $110 per device per year if you don’t have SA.) While that cost is about the same as a TS CAL (so it’s not a hidden cost in the context of “TS versus VDI”), it represents a completely new cost if you’re jumping to VDI after having done things the “old” way for the past several years.
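Using the per-device figures quoted above, the line item is easy to sketch (the device count and project length below are hypothetical):

```python
def vecd_cost(devices, years, has_sa=True):
    """VECD fee over the life of a project, using the figures in the article:
    $23/device/year with SA, $110/device/year without."""
    rate = 23 if has_sa else 110
    return devices * years * rate

print(vecd_cost(500, 3))                # $34,500 with SA
print(vecd_cost(500, 3, has_sa=False))  # $165,000 without SA
```

It's not a huge number per device, but it's a brand-new line that doesn't exist in a traditional desktop budget.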

Complexity of the unknown

The reality is that VDI is still pretty new. The exact estimates vary, but there are probably around 1 million VDI users in the world, versus about 100 million TS users. For TS-based projects, there are books, forums, white papers, articles… everything. For VDI, there’s… well… there’s a lot less. Even for me, I feel like every time I see a VDI project I’m learning as I go along. Contrast that to TS projects that most of us could do in our sleep.

Again, remember this article is focused on the hidden costs of VDI. I’m not saying that exploring the new unknown is bad. (In fact I think it’s pretty good and cool!) It’s just that you need to understand that this new unknown exploration will cost more than if it were a known thing. (The cost of the unknown is mostly in wasted time looking for solutions and figuring stuff out—both during the project and in support afterwards.)

Not thinking about non-compatible apps

Most VDI cost models are based around multiple users sharing the same base disk images, and the idea is that most of these will be customized on-the-fly by having applications inserted on-demand with application virtualization technologies like App-V, ThinApp, XenApp streaming, InstallFree, etc. The problem is that these app virtualization products aren’t compatible with all apps, meaning that you’ll have to figure out some way to deal with your “other” apps that won’t work here. How do you do that? Do you deploy those apps to local workstations? Do you install them into the base Windows image for the VM? If you do that, how do you regression test them against each other? Or do you build multiple images?

All of this adds complexity that a lot of people don’t think about when they’re building the cost models for their VDI environments.

Not knowing Windows XP well enough

If you thought you knew Windows Server before TS, I guarantee that after your first big TS project, you’ll really know Windows! There’s just so much stuff to learn, from how SBC handles multiple users, to kernel thread quantum scheduling, to how the print router prioritizes different print jobs, to how userenv.dll loads roaming profiles. And all that knowledge is based on what, fifteen years of knowing Terminal Server?

Now imagine taking something like Windows XP that ordinarily runs on a bunch of desktops out in the office, and bringing that into your datacenter. You need to “datacenter-ize” your copies of XP and really (I mean REALLY) understand how they work (and how your hypervisor deals with scheduling and memory and I/O access and everything). And there are a lot of “gotchas” here that really aren’t that well known.

For instance, did you know that Windows XP will automatically defrag the important system files once it’s been booted three times? Guess what that does to your preciously tiny “delta” disk image files if you happen to have deployed a master disk image before that process kicked off? And while we’re on the topic of disk imaging, does anyone know what exactly should be included in a disk image, and what shouldn’t?

There are about 1,000 questions like that that the industry is just now figuring out, and getting something like this wrong can cost you big time (either in time spent troubleshooting or in wasted money buying too much hardware to cover for the poor performance).

Vendor products that change too fast

Citrix had a monopoly in the TS-based SBC space for over ten years. One of the nice things about a monopoly is the slow pace of product development and updates. That means you don’t have to learn too fast and you don’t have to change your environment too fast.

Compare that to the blazing pace of development of VDI products. Both VMware and Citrix released two major products each in the past twelve months. These things are changing so fast you just can’t keep up. And fast changes mean more time spent learning and studying, which leads to less time actually doing your job, which leads to a higher cost.

These fast-paced changes also mean that you’ll end up changing and upgrading your system more often—another hidden cost.

Not knowing which vendor is going to “win”

This is only a problem if you pick the wrong vendor. :)

But it is a serious point. There are so many VDI vendors in the space today, and the space is too new to know who’s going to “win.” So if you buy a product from one vendor, and then in another few years the space becomes dominated by someone else, that means you’ll have the huge expense of either (1) migrating to the popular vendor, or (2) trying to support an ever-more obscure product.

Bottom line

I’m sure there are more hidden costs out there, but this is certainly a starting point of things to think about. Even if you don’t agree with every one of these points, at least now you can have a list and then check these off one-by-one for your environment instead of being blind-sided and thinking “I wish someone had told me about x.”

So until you’ve done this a few times, add a 15% “wastage” budget item to every VDI project you touch (for both CAPEX and OPEX).
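In budget terms, that suggestion is just a multiplier on both lines (the dollar figures below are placeholders for your own numbers):

```python
def with_wastage(capex, opex, wastage=0.15):
    """Pad both budget lines by the suggested 15% 'wastage' item."""
    return capex * (1 + wastage), opex * (1 + wastage)

capex, opex = with_wastage(200_000, 80_000)  # hypothetical project budget
print(round(capex), round(opex))  # 230000 92000
```

Once you've burned through a couple of projects and know where your real overruns land, replace the flat 15% with line-item estimates.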

Join the conversation



I really appreciate your articles. I'm learning a great deal.

However, I have to point out the inconsistency in your arguments. Last month you were posting about VMware's misleading comparison of VDI vs. traditional desktops. In this post you're making the same comparison where it suits your argument, and comparing to TS where it doesn't.

Still we're finding it valuable to read the counter-argument while pressing on with our VDI project. So keep up the good work.


Nice write-up; wish I was at Synergy to attend. You touched on an interesting point, "Not knowing XP well enough." One thing I've noticed with VDI deployments is that you're also taking the desktop environment, usually managed and maintained by a "tier 1" helpdesk/PC support group, and shifting it to the "tier 2" server/network operations group. You can argue a desktop is a desktop regardless of where it lives, but do you want inexperienced helpdesk admins provisioning new VMs, etc.? There is a big learning curve for some of those people who just know static PC desktops.



Thanks for the compliments. In this case, I don't see the inconsistency. I was upset with VMware @vmworld because they were ONLY comparing VDI to traditional computing. In that case, I agreed with them that VDI would be cheaper than traditional. Where I didn't agree was that they left out TS, which could have been cheaper than VDI.

Really you need to look at VDI, TS, and traditional and pick whichever one is cheapest that still solves the business problems you'd like to solve.

Today's article is kind of strange, though, since I'm only looking at cost. I'm not trying to convince people to use VDI or TS or anything, I'm just saying that if you choose one platform over another, there will be costs to think about.

Thanks everyone! Keep those comments coming...



I concede my objection is misplaced.

Possibly it's because you're one of the few not leaping on the bandwagon, but you do come across as something of a VDI-hater.


Very good point. We have both a desktop and a server team, and we're planning to create a separate farm (probably XenServer) for VDI, away from our VMware server infrastructure.

Still, I wouldn't want our 1st-line helpdesk having much power on it.


Good points Brian. I don't think there is anything wrong with choice to meet different business use cases. VDI makes sense in some use cases and so does SBC in many more at a better ROI. So being able to do both makes a lot of sense and solutions that enable choice are great.

Another cost consideration is the management of the hypervisor. Today ESX is the popular choice, and from there I think there is a false assumption out there that this means it's the obvious VDI choice. Outside of protocol etc., to your point about understanding the OS, I just don't buy that MS will not win this battle with Hyper-V. So anybody implementing VDI on ESX will have to figure out how to manage Hyper-V as well over time. This is a huge consideration in terms of cost looking forward.

Also, to your point about storage cost, why does VDI have to be done on complicated SAN-type storage? There is no reason that it can't be done on local storage at lower cost, better performance, and less complexity. The false economies of scale with SAN just don't exist.



I just want to point out that you're using the term "SBC" to mean "TS", right? Because I think VDI *is* SBC, so the better ROI in many cases that you point out is really TS, not SBC.

Sorry to nitpick, but this is a huge deal for me.

Good points on the other stuff though.


@appdetective RE: Storage

I see your point regarding SAN vs. local (DAS) storage. Any organization using Tier 1/2 FC SAN for a VDI deployment, unless it was negotiated at some ridiculously cheap price, is paying way too much per gig for a workstation/VM disk. With that said, last year our outsourced vendor received a request for VDI for an off-shoring entity, and to cut costs they requested 2 TB of DAS. I vehemently disagreed and argued that although DAS certainly is cheaper, using DAS for a VDI deployment of 10-plus users (in this case it was 80 per server) is like handing a sixteen-year-old a bottle of whiskey and your car keys; nothing good comes from it. Not wanting to adhere to the warning, they proceeded with DAS, and sure enough, right in the middle of their pilot, one of the servers started to exhibit several issues. Through months of intermittent PSODs, user outages, and troubleshooting, they realized that there was a bad memory board in one of the servers. The impact was ultimately felt by the users, confidence in the product was diminished, and I also feel they wasted too much time troubleshooting the server hardware. That being said, if the storage were shared, this issue would have been invisible to the users (VMotion/XenMotion). (Yes, I know you can also move DAS around, but if I am going to gamble, I am going to Vegas; the food and attractions are much better than the data center.)

For a true VDI deployment with 10+ users, not using shared storage, NAS or SAN, is just asking for trouble. The money you save up front you will certainly burn through on the backend and unfortunately user confidence is not something you can put a price on. Bottom line, use the lessons and best practices learned from Server Virtualization.


The company I work for is currently getting killed by the “Hidden Costs of VDI” monster, and it is all coming down to storage and the lost productivity due to issues that plague SBC, not just VDI (WAN latency, connections dropping, etc.). From what I understand, we should be taking delivery of some NetApp gear to leverage at a NAS/iSCSI level, and hopefully they decide to utilize the Data ONTAP features. We haven’t even begun to look at SVI (Linked Clones). From my point of view, our current corporate image could not be any more anti-VDI if we had architected it that way from the start. Hopefully this year we can change that.


Hi Brian

I like your last point "Not knowing which vendor is going to “win” "

I think this is and will be a big issue... we see it everywhere in the technology industry: Blu-ray vs. HD-DVD, Plasma vs. LCD (and now LED, soon OLED). And then the one I most like to compare it with: Xbox 360 vs. PS3.

Where I see it, Xbox 360 = MS Hyper-V and PS3 = VMware ESX. Maybe we can take the Wii into the picture and call it Citrix XenServer, but as time goes on, I only see Citrix doing what they are expert in: building on and improving existing Microsoft technology. Yes, they will probably continue with the Xen hypervisor (we can call it the Citrix/Microsoft bare-metal hypervisor), but I think they will focus more and more towards Hyper-V, or at least make sure every upcoming product will support Hyper-V as well as XenServer. A new example is Citrix StorageLink.

Now we all know that when Microsoft enters an area (in this example the console market and the hypervisor market), they will not give up until they have reached their goal… cost is not a problem… (How much did they lose per Xbox 360 they sold to begin with?)

Microsoft owns a bit of Citrix, Hyper-V and XenServer use the same VHD image format, etc. Same, same, but different...

VMware is the leader at this moment, as Sony was with the PS and PS2. But Sony has lost a great deal since the Xbox 360 and Wii entered the market. So will VMware; it’s just a matter of time before they can’t implement new features faster than Hyper-V and XenServer. They have done it with vSphere 4 (implemented cool new features), but still… take a look at the price… whooo… I was shocked the first time I got a price on ESX 3.5 (same price in vSphere 4, just some different licensing method, 1 CPU vs. 2 CPU, etc.).

I know this could be a totally separate subject, but I really don’t see VMware in the market in 5 years unless they lower their prices or maybe buy Citrix (again, Microsoft owns a bit of Citrix, so they will probably not sell, and Citrix is also too big for VMware to buy).



@Brian, yes, by SBC I mean TS, and you are right to point it out; it's what I did a bad job of getting across :-(

@Shanetech, let me throw this out there. Today my good old fat desktops have a hard drive, and if they fail, guess what, they fail. I don't use NAS, SAN, etc. So for VDI, why should it be any different? I don't believe VDI means high-availability storage for the masses, nor is it a requirement. Hence I say TS (SBC) is the cheapest way to do this. VDI is a different use case ("session isolation") and does not require expensive shared storage. That is the great myth that is being spread by a storage company that happens to control a virtualization company.....


@appdetective - I don't know how much I agree with that last statement.  A single hard drive failure on one fat desktop affects one user.  A hypervisor with only local storage failing affects a number of users.  That's why you have load-balanced servers in your SBC environments, correct?  Not so much for high availability, but for some kind of fault tolerance and to eliminate a single point of failure.  Yeah, you could quickly re-provision VMs with PVS or View and linked clones, but your users have no alternative until that process is complete.

Even as we now go down the road of quad-core Nehalem processors and servers with 1TB of RAM, the bottleneck is now becoming disk I/O and not CPU or memory constraints.  The more users you're pumping through VDI, the more spindles you need; it's physics at this point - that's why I still prefer TS! :-)



My point is that with shared storage, people are trying to reinvent the traditional desktop for high availability when that's never been the case for the vast majority of use cases. There is no need for shared storage for a traditional desktop, and the OpEx efficiencies claimed by the vendors won't be there for a very long time. Even when the mgmt tools mature to enable a layered approach and perform and scale, this still does not require shared storage. That is all I am saying. I am not saying TS does not have a billion benefits that meet some use cases but fall short in others. As we get to greater core density, I think that's good for TS and VDI; however, one has to think about concentration risk for your business. For me that's important. Even if I can squeeze more TS or VDI sessions onto a box, I won't, due to concentration risk, and also to allow for a more predictable user experience by not over-committing memory or CPU. My OpEx goes up when the user experience is not predictable, so it's a fine balance. That need takes care of disk I/O in my case. Also, with respect to disk I/O: if you use shared storage and need to boot a lot of VDI sessions (patching), the read hit is high and you will need EXPENSIVE cache on those boxes, which kills your CapEx. Back to the original premise of this article: I 100% agree there are so many hidden costs. I agree TS is the best ROI and has mgmt tools today and mind share, and you can hire people with skills. VDI meets many use cases where session isolation is required; shared storage is not required, the new layered mgmt tools are not ready, so there's lots of unknown ROI still, but many benefits for early adopters like me who do both VDI and TS.



I don’t necessarily think that the storage companies are trying to mandate HA/FT for desktops, and you are correct, we currently don’t do it, so why should we start? But I think the key piece you are missing is that if you use DAS on a single Xen/ESX server and you have a complete server failure, you now have a work stoppage (wasted time and $$$) for ALL users on that server and not just one. It is simple FT/HA enterprise fundamentals. The statement that the virtualization company just so happens to also be a storage company only holds true if we are speaking of EMC/VMware. I don’t want to get into use cases for VDI/TS and ROI, CapEx, and OpEx because those are such vast debates; my previous example was only to illustrate that if you are using VDI for any amount of users with a defined SLA and you are using DAS, you are taking a risk. That risk to some companies may be worth not having the cost of shared storage, and if that business vertical dictates that those risks are acceptable, then I agree: roll the dice, stay away from expensive SANs, etc. My corporate enterprise experience only deals with regulated verticals where qualification and PMO activities are sometimes more important than the actual technology delivery, but conversely, my consulting experience in the SMB market aligns more with your points.

I appreciate the challenge in ideas though, there is always something to be learned!



Good discussion. I just don't buy that anybody has a DESKTOP that is more reliable than their local HD. So even if a blade goes down, that only represents a few desktops; hence my concentration argument vs. adding complexity. If I want dynamic failover with personal apps etc., it's easier for me to use TS with my favourite app virt vendor.


I totally agree with Brian's comments. VDI may be cool, but does it make sense as a 100% solution?

Implementing a virtual desktop infrastructure ought to be about using the most appropriate and least expensive platform that will host your applications. One of the things that made TS too hard for a lot of people was the difficulty of running all your applications on TS. That's one of the major attractions of VDI/DDI, but swapping integration costs for hardware costs really doesn't make that much sense when money is tight.

I think the 80% of stuff that runs on TS ought to be hosted on TS, and the 15% that only runs on a single user OS ought to be on VDI, and the 5% that needs a GPU/excessive CPU/memory should run on physical PCs (DDI). While Virtuozzo lets us blur the TS/VDI difference and is potentially the most efficient application hosting platform, no single application platform provides the most cost effective solution.

If centralized management and thin clients make sense, then use a TS desktop and publish seamless VDI/DDI hosted application on to that desktop as needed. Or if you're using laptops/fat clients publish seamless applications from all the platforms (TS/VDI/DDI).

Your remote users can "run" 99%+ of your applications, you've minimised your hardware costs and provided you have a consistent application deployment technology for the back-end, you don't have to go nuts trying to make it all work.


So, I met with Atlantis Computing yesterday; aside from it being a very fun meeting and giving me renewed faith in IT sales, it was also very educational. I had seen ILIO on BrianMadden.TV and I wanted to hear more about their product. I reached out to Atlantis and they were more than happy to meet up with me at our site. After our meeting I started thinking about how ILIO was looking at VDI. They are definitely taking a different view of the “single image” conundrum VDI finds itself in. Their view makes a lot of sense when compared to the VMware “View” (pun intended) Linked Clone technology. One takeaway from the meeting was: “Storage is a symptom, but not the disease.”

That statement really resonated with me because in our environment we are so storage heavy. Although we can bear the cost, those costs seem very wasted, and like appdetective pointed out, we don’t have these costs/requirements in the physical desktop world, so why should we in VDI? Great question, and true, but the reality is that we have to bear them because of the FT/HA requirement inherent to “many to one” (SBC) technologies. In the past, data center storage never had the requirement to accommodate desktop VMs thinly, and by the nature of Windows, these linked clone technologies, although they do work in some cases, do not work for all.

I think where Atlantis is going is in the right direction, or at least in a different direction that will hopefully inspire other companies to come up with different ways to tackle the complexities of VDI.


Well, you are half correct on the above.

If you don't have SA, you can purchase VECD (not VECD for SA) and it will give you the full license for Vista Enterprise (and downgrade rights to previous OSes). Granted, it's at a higher cost, but it comes with SA.