A short guide to virtualizing Presentation and Terminal servers on VMware ESX 3


This article was written by René Vester, but the author got mixed up during a migration.

Basic tips to help virtualize Terminal Servers and Citrix Servers on ESX 3

Server-Based Computing has long been the most common method of centralizing and consolidating applications. Bringing applications to users without worrying about physical location has been a key aspect of centralizing the infrastructure. Since VMware is often a key player in disaster recovery procedures, it is important to have at least some virtualized Terminal or Citrix Servers. The better the performance we can pull from these servers, the more users we can get back to work during a disaster.

I have spent a while trying to find the best practice for running virtualized Terminal or Citrix Servers, and I will try to go through the things I have found. The performance improvement of the servers is not really something that can be rated, as it all depends on the individual setup of applications and load. But the suggestions I will mention here can be used together, or individually, in whatever way works best for your environment.

There are two levels at which these changes can be made: host settings, which apply to an entire ESX host, and virtual machine settings, which apply only to an individual virtual machine.

Virtual Machine settings

Advanced settings

Just a quick run through of setting the advanced settings on the Virtual Machine:

  1. Select the virtual machine in the VI Client’s inventory panel, and choose Edit Settings from the right-button menu.
  2. Click Options and then click Advanced.
  3. Click the Configuration Parameters button.

  4. In the dialog box that is displayed, click Add Row to enter a new parameter and its value.

Here are a few options that can be of interest in the Terminal and Citrix Server environments:


sched.mem.pshare.enable

Enables memory sharing for the selected virtual machine. This Boolean value defaults to True. If you set it to False for a virtual machine, memory sharing is turned off for that machine. This is a way to switch off memory sharing for a single virtual machine without modifying the ESX host itself.
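As a minimal sketch, assuming the per-VM option name documented in the VI3 Resource Management Guide (sched.mem.pshare.enable), disabling memory sharing for a single virtual machine would look like this as a Configuration Parameters row or .vmx entry:

```ini
sched.mem.pshare.enable = "FALSE"
```

The host-wide page-sharing settings described later are untouched by this; only this virtual machine opts out.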


das.defaultfailoverhost

If this option is set, VMware HA first tries to fail over virtual machines to the host it specifies. This is useful if you want to use one host as a spare failover host. It is not usually recommended, however, because VMware HA tries to utilize all available spare capacity among all hosts in the cluster. If the specified host does not have enough spare capacity, VMware HA tries to fail over the virtual machine to any other host in the cluster that has enough capacity.

This is useful if you have modified the ESX host settings on some of your ESX hosts and want to make sure your Terminal and Citrix Servers fail over to the desired, correctly configured host.
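As a sketch, assuming the VI3 HA advanced option das.defaultfailoverhost (entered as an option/value pair under the cluster’s VMware HA Advanced Options; the hostname below is illustrative):

```ini
das.defaultfailoverhost = esx-spare01.example.com
```

Note that this is a cluster-level HA option rather than a per-VM .vmx entry.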


sched.cpu.htsharing

This option controls how a virtual machine’s virtual CPUs share physical cores when hyperthreading is enabled. Three modes:

  • Any – (default) The virtual CPUs of this virtual machine can freely share cores with the virtual CPUs of this or other virtual machines.
  • None – The virtual CPUs of this virtual machine have exclusive use of a processor core whenever they are scheduled to it. The other hyperthread of the core is halted while this virtual machine is using the core.
  • Internal – On a virtual machine with exactly two virtual processors, the two virtual processors are allowed to share one physical core (at the discretion of the ESX Server scheduler), but this virtual machine never shares a core with any other virtual machine.

This setting is defined in the virtual machine’s .vmx file.
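As a minimal sketch, assuming the .vmx key documented for the hyperthreaded core sharing setting (sched.cpu.htsharing), forcing exclusive core use would look like:

```ini
sched.cpu.htsharing = "none"
```

Replacing "none" with "internal" would allow only this VM’s own two vCPUs to share a core.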

One CPU per Virtual Machine

Dual-processor Virtual Machines incur more overhead; this overhead has a dramatic effect on Terminal Servers that is not seen in other types of servers.

Disable Hyperthreading

Hyperthreading is not a guaranteed performance boost, and disabling it will increase performance on Terminal Servers. This is a costly option, however, as it maps virtual CPUs directly to physical CPUs, making VMware less attractive from a price perspective (depending on how you look at it). This applies to other servers as well, not only Terminal Servers.

General considerations

Some other things you should consider for your Virtual machine are:

  • Use LSI Logic SCSI Controller
  • Disable unused COM, LPT and USB ports within the Virtual Machine
  • Disable auto-detect for CDROM
  • Disable visual effects in Windows Virtual Machines
  • For Citrix Presentation Servers, do not over-allocate memory
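A sketch of .vmx entries corresponding to some of these points; the device names (scsi0, serial0, parallel0, ide1:0) are illustrative and depend on the virtual hardware actually present in the VM:

```ini
scsi0.virtualDev = "lsilogic"
serial0.present = "FALSE"
parallel0.present = "FALSE"
usb.present = "FALSE"
ide1:0.startConnected = "FALSE"
```

The Windows visual effects and Citrix memory allocation points are guest-level choices and have no .vmx equivalent.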

ESX Host Settings

Disable Page Sharing

Disabling Page Sharing will reduce the amount of vmkernel overhead for Terminal Server and Citrix virtual machines. On those guests memory contents change frequently, so the vmkernel would constantly have to generate hash values for the memory pages and compare them.


To disable page sharing, change the following:
Mem.ShareScanTotal = 0
Mem.ShareScanVM = 0

VMKernel Configuration

In order to avoid using PAE, set the Mem.AllocHighThreshold option to 4096. This causes the vmkernel to use the memory below 4 GB primarily for the VMs, saving the resources that PAE consumes.


Set Mem.AllocHighThreshold = 4096
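On the ESX 3 service console, the same host-level options can also be set from the command line. This is a sketch using esxcfg-advcfg, assuming the standard /Mem/ option paths for these settings; it is not runnable outside the service console:

```ini
esxcfg-advcfg -s 0 /Mem/ShareScanTotal
esxcfg-advcfg -s 0 /Mem/ShareScanVM
esxcfg-advcfg -s 4096 /Mem/AllocHighThreshold
```

The VI Client’s Advanced Settings dialog shown above makes the same changes; use whichever fits your change process.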

To virtualize or not to virtualize

I think it’s a question which has been raised a lot of times over the last few years, and I will not pretend to know the answer. I believe that the question is best answered on an individual basis, and I have found that having the best possible tools in the toolbox gives me the best chance for success.

For this article, I have tried to summarize what I have found on specific Terminal and Citrix Server optimization in a virtual environment. These same suggestions may or may not be appropriate for other virtualized platforms.


VMware Infrastructure 3 Online Library
Robin Prudholm, Senior Systems Engineer, VMware Nordic
Daniel Euteneuer, Senior Systems Engineer, Presenting at VMWorld 2006

Join the conversation



Hey Guys,

Had a few comments from readers about their experiences with virtualized Terminal Services and Citrix. Only a few have had good experiences, and I would like to point out more clearly that this article is in no way meant to encourage virtualizing your entire TS or Citrix farm. It is meant as an aid to make the experience the best possible on a partly virtualized TS or Citrix platform. Virtualizing part of the farm is also, in my opinion, a strong tool in preparing for a possible disaster recovery.



We moved to a virtual Citrix farm, and if we could go back to physical we would. We were expecting great things using VMware and Citrix, but that is not the case. We only use standard apps like Office and a few third-party Citrix-certified apps, and as soon as we hit 18 users per server the farm dies :-( When we have 2000+ concurrent users this is a problem, as we have 80 virtuals (25-28 users per box). We've started to add our old kit (physicals) into the farm, and from some extensive testing, users notice better performance on a physical box than on a virtual one with the same number of users connected.

Spend your money on physical Servers instead


I don't think anyone really suggests virtualizing everything should improve performance compared to physical hardware. However, virtual hardware has benefits too, and as a wise man said not too long ago, "it's really a question of seeing the tools we have and applying them in the situations where they are most beneficial". Virtualizing, streaming, monitoring, optimizing... tools, for us to apply where we get the best use.

Using a hammer to put a screw in a wall IS possible, but it is a lot more elegant and beneficial to use a screwdriver.



Could not agree more about the LSI Logic storage adapter. We have also found that there are different flavors of this beast, some of which exhibit complete lockups under heavy load. This is worth double-checking. There are many VMware articles on this, and it is a bit confusing.

We have seen major performance improvements with the new hardware (8-core, dual processors) and SAN-attached storage instead of direct attach. Which are you using?

One final thought: our performance team complains that they cannot get good data on VMs due to time drift. This is an obvious problem if you want to measure performance. Is this still an issue in the latest flavor?


Greg Askew 


Perhaps, but most of the benefits of VMware and DR can be achieved by using Ardence, and since Presentation Server is already fault tolerant, why spend/waste the money on features like HA?


Since when is over-allocating memory a problem with anything, assuming you have plenty of memory to go around? I don't doubt the statement, but could we get some deeper understanding of why this has the ability to improve performance of Citrix Servers on a virtual platform?

Why do people expect magic when using products such as VMware ESX? If you use it properly it will reward you well, but it doesn't prevent bad decisions and loading mistakes. People who say don't use ESX with Citrix just because they made bad implementation decisions should be ashamed. If you have the technical know-how and means, most solutions will benefit from and should be based on some type of server virtualization. Stop physical server sprawl and save the world.



How so? That is a very ignorant statement... ESX is suited only for a handful of implementations, as there are better virtualization techniques that will yield the same benefits. If you wish to save a tree, don't use ESX, and you will end up using less hardware for a typical Citrix deployment than you would with the ESX tax.



We saw similar performance (15 users per server) with a vSMP dual-processor virtual machine configuration. We found a single-processor VM performed much better (40 users per server) than the dual. IIRC, monitoring showed the dual-processor VM's vCPU %Ready stats were very high, indicating the dual-processor VM was ready to process but the host could not allocate available CPU.

In this particular case, we had an application that required local deployment at remote sites and 40 users per server was sufficient to run the application. We did not consider converting the centralized farm of 1500 users to VM because we could get 60+ users on a physical server.

Why use VMware when you can use XenSource!  :)

Vincent Vlieghe also created some nice guidelines. Always useful when you are considering running CTX/TS within a VM.



It'll be interesting to see the Citrix Best Practices in reference to running a TS/Citrix server on Xen.

Use Xensource with Presentation Server?

Has anyone tried?

This will be the challenge for Citrix. I am sure Citrix tested PS on Xen before the purchase.

There has to be more to this purchase than Citrix is admitting.

Citrix will make 50M in revenues from Xen next year? They would have been better off investing 500M for a 10% return each year. Better yet, they should have jumped on the VMware IPO for a 100 million return already. I know there is something we are all missing.


Works like a champ!  Just watch out for drive remappings.

In fact, I hear more and more Citrix customers are looking at XenSource now rather than VMware.  :)

I would be curious to hear from those reporting negative results virtualizing Citrix on 2-proc VMs with VMware. Was it on ESX 2.5.x or VI3 (3.0.x) servers? It does make a difference, as ESX 3.0.x has a more efficient scheduling system for dual-proc VMs than ESX 2.5.x. In general, it would make sense to validate VMware bashing with the host version, so we can understand if VI3 truly is the limitation or if it was just a bad experience with ESX 2.5.x. Even worse if a poster is bashing VMware because of bad performance using the hosted product VMware Server.

Xen working with PS is one thing, but how is the performance compared to VMware?

I am sure more Citrix customers will be experimenting with Xen but we need performance comparisons from a non-biased source.


The limitation with the scheduler and vSMP did not change. With 2 vCPUs you need to have 2 logical processors free in order to schedule a thread. Stupid if you ask me, but that is the limiting factor, and it applies to both versions. The reason people think it's better in VI3 is that the new hosts have 2- and 4-core CPUs, thus giving ESX more opportunities to schedule (to have 2 processors in a READY state).



I think it really comes down to your environment. I have an old Dell PE2600 with dual 2.4 GHz Xeons, 3 GB RAM, and ESX 3.0.2, with 4 VMs running on it, one being a PS4.5.

This PS VM server has 20 concurrent users and is working like a charm.

PS4.5 VM config: 1.2 GB RAM, 1 vCPU, and VMDK on a CX3-20 RAID 5 LUN.

On the other VMs I have a domain controller, a print server (all business app printing goes through it) and a McAfee ePO server.


Perf stats:

CPU constantly less than 50%, with some spikes during the backup time frame.

RAM: using 2 GB out of the 3 GB.

So again, for some, using TS or Citrix in a VM is very interesting. I have a copy of the VMDK and I can swap it in at any time, since nothing is saved on this server, as I have a separate file server (in a VM).

VMware has to intercept every OS call to the processor chip. Xen relies on virtualization-aware processor chips (Intel VT and AMD-V) so that Xen doesn't have to intercept any OS calls to the processor. This is why Xen is 1/20th the amount of code that VMware is, and why VMware has overhead that Xen doesn't, which hurts VMware's performance.

As I understand it, VMware suggests VI3 AND quad-core CPUs for Citrix PS. I do not know (no experience) if quad-core CPUs have the same problem (x2) as dual cores.



Why VM when you can Xen?
We recently upgraded our ESX servers from HS20 IBM blade servers with dual 3.0 GHz Xeon procs to the LS21 with 2 dual-core 2.0 GHz AMD Opterons. Another enhancement was upgrading from 2 Gb to 4 Gb Qlogic Fibre Channel cards to our SAN. We are running MetaFrame XPe on both platforms using the best practices mentioned, but there is a huge difference in performance between the newer servers and the older ones. So I would say the dual-core procs helped in some way. Another thing I wanted to share: we are prepping to go live with ArcGIS applications on Presentation Server 4.5 64-bit on a 64-bit dual-proc VM guest, and it is smoking fast compared to running the 32-bit single-proc VM on our Intel-based blades! Long ago I tried using SMP on Citrix VMs, but, as mentioned in the article, more overhead occurred and performance degraded. Not so with the 64-bit VMs and PS 4.5 Citrix. Our testing showed CPU ready times increased a little (not much), but overall feedback from our testing was that performance was much faster when processing maps inside the GIS application (which, by the way, is a 32-bit application).

Anyone else using / testing 64-bit yet within virtual machines and Citrix / Terminal Services?

Bit of a vague statement; any reasons why? Pros & cons? Have you had experience using both?


We have a client with 3 VMware 2.0 servers hosting over 900 Citrix users on a poorly designed SAN (RAID 5), quad dual-core CPUs, and performance rocks: over 30-40 users per server, no problem.

Now with VI 3.0 we are pushing the boundary even more. We have a different customer with a hard limit of 22-23 users per physical server, no matter how many cores. With VI3 on the same server, but with a RAM upgrade, we are able to 6x the number of users per physical server, i.e. 6 VMs at 21-23 users per VM CTX server. There are places this works, but it needs to be tested, tested, tested for full capability. With today's new procs I can envision much more density.

ALSO, as a note, rule of thumb: overload the ESX server with NICs! In my analysis, with Citrix VMs, the number of packets flowing from an ESX server climbs way past 5000 packets per second. I haven't run into issues with user limitations, except when I underprovision the servers' network connectivity. Then it starts to suffer quickly. NICs seem to have been the culprit in most of my installs.

That's not true... crunch the numbers. I've done the math, and I can save not only money but server space as well. True, I may not get the same user count per server (the so-called ESX tax), but I'll definitely save more money, space and heat using VI3.

Your statement is only correct for paravirtualized clients, i.e. modified Linux guests. For OSes such as Windows, the performance hit is much the same as with ESX, but with the added issue of a host OS together with the XenSource layer on top. Therefore performance will most likely be worse when running Citrix on a Xen server as opposed to Citrix on a VMware ESX server; it is much more akin to Virtual Server running on a Linux host.

There is a difference between Published Applications and Remote Desktop. We were using a complete Citrix environment (Remote Desktop) with Citrix 4.5 and then evaluated ESX 3.0 together with Citrix. We found Citrix works well in an ESX environment, with slightly fewer users per box than a hardware solution (to be expected). ESX excels as a backup/DR solution or for sandboxing / underutilized servers; however, if your Citrix environment is already at its limit, ESX will not be a solution. In the end we went for a hybrid: physical Citrix servers with backup on the ESX farm, several underutilized servers also on the ESX farm, and then we virtualized everything else: DCs, mail, DBs, file servers, etc. This proved to be the best solution, reducing the number of physical machines but not performance. During a disaster recovery we would have less performance for sure, but it is always available. Hope to have been of assistance in this complex question.


XenServer (and Xen) doesn't have a host OS -- it is every bit as much a bare-metal hypervisor as ESX. It does its I/O via a fast memory-mapped channel to the control domain instead of a fast memory-mapped channel to a vmkernel -- but the path is the same length, and the architecture is just as bare-metal.

(And on the other point: use of hardware assist for Windows is NOT like VMware's emulation. Hardware assist is actually much more like paravirtualization in its simplicity -- the only reason it does not yet show speedups over VMware's emulation is that the hardware-assist cycle counts are high. As each generation of new silicon brings reduced cycle counts, the advantages of the approach should rival those of paravirtualization.)


So does that mean you can run Xen without Linux? That's amazing! Where do the device drivers come from? Open source? Is there a certification programme for Xen drivers? Does a driver certified for Citrix XenServer mean it's certified for Novell Xen, Oracle VM (Xen), Red Hat (Xen), Virtual Iron (Xen)? Does the same VM work across all these Xen variants without change? If I need to change the VM, what else is different between all these variants? Are all the commercial Xen distributions handing back their IP to make Xen better? Or have they forked Xen to compete against each other and retain a competitive advantage?

The fast memory-mapped channel to the control domain sounds great -- what is the control domain? If Xen is bare metal, where does the control domain sit? Could it possibly be that the control domain is actually Linux, and therefore all I/O is going through this domain and through the Linux device drivers installed there? If Linux is installed and it's loading the device drivers to talk to the underlying hardware, then surely Xen is sitting on top of Linux? Hmmm, I'm now confused... Seems the architecture isn't quite the same.

If it's not the same, then maybe there are some performance, reliability and security benefits to certifying drivers for virtualisation, removing the extraneous code, optimizing them for a virtual world, putting them directly in the kernel, and putting that kernel on the bare metal without another operating system to help it along. Getting rid of that management VM (the Service Console) was the best thing VMware did; I got fed up with all those Linux security patches we had to deploy. I wonder if the control domain Xen has will have to be patched as often? I guess only time will tell. I wonder if Microsoft adopted a similar model for Hyper-V... oh that's right, they did... hmmm, their control domain must be huge; that's going to be fun on Patch Tuesday.

We are contemplating using a Citrix front end (we're a Citrix TS shop) and a VMware 3.5 back end (we virtualize servers on ESX). Has anyone done this? We want to leverage the ICA protocol and keep the VMware agility with HA, DRS, VMotion, etc.