Benchmarking Qumranet (KVM) vs. Citrix XenDesktop vs. VMware ESX for VDI. Help us design the tests.

Qumranet made a big splash at BriForum last month. Those of you who were there saw a demo of their "spice" remote display protocol, which showed a four-monitor Windows XP desktop running remotely with full motion high def video, skype, IE... the works. Spice is one of the most amazing things I've seen in this industry in a long time. It's 100% software-based and available now as part of Qumranet's "Solid ICE" VDI product.

If you've never seen spice, here's a short little video I shot of it when I visited Qumranet's office a few months ago:

Make no mistake: Spice takes bandwidth. A lot of bandwidth. Just how much bandwidth depends on how many screens you have and what you're doing. But obviously if you're watching full-motion high-def video that in its compressed codec state is a few gigabytes in size, this isn't quite going to work over a 20Kbps connection. Office apps could easily take "normal" amounts of bandwidth, but with four monitors, high-def video, Skype, and the works, we could easily get Spice up over 100Mbps.

(Perhaps this is a conversation for another day, but personally I'm fine with this. Spice is a LAN solution for environments that need more than what ICA or RDP can do. And if you need this, then you understand that bandwidth is important too. Of course if you go down this route, you probably already have 100 meg or gigabit switched ethernet to your desktops. And finally, yes, spice really is a LAN-only protocol. In fact if you make a Solid ICE connection over a WAN, Qumranet thinks, "well, since you have a WAN, you are already valuing remote access more than performance," and they just drop down to using vanilla RDP.)

But this article is not about Spice. The point is that in addition to using the Spice protocol, Qumranet's Solid ICE product uses the "KVM" hypervisor instead of Xen or VMware or Hyper-V. (Remember last week's little conversation about KVM?) Qumranet feels that KVM is a better hypervisor for VDI environments than anything else on the market, and they've run some tests that they feel prove it.

The problem is that as a vendor, people won't really trust Qumranet's benchmarking results, since obviously Qumranet would have a lot to gain if KVM is more efficient than something else.

Therefore, Qumranet has hired Gabe and me to conduct performance tests of their product and to compare it to other leading VDI products. Gabe and I will be doing that work next week, and we're spending this week putting together our test scripts and plans.

We've published performance results in the past, and inevitably someone posts a comment like "Your results are crap because you didn't do xxxx." Or "why didn't you configure xxx option."

So this time, we're turning this model around. We're inviting the entire community to review our plans, and we hope to address any perceived problems ahead of time.

Background information about this performance test project

First, just to make sure there is no gray area, Qumranet is paying us to conduct this test and to publish the results.

Second, we are going to test Qumranet's Solid ICE product, Citrix XenDesktop (using XenServer as the VDI host), and VMware's VDI solution.

Since Qumranet is paying for the test, they will obviously provide an engineer or two for us during the testing who can answer any questions that might come up. But again, we want to make this test as fair as possible, so we contacted Citrix and VMware and let them know what we're up to. Both Citrix and VMware have agreed to make engineers available to us during the testing as well.

Also, many of you are probably aware that VMware's EULA does not permit public disclosure of benchmark or performance testing without prior approval. Part of me wants to say "F You" to that and publish our results anyway, but part of me wants to do the "right" thing and try to get pre-approval from VMware.

I talked to my contacts over there, and it turns out that this whole pre-approval thing is pretty easy. Basically we told them what we were doing and why, talked about our scripts and our methods and stuff, and agreed to provide them with the full results, and they were cool with it.

And for us, this is something we wanted to do anyway, since we want Citrix, VMware, and the whole community to view our tests as "valid."

That said, let's take a look at what we're planning on doing.

Our testing methodology

For this project, we're testing the efficiency of the hypervisor when it runs VDI loads. VMware has their VMmark standard benchmark test suite, but unfortunately that is for server workloads only and does not include VDI use cases. We'd also love to use Login Consultants' LoginVSI suite, but that's still in beta and only available for TS environments currently. (Plus there is some work they need to do on randomization which I'll talk about later.)

The bottom line is that we're pretty much on our own as far as building the test environment goes.

Our fundamental idea is that we'll do this in a way that's more-or-less similar to the way we do terminal server tests. We'll write an AutoIT script, run it in a whole bunch of Windows XP VMs, and then just see how many VMs we can throw on a box before the performance gets too bad.

The only real "catch" here is that we really want to simulate the "randomness" of real-world desktop users. Today's hypervisors do a really great job of caching and memory sharing and all kinds of things, so if you have 50 users in 50 VMs all running the exact same script, your lab tests will show user densities a lot higher than what you can get in the real world. So we want to write scripts that have different users doing different things.

We feel the easiest way to do this is to create small activity "modules" which we can re-use and recombine to create our user scripts. We want to create maybe 50 or 100 different modules. Some modules will be simple, like opening Notepad, jotting down some notes, saving the file, and closing Notepad. Some will be more complex: loading large Word docs, find and replace, spell check, embedding an OLE Excel chart, etc.

Then once we have all of our modules built, we can just run them all in random order in each VM.

Now of course we need to be able to run the exact same tests on all three platforms, so we can't randomize at run time. Instead, we need to "pre-randomize" our scripts so that we can run the same script modules in the same order every time we run the test. Ultimately we're planning on creating a script for each user with a random selection of modules. Then we can run these scripts in the various VMs and get our random experience, but still have the same randomness from test-to-test.
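To make the "pre-randomized" idea concrete, here's a rough sketch of the kind of "script to create the scripts" generator we have in mind. It's in Python with made-up module names (the real scripts will be AutoIT, and we haven't finalized the module list), so treat this as an illustration, not our actual tooling. The key point is the fixed seed: the ordering and reaction speed are random once, then identical on every platform and every run.

```python
import random

# Hypothetical module catalog -- placeholder names, not the real modules.
MODULES = [f"module_{i:02d}" for i in range(50)]

def build_user_scripts(num_users=100, modules_per_user=20, seed=42):
    """Pre-randomize one module sequence (and reaction delay) per user.

    A fixed seed means every test run -- on every hypervisor -- replays
    the exact same 'random' ordering, so results stay comparable.
    """
    rng = random.Random(seed)
    scripts = {}
    for user in range(num_users):
        scripts[f"user{user:03d}"] = {
            # each user gets a unique random ordering of modules
            "modules": rng.sample(MODULES, modules_per_user),
            # per-user reaction delay, 1/4 second to 3 seconds
            "reaction_ms": rng.randint(250, 3000),
        }
    return scripts
```

The generator's output would then be translated into one AutoIT script per user, giving "the same randomness from test-to-test."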

We're also planning on randomizing the reaction speed of the users. AutoIT allows us to set global variables that specify how fast a user performs their actions after a screen pops up. We'll randomly configure each script for something in the 1/4 second to 3 second range.

As our scripts run, we'll have them dump out the run times of each module to a CSV log file. When we're done we should have a huge log file with thousands and thousands of module run times (as well as the corresponding data like what the user's reaction time was as well as how many VMs were running on that box.)

What we will make public

Since Qumranet is paying for this project, they want us to summarize our results and write a few pages and make some nice charts about our findings. However, we think there is a lot of value for the community at large, so we are also committed to releasing all of our test scripts and raw data, including:

  • All the individual activity modules
  • Our "script to create the scripts" that will create our 100 (or whatever) "pre-randomized" scripts
  • The 100 individual user scripts
  • A complete description of our Windows XP setup so that you can get these scripts running in your environment
  • A full dump (Excel, CSV, SQL? dunno yet) of our raw results, with every module for every user and the timing for all three platforms.

We understand that we're not trying to create a new industry benchmark. Instead, we want to say "Here are some tests that we've done. Here's how we did them. Here are the results. Here's what you need to know to run similar tests yourself."

A few other notes about this Qumranet test

There are a few other specifics of this project that are probably worth talking about.

First, we're going to run these tests on "normal" hardware. Probably something like an 8-core server with 16GB RAM. This test is not about building the biggest server in the world to serve the most VMs. It's about seeing how the various hypervisors perform on normal servers with real-world workloads.

Second, we're testing the hypervisors, not the protocols. A complete "solution" test would require analyzing the performance of the screens across the network and all sorts of things. In our case, we're just going to run the tests and time them from the server. (By the way, we're not even really concerned with CPU or memory utilization. We just want to run these modules and see how the overall "slowness" of the server is affected.)

Questions we have

So now that you understand our plan, what do you think? Specifically,

  • Do you think these tests are "valid?" If not, why not? What should we do differently?
  • What "activity modules" should we build? What apps should we install / simulate?
  • Is there anything special that we should do when we install each of the three products?
  • Is there anything else we're not thinking about?

Since we're running these tests next week, we really hope to be able to get the results published the week after that. I'm sure that's where the "real" comments will be made, but I'd like to "pre-address" anyone's concerns now.



Good luck to you.

Companies will put weight on results depending on their own needs and goals. Some focus on graphical performance, some on WAN access, some on manageability...

It would probably be good to get a series of groups like:

  • Manageability
  • Performance : user point of view
  • Performance : administrator point of view
  • Resources (server, WAN, ...)
  • Cost

for which everybody can then assign their own value... 


Yeah I agree that this is the kind of stuff that's needed to make a decision about which VDI solution to buy.

I want to be very clear on the fact that we are not trying to do all these tests. We are not trying to recommend which VDI solution someone should buy. All we are doing is looking at how the Windows XP VMs perform on various hypervisors with a given set of hardware.



Congrats on having the guts to try and pull a test like this off. I believe this is the first time an independent, albeit vendor-sponsored, 3-way test has been performed with the specific intention of documenting VDI performance. I remember seeing Ron Oglesby's VMware vs. Citrix presentation from BriForum 2006, but there have been significant improvements to both products since then.

A few questions/comments come to mind:

- What are you going to be using for backend storage, i.e., FC or iSCSI?

- Will any of the applications be virtualized (i.e., Thinstalled), or will each VM have a local copy of the apps?

- Will all VMs have their own VMDK file or equivalent?

- My biggest concern is how you are treating the protocol. You've commented numerous times on the performance difference between RDP and ICA. The very nature of the implementation requires a user to access the desktop via some remote protocol, and in turn the protocol selection greatly affects the user's perception of performance. For example, given two VMs with the exact same build running under VMware, a user accessing the VM via ICA would most likely have a more responsive machine than the same user accessing it via RDP.

This is great, but I don't see Virtual Iron in the test load, nor do I see Provision Networks' VDI solution with their multimedia redirection software. Since what you are attempting is a mammoth task, and what you are looking for is real-world stuff minus the marketing waffle, I suggest broadening the basis and also including the connection brokers and offerings in general. VDI is still a niche, but it would be good to see which vendor is the better option, or is ahead, at this point in time. I am confident that if you ask VI and PN to join, they will be more than happy to. For me and my company, we do consulting, and it would be a matter of having an independent view on findings instead of trying to convince ourselves that VI and PN are the best option for what we are trying to achieve.


For the XP installation:

Will/could you try with the VM guest swap file (pagefile.sys) on local disk (not SAN), on the SAN within the VMDK, and with no swap file at all?


In one of the modules, force a kernel panic or something similar to make the machine BSOD and reboot, to see how long the dump/reboot takes on a heavily loaded system.

yves deglain 


Hey Brian,

The people I talk to about implementing VDI solutions seem mostly concerned with the resource requirements of "heavy" and graphical applications, for the hypervisor, client, and network alike.

It could be interesting to see those workloads, with the measured effect on both hardware and network.



Forgot to mention: since the test is finally about hypervisor performance under VDI workloads, why don't you test PN Virtual Access Suite VDI on Hyper-V, and Sun VDI?

Might be interesting to also include everyday overhead and management tasks, such as user logon/logoff, O/S upgrading, virus checking, etc.

I think you need to consider when your test results become significant. Let's say you create 10 sets of pre-randomized scripts and let them run on the different platforms. You combine the results and summarize. Is this significant, or do you need to re-run the sets, say 3 times on all platforms, first compare the results against a standard deviation you previously defined, and then combine and summarize the results? Or is running one set of 100 pre-randomized tests significant?
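A consistency check like the one described above could be sketched as follows: run the same pre-randomized set several times, compute one average module run time per run, and compare the spread against a pre-defined bound before combining the results. The 5% relative standard deviation used here is an arbitrary example threshold, not a statistical recommendation.

```python
from statistics import mean, stdev

def runs_are_consistent(run_means, max_rel_stdev=0.05):
    """Check repeated test runs against a pre-defined deviation bound.

    run_means: one average module run time per full test run.
    Returns True when the relative standard deviation (stdev / mean)
    stays under the threshold -- 5% here, purely as an example.
    """
    m = mean(run_means)
    return stdev(run_means) / m <= max_rel_stdev
```

If repeated runs fail the check, that's a signal to re-run the set (or investigate the platform) before summarizing.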

Am looking forward to reading about the test setup and results.




Here is another "Please, please, include me!"

 But if you're comparing the integrated VDI solutions (XenDesktop on XenServer, VDM on ESX, Qumranet on KVM), then I would also be interested in how VAS from Provision Networks in combination with Virtuozzo would compete. Although they are not one vendor (yet), they did make a deal to offer their solutions in a bundle. I would bet that, in relation to price, its performance would come out best.


I would assume you have limited time and money to run these tests, so for everyone asking you to add more to your plate is a bit unreasonable.

I think you should just use the basic Office applications, Acrobat Reader, IE, etc.  There is no way you are going to satisfy everyone, although it was nice that you asked for input.  If you can provide some generic data that everyone can use, that would be great. 

If you want to add a heavy graphical user into the mix, since this will be a resource hog in VDI (all rendering done in software), you could just use a 3D PDF file; that way you don't need to add another application to your list.  You can get some sample files here:  I would suggest using the "Flyover of a digital terrain model" since it is automated.

 Just some thoughts, thanks and good luck.


Consider the kind of results I used to show in those BriForum "Perceived Performance" sessions.  For your test, I would expect to see something like "time to complete" versus the number of loads on the physical hardware.

Having two different kinds of loading scripts would also probably be more fair (and enlightening).  One set that would emphasize CPU performance, and another that would emphasize IO performance (not sure if I mean file or network?).


Hi Brian, I would also second this - I'm seeing lots of interest in VDI from Education but by their very nature they are also a heavy user of Graphics and Video - I'm sure this would also be the same in the US and EMEA?

Good Luck

Interesting ideas definitely... However, Virtual Iron runs on Xen, and the Provision stuff is protocol-related, and we're testing the hypervisors and not the protocols (in this case). However, since we'll make our whole test scripts available, maybe you can run some additional tests and share your results too?
Yeah, Virtuozzo would be interesting, as would Terminal Server. To be honest it's going to come down to time. We won't have a huge amount of time to test more than three setups, although maybe that could be something that someone else from the community could do?
AWESOME!! These are really cool. Nice idea.

I would keep the portion that makes VDI what it is, that being the display remoting.  I'm not speaking to user-experience surveys, but the impact to the guests and thus hypervisor that including a display can have, as it will directly impact your peak load and your scaling, is desktop related, and also specific enough where you could get closer to apples-to-apples.

For example, if one of Qumranet's capabilities is the ability to offload processing to a client, and VDM 2.1 and/or Wyse TCX can offload some processing for some codecs to the client and outside a channel, that will directly impact your scalability and peak load.

To me, these would be some items that would maintain a VDI slant.  If it's truly "hypervisor-only", but with more "typically desktop" application loads, that's fine -- but ends up with too many what if's on application types, etc.

Even with a medium load, and running several scenarios for which the desktop display factor is a performance impact, it's easy enough to see how much this can vary your scaling.  But on a regular PC, run a good-quality video stream from a flash-based player, and watch your CPU sit at 20-50%.  Do it on a somewhat constrained VM, and watch it sit at 95-100%.  Now have a bunch of people do that at around the same time.  (ever see a Corp Comm. link to a video, or a ton of people look at the same news item at the same time?)

(BTW, if anyone out there is able to offload to a client today, for streaming media in Flash, please speak loudly and clearly, I'm listening.)


Hi Brian,

with reference to how many VMs you can throw at it, would you consider using XenDesktop with Provisioning Server, so that you can have a single "golden" image streamed to multiple "diskless" virtual machines?

what type of SAN storage would you be using?

would you be simulating load balancing/fail over or DR (eg. VMotion, XenMotion etc)

personally I think the end user device would also make a difference to these tests, e.g. thin client devices (HP, IGEL, Wyse, running Linux OS or XPe), as well as the protocol (SPICE, RDP & ICA would have differences supporting things like USB devices, etc.). 

But I understand that you are testing just the load on the Hypervisors at this stage :) Anyway, can't wait to see the outcome...Citrix rocks!!! (I'm biased..sorry!)





This is awesome... I've been hounded by QN sales guys since I met them at BriForum 2007, to the point where I mentioned it to Navin at this year's BriForum.  I think they have a really interesting product with a lot of unanswered questions on performance.  I was told by their sales guy that they don't do "demos," yet they were handing out USB sticks with demos at BriForum.  I already have a bad first impression just based on the sales guy I've dealt with, and my biggest issue is that I don't know anyone who's using or has even piloted their product - this is great!!!

One of the things that I was told about QN is that they have a feature similar to linked clones in VMware Workstation, where you can have a VM template, clone all your other VMs from it, and only store delta changes to reduce storage requirements. I'd be interested to see how well this really stacks up.

A few users have already mentioned storage, so I'll contribute to that as well.  Being that it's Linux-based, I'm assuming the storage of choice would be NFS.  A lot of people have had problems getting iSCSI to work with Xen, so I'd be interested in how easy (or hard) it is to get iSCSI working.

Lastly, since it is Linux, seeing how the deployment of QN goes would be interesting, especially from a Linux-fearing Windows administrator's point of view. Personally I like Linux, but there are quite a few people out there who have a gripe with ESX because they think they need to know Linux to install and maintain it. 

Would be good to understand the end user experience, i.e., bitmaps redrawing quickly, GDI-heavy apps to see how they hold up. If x64 can be tested, it would be good to understand the difference there as well.

First, suggestions for tests. What about "content creation," i.e., the Adobe suite(s) or equivalent? I presume CAD is irrelevant on grounds of accelerated 3D performance, but maybe some light-to-medium 2D work from AutoCAD LT? We have a lot of such usage on Citrix and TS today.

And maybe some typical finance-suite-with-SQL-backend. I don't know what would be more relevant in the US - SAP, or one of the MS Dynamics? 

Then, even if it's probably implicit: I hope you'll publish grouped results: "MS Office," "Task workers" (Office + other lightweight stuff), "Graphics intensive," whatever.


Hi Brian

I was just wondering if one of the metrics you would record would be logon time, especially if you planned to use standard roaming profiles - I think it's a critical measure of how the performance feels to the users, especially in a hotdesking environment... 

Also, perhaps it would be relatively small effort for you to include a test of browsing a single webpage (like or, something with a few pictures?




Agree with this, and here are a few notes about potential flaws in your 'server usage' only tests of 3 different hypervisors.

One thing that you are missing is the fact that you are inadvertently testing the VDI protocol. Not from the perspective of network bandwidth and user experience at the client, but from the perspective of the protocol CODEC encoding load on the server itself. Maybe this is a different series of results based on the same tests, for another discussion, but I don't know how you are going to divorce the network bandwidth and client user experience from the equation. The only thing you can do is try to keep them constant.

So for this set of tests: ensure the network bandwidth is fixed (limited to some maximum value), say GE or 100 Mbps per server. Ensure the user experience of the average user is similar for each of the 3 solutions, a much harder metric to quantify.

For the user experience, since you are probably not using the same protocol for all 3 (it will probably be SPICE vs. ICA vs. RDP), I believe you could come up with some sort of scale (1-5, where 1 is unusable and 5 is no difference from a desktop) and then have a panel of judges (2 to 3) from which you get an average score (works for the Olympics). You don't necessarily have to do this for the entire suite of tests; do some spot tests and then interpolate from there. From there you have the choice of biasing the server loading results based on user experience, or just posting which one did better and letting the reader decide what is more important.

As for offloading to a client - the Teradici solution doesn't offload to the client, but it does offload the video/audio/USB from the CPU to hardware that sits in the host.


This is a great project, congrats on pulling this off.

Since there is much talk about multimedia user experience, I think you definitely should include some VoIP stuff in there. And since I'm the first to bring this up <g> may I shamelessly suggest Office Communicator 2007?

Oh, and before somebody mentions Skype: yes, it's easy to install, but no, it's not an enterprise-class application, regardless of Qumranet showing it in their demo. And it's the enterprise where they want to push their solution, right?

OC with two-way audio and video is where the beef is for me. Although I understand the difficulties of casting this into a physical benchmark setup.

Just my $0.02,



By using only 16GB, this test is going to give the impression to the customers out there that you can only get a limited number of users on a box. No customer would deploy an 8-core box with only 16GB for VDI. Can you bump that up?


All the input is great, but I think people are forgetting that this is a structured test that needs a controlled set of variables and definitions around what is being measured and how it is being measured.  What some people are suggesting is more like an extended performance analysis involving several use cases and more variables.

Brian, I would keep it simple and straightforward, just as you have outlined so far.  The more the test tries to be all things to all people, the less valuable it will be.  Make this test about the hypervisors and save the other stuff for another test.



Any chance on testing Vista?


Hi Brian,

Definitely something I look forward to, as someone who tried to run simple performance tests to compare Xen and KVM (results are publicly available at

My advice would be to not try to create a reference setup mimicking a typical VDI production environment in a large business, but instead to keep the setting simple and easily reproducible with no need for over-expensive hardware (i.e., no SAN, just plain local drives; an easily available brand/model of server; basic network equipment, etc.), since you plan on providing the scripts and tools you will use.

This will allow third parties such as universities to easily recreate and validate your results, should the need or interest arise, without massive hardware investment. That's probably the best way to guarantee that the tests are truly independent and reproducible.




Not being particularly technical, I don't understand a lot of this so I'm going to hazard a very-ill-educated guess at the ranking. Don't put any money on this though...

1. Citrix
2. Qumranet
3. VMware

Reasons? I'm hoping the slick, slimmed-down nature of the open source hypervisors as well as the basic way they virtualise servers (paravirtualisation vs. emulation etc.) will play to Citrix and KVM's advantage.

Please don't slate me too much if I'm completely wrong.

iSCSI on Xen is pretty easy.
Why add the randomization component?  If you run enough samples (which you will), the random factor is removed anyway.  Statistics 101.  There is no point in adding additional variables to a test that are not necessary, and any variables you do add should be controlled, not random.

Assuming the hardware will all be the same for each solution, can you incorporate the licensing costs of each solution and provide that as one metric in the performance comparison? Great to know I can run 5% more VMs on solution X, but if X costs 50% more than Y, why not buy Y?

Also, would be interesting to see XenDesktop on Hyper-V. Let's see how well Microsoft's new hypervisor stacks up against the rest of the competition.

You mean Provision Networks on Hyper-V since it offers meaningful integration!

Brian - I am curious as to how you are going to tackle the differing feature sets offered by the three products. For example, a complete XenDesktop installation would utilize Provisioning Server for OS streaming and XenApp for application virtualization, while VMware would use VM cloning and ThinApp technologies. I am not as familiar with Qumranet, but I do not think it has a similar technology (I could be mistaken). This could have a big effect on the number of concurrent VDI VMs you are capable of running, as the application workload would change drastically. Performance information such as boot time of a new VDI image, application launch time, and the overall impact on system memory and required space of making administrative changes would most definitely be affected by those included features.


But on the other hand, it dilutes the validity of the tests concerning pure hypervisor performance, because you are no longer comparing apples to apples. Of course, without those features enabled you are potentially crippling a feature-rich product to put it on par with a lesser-featured product. XenDesktop, for example, allows the provisioning of a VDI image to a blade PC as well as a VM. That would most definitely skew performance information but ignore the hypervisor, which is the focus of your study. I guess an analogy would be comparing OpenOffice to Microsoft Office and leaving out mention of any product or feature not available in OpenOffice. Leave them out and you get a good comparison between the matching products; put them in and you see some extra value added by the commercial product.


Aside from that, product footprint would be nice to compare: how many pieces and parts are involved in each solution, and what the anticipated cost to deploy them is, including storage space.


Here you again, making it sound like you've invented "perceived performance" and brought it to this community.


That would only be an issue if they are installed. I suspect for these tests it would be vanilla XP, so no ICA or RDP at all. This is the only fair way to test just the hypervisor, and a good point.


David Lusty


I have been testing virtual desktop solutions for about 1-2 months. We have had a bad virus come through our office and do some weird damage, and we also have workstations that seem to keep getting slower and slower. I proposed to my boss: what do you think about getting one server and hosting all employee desktops on it? That way we can lock them down, and the performance would not degrade over time like a normal PC workstation does.

We use a lot of Office and CAD packages. With that in mind, I tested XenDesktop on a XenServer, Panologic with VMware, and Qumranet Solid ICE.

XenDesktop and XenServer, to me, are the easiest to install and work with for managing desktops, updates, etc. The other thing which made it look good was that I could use old PCs with a PXE boot and load up the desktop. The user would never know that the desktop isn't local. As machines die, I just buy new cheap PCs, PXE boot, and off you go! The bad side: it sucks with graphics and multimedia. Other than that, Office and web surfing are great.

Next was Panologic, which was hard to set up even with an engineer on the phone. We did get it up and running. My boss liked the fact that it was more green: the user would have a little box to turn on the system. Office was good and so was surfing. CAD sucked; patchy with the screen. Provisioning the desktops from the server was OK, but Xen Provisioning Server seemed easier and more on the fly. They did say that in a few months they were going to have their own protocol and give an experience similar to, or better than, what I can compare to SPICE.

I think SPICE was great! I think it was easy to provision new desktops; the hardest part was installing the Solid ICE KVM, but I managed through it. The video was wow, and so was the CAD. CAD was good. However, when you would zoom into a spot, it would lock up a little. And then, lol, it would crash the CAD session. But it was the closest thing to a local user experience.

Now I am looking at something called Teradici. The one thing I do not know: do I need to have a Blade PC to use this? I hope I can stream it out like Panologic. lol.

For me, I am wondering if I should just wait a year and to see if there if a better stronger solution. What do you guys think?


At this time Teradici does require a Blade PC / Remote Workstation. It also requires a Teradici enabled client device. Unless your environment is very simple you will also need a Teradici compatible connection broker. Ericom's PowerTerm WebConnect supports Teradici (we demoed this at BriForum). Other connection brokers that announced support for Teradici are Leostream and VDIworks.
I second this request... XP, yawn.

Maybe not an issue, but just to make sure.

Brian, have the vendors made any requests as to which hardware should be used in these tests?
I don't know if there is any data available on this subject, but is it possible that a solution from one vendor runs better on hardware A than on hardware B?
If so, you might see different results when you run the tests on a Dell server and then try the same tests on an HP server.
So in theory it would be possible to see Citrix victorious on an HP server, and VMware on a Dell server.

Just some food for thought...



...benchmarking developer apps (i.e. visual studio, .net, etc). i'd like to know how each of the hypervisors perform when using some developer apps. i also would like to know the difference in IOPS between using raw disk mappings vs the vendor's native file format (i.e. vmfs, etc.). i've noticed that sql calls perform better when used over raw disk mappings vs vmfs.

are there going to be any benchmark tests using Outlook (specifically creating PSTs)? Or maybe client-server SQL-based apps (or perhaps this could be included in extended testing)?

just don't forget to open random YouTube videos as one of the modules.
This pastime is now too popular in any office to be ignored!
Anything happening with this yet?  Really excited about the results!