TechEd Notes: Microsoft expects 40% increase in VDI User Density using Dynamic Memory & Hyper-V

At TechEd 2010, Microsoft has been busy talking about things like Azure and Office365, SaaS and PaaS, and all things cloud. But in the keynote yesterday, there was a smattering of desktop stuff. Nothing earth-shattering, unless you count a 40% increase in user density as earth-shattering.

Speaking in front of stage mockups of some of Europe's most famous buildings, Microsoft's Brad Anderson stated just that during the opening keynote. A tall order, indeed! He went on to assert that "with this 40% improvement…we will have the highest density of VDI sessions in the market."


First, let's address the "40% improvement" statement. At first glance, this is pretty ambiguous. My initial response was something like "and you expect us to just believe that?" Add to it the fact that XenDesktop might have been involved, according to this post by Barry Flanagan at Citrix, and it becomes more confusing. It turns out XenDesktop was involved after the initial testing, but it appears to have been more about Provisioning Services than anything else.

Turns out Microsoft, specifically Michael Kleef and his team, did a fair job of documenting the process they used to arrive at that number. Using Login VSI, they ran the same tests on both HP and Dell hardware. The Dell setup consisted of 16 M610 blades, each with dual hex-core Westmere processors, 96GB of RAM, and two local 500GB SAS drives. The blade chassis was also connected to a pair of EqualLogic SANs, one with 16 SAS drives and the other with 8 SSD drives. You can check the blog post to see more about the hardware used, but the bottom line is that it's all recent and based on the Dell Reference Architecture for virtual desktops. Even if you don't feel it's a "typical" hardware solution, the key thing here is that all the tests were run on the same hardware.

With this hardware scenario, they were able to run 85 Windows 7 VMs, each with a static 1GB of memory, without Dynamic Memory. After turning on Dynamic Memory and setting the startup RAM for each VM to 512MB, they were able to run 120 VMs with the same workload. Since Dynamic Memory can pull memory from a pool and allocate it to VMs as needed, each VM averaged about 700MB of memory.
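A quick sanity check of those numbers (all figures come straight from the post, so the per-VM average is approximate, not something I've measured):

```python
# Back-of-the-envelope check of the density numbers reported above.
# 85 static VMs, 120 Dynamic Memory VMs, 96GB per blade, ~700MB/VM
# are the figures from Microsoft's published test, not new data.

static_vms = 85          # Windows 7 VMs at a fixed 1GB each
dynamic_vms = 120        # VMs after enabling Dynamic Memory (512MB startup)
avg_mb_per_vm = 700      # rough average allocation per VM under load
host_ram_gb = 96         # RAM per Dell M610 blade

gain = (dynamic_vms - static_vms) / static_vms
print(f"Density gain: {gain:.0%}")   # ~41%, i.e. the quoted "40%"

guest_ram_gb = dynamic_vms * avg_mb_per_vm / 1024
print(f"Guest RAM in use: {guest_ram_gb:.0f}GB of {host_ram_gb}GB")
```

So the math holds up: 35 extra VMs on a base of 85 is a 41% bump, and 120 VMs averaging 700MB apiece still fits comfortably in a 96GB blade.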

So, it turns out that the 40% number is a viable one, arrived at using an industry-standard benchmarking tool. I'd love to see the exact same test done using ESX, though. It's the only way to prove the other big thing that Anderson said:

"We will have the highest density of VDI session in the market"

I don't even care if it's true--I love this.

The higher density of TS over VDI is one reason people aren't deploying VDI. Since hardware improvements benefit both TS and VDI environments, any density gains that VDI might make on TS have to come from the OS/VMM. If Microsoft is waving their arms and saying they have the highest density, you can bet VMware will follow suit and do what it takes to get back on top. A "density war" can only help in an industry that's been mostly fixated on remote protocols and OS support.

Still, as I mentioned before, I'd love to see the results of the same tests that Michael Kleef's team did, but with ESX as the hypervisor. It's the only way to tell if Microsoft is correct. In either case, a little one-upmanship is a good thing.

Other things from TechEd

First, the world is so cool. I'm not in Europe this week. I don't even know where TechEd is being held, but I was able to watch the keynote. Watching keynotes online isn't a new thing, but it's still cool.

Here's what I took away from the keynote:

  • Brad Anderson said it's going to be easy to take advantage of RemoteFX and Dynamic Memory since they will just be there when SP1 comes out. That may be the case with Dynamic Memory, but unless I get a $1000 GPU or two with every SP1 download, RFX will take a little more work.
  • Michael Kleef did a demo of Dynamic Memory and RemoteFX over XenDesktop. All he did was connect to a desktop using RFX and show an app demo. It didn't have to be with XenDesktop (they could've connected with the Remote Desktop Connection client to show off RFX), but it appears that the relationship between Microsoft and Citrix is pretty tight.
  • The demo didn't actually talk about Dynamic Memory. Kleef showed how the GPU handled most of the work, then showed an app that was meant to make you forget you were watching something that was happening remotely. It worked, but I sort of wish they would've shown Dynamic Memory working.
  • Last thought on the demo: Sorry, Quest. You've actually added enhancements to vWorkspace that make RFX a viable protocol away from the LAN, but Citrix got the attention during the keynote.
  • Windows Server 2008 R2 SP1 (which means RemoteFX and Dynamic Memory) is now slated for Q1 2011.
  • Anderson showed a slide indicating that Hyper-V's market share is growing at a faster pace than ESX's, but ESX's market share is still about double Hyper-V's.
  • The Windows Phone almost looks cool, I just have such a bad taste in my mouth from every other Windows mobile platform that I think I'll avoid it like the plague. Still, if someone sends me one, I'll play with it for as long as it seems interesting. If that actually happens, I'll be sure to share what I find.
  • Office365 looks useful, but are there impacts for VDI other than not having to include Office in a standard image or as a package?


Join the conversation



Hey Gabe, when you said,

"With this hardware scenario, they were able to run 85 Windows 7 VMs..."

Is that per blade server? Thanks!


@Lance Hundt

Yes it was. And with dynamic memory we took it to 120 VMs on one blade. Then we scaled it to 8 blades with XenDesktop at 960 VMs.

Hope you don't mind me responding, Gabe.


You can respond to my article about your testing anytime! If you feel the need to clarify anything, don't hesitate.


Ok then :) To clarify the Citrix involvement, it was more than just provisioning services. Citrix XenDesktop was involved end to end in the testing. We used the Citrix client to connect to the desktop delivery controller with provisioning services on the back end streaming the Windows 7 SP1 image. Hopefully that clarifies.


Ok, that's cool.

The way I see it, XenDesktop had no part in getting the 40% increase in users. It's not like test 1 was run without XD and test 2 was run with XD, correct?

The benefit of XD came afterwards - shouldering the load of the extra users with PVS, right?


This should bode very well for Quest vWorkspace. These guys have made a strategic bet on Microsoft Hyper-V as far back as 2-3 years ago, and it's about to start paying dividends. Now that Hyper-V is getting up to snuff, I suspect Microsoft will soon let the dog out by easing the many restrictions that have held back VDI.


Hyper-V R2 SP1 was primarily responsible for raising the ceiling on density from a memory perspective. XenDesktop and Provisioning Services provided smoother disk IO and rapid scaling to 960 users - bear in mind that disk IO is the most critical factor in achieving density.

So at scale, having XenDesktop there is pretty important - we probably couldn't have scaled this solution without it.


Brad Anderson is a SCCM whore. Hyper-V, RemoteFX, MDOP, App-V, etc. are all designed to lock you into a vertical stack. Same game as VMware. Bah, bah like sheep you all follow with your blindfolds on and ears covered in vendor BS. Why would one bet on a vertical stack in a world that offers so much diversity?


Hyper-V Dynamic Memory is a ballooning implementation.

ESX has ballooning, TPS, and memory compression.

It's only a matter of time until someone shows ESX can do way better than that.

Also, this was a controlled test environment. The hot-add memory piece, and the v1.0 ballooning implementation of Dynamic Memory, are likely to cause some trouble in real-world scenarios.