NVIDIA announces new virtualizable GPU to power high-end graphics VDI

Last November I wrote an article about NVIDIA's "Monterey" project where they were researching how they could use GPUs to enhance the VDI remoting experience. They talked about multiple goals, including (1) using GPUs to do super fast, high quality, hardware-based encoding of the remoting protocols for general desktop users, and (2) providing "real" GPUs to VDI virtual machines so users can use any app that requires a GPU.

Fast forward to this week at NVIDIA's GPU Technology Conference (that Jack attended)—NVIDIA announced the results of this effort, to be known as the "VGX" platform.

At its most basic level, VGX is two things:

  • A physical plug-in card for servers with a new GPU called "Kepler"
  • A hypervisor software component that will plug into Xen and vSphere, and (eventually) Hyper-V

NVIDIA ultimately believes this can help deliver a better remoting experience to all users, but initially they're targeting high-end workers like designers, for whom VDI was never an option in the past. They're not building their own protocol; rather, they're creating an H.264-based pixel stream that would be transmitted via HDX or PCoIP. (Citrix's Derek Thorslund blogged about how this will work with XenDesktop.)
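To make that architecture concrete, here's a minimal sketch of the pipeline in Python. (The class and method names are mine, just stand-ins for illustration; NVIDIA hasn't published an API for this. The point is simply that the frame is rendered and H.264-encoded on the GPU, and HDX or PCoIP just carries the resulting bytes to the client.)

```python
# Conceptual sketch only -- not NVIDIA's, Citrix's, or Teradici's actual API.
# Illustrates the pipeline described above: the GPU renders and H.264-encodes
# frames in hardware, and the existing remoting protocol just transports them.

class GpuEncoder:
    """Stand-in for the Kepler GPU's hardware H.264 encoder."""
    def render_and_encode(self, scene: str) -> bytes:
        # In reality this all happens on the GPU; here we just return
        # placeholder bytes standing in for an encoded frame.
        return f"h264<{scene}>".encode()

class ProtocolSession:
    """Stand-in for an HDX or PCoIP transport session."""
    def send(self, packet: bytes) -> None:
        print(f"transporting {len(packet)} encoded bytes to the client")

gpu, session = GpuEncoder(), ProtocolSession()
session.send(gpu.render_and_encode("desktop frame"))
```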

Kepler card for VDI

The VGX plug-in card (above) has four of the new Kepler GPUs, which lets them maximize the memory available for the frame buffer (currently their main performance limitation). The board consumes only 150W, compared to 225W for the Tesla. Each GPU has 32 work queues, which is how they can support up to 128 VMs per card. Previous GPUs had only a single queue, which is why only one VM could use the GPU at a time.
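If you want to see where that 128 number comes from, it's just the card's four GPUs times the 32 work queues on each:

```python
# The 128-VM ceiling falls straight out of the numbers above:
# four Kepler GPUs per VGX board, 32 work queues per GPU.
GPUS_PER_CARD = 4
QUEUES_PER_GPU = 32

max_vms_per_card = GPUS_PER_CARD * QUEUES_PER_GPU
print(max_vms_per_card)  # 128, i.e. one VM per work queue
```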

On the hypervisor side, NVIDIA is working with the hypervisor vendors to write a GPU component that will make a "real" GPU visible to each VM. NVIDIA will also supply the graphics drivers that run in the guest of each VM, much like how they provide the graphics drivers for Windows on physical hardware today. (This alone is a pretty cool thing, because today's HDX and PCoIP display drivers are written by small teams at Citrix, VMware, and Teradici. And while those teams have done a great job, NVIDIA has thousands of employees working on this.)

Another interesting thing about the GPU access from the VM is that you'll be able to load different types of drivers to do different things with the GPU. For example, a knowledge worker who mostly uses Office and web browsers doesn't need the same kind of GPU power as someone who's working in Photoshop all day. With the NVIDIA VGX card, those users will be able to sit side-by-side on the same VDI server, with the graphics drivers in the Photoshop user's VM getting access to a different "GPU personality" than the regular worker's. (I assume they'll be able to integrate with the connection broker to use this information for load balancing, etc.)
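Here's a hypothetical sketch of what per-VM personality assignment might look like. (The profile names and values are completely made up for illustration; NVIDIA hasn't shared how personalities will actually be defined or assigned.)

```python
# Hypothetical GPU "personality" profiles -- invented for illustration,
# not NVIDIA's actual profile names or parameters.
PERSONALITIES = {
    "knowledge_worker": {"framebuffer_mb": 128,  "3d_features": "basic"},
    "designer":         {"framebuffer_mb": 2048, "3d_features": "full"},
}

def personality_for(vm_workload: str) -> dict:
    # The in-guest driver would load the personality matched to the
    # VM's workload, so light and heavy users can share the same card.
    return PERSONALITIES[vm_workload]

print(personality_for("designer"))  # the Photoshop user's VM gets the big profile
```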

In terms of the number of users per card, they're not ready to share specifics, though we know the work queue limit caps it at 128 VMs per card. They're thinking they'll probably support about 100 users for regular knowledge worker VDI (which, again, is limited by the amount of memory on the card). For intense graphics designers, that might be more like 4-8 per card. (But again, it depends on the app, the number of displays, etc. Basically you have to consider that this card has four GPUs. How many Photoshop users do you want to put on a single GPU? Maybe only one? Two?)
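As a back-of-the-envelope check on those density figures (using the rough guesses above, not any published sizing guidance):

```python
# Rough density math from the figures quoted above. The designer numbers
# are my reading of "maybe only one? two?" per GPU, not official guidance.
GPUS_PER_CARD = 4

knowledge_workers_per_card = 100  # NVIDIA's rough, memory-bound estimate
designers_per_card = (1 * GPUS_PER_CARD, 2 * GPUS_PER_CARD)  # one or two per GPU

print(knowledge_workers_per_card)  # ~100 knowledge workers per card
print(designers_per_card)          # (4, 8), matching the 4-8 range above
```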

The bottom line with this VGX platform is that NVIDIA is looking to offload a bunch of work that's currently done on the CPU and in system memory. They mentioned again how fast everything is with the VGX hardware. (In fact, they said they can get the H.264-encoded pixels to the NIC faster than a typical GPU gets pixels to the DVI cable on a traditional desktop.) The Kepler GPU is built on a 28nm process and delivers twice the performance per watt of the previous generation of GPUs.

Everything NVIDIA told us about this looks cool except for one thing: in addition to buying the hardware, customers will also need a per-user, per-year license to use it, AND NVIDIA is creating a f*ing license server to manage it!?!?!

Seriously???

First, it's too bad that this is going to be another cost on top of everything else. But I guess that's what the free market is for. (So I'm upset about that part but I understand.) But for the license server? Oh man!! IT pros have never liked license servers. It just makes us feel like criminals until we "prove" we're innocent. But even worse is that license servers can be single points of failure. Remember the recent problem with Citrix's VDI-in-a-Box? Or when you couldn't reboot ESX servers for two days because VMware had some expired license key?

Seriously, I get why they want to charge per user per year. (Well, actually that sucks too, but I get it. They have dollar signs for eyes.) But the license server? Ugh!! #fail

Moving on, I guess all that happens now is we wait for the products to come out. NVIDIA said that Citrix would be first to market with this, followed by VMware. As for Microsoft, who knows? (Though they mentioned that Microsoft is excited about this; NVIDIA was initially worried Microsoft would see it as a competitor to RemoteFX.) The Kepler plug-in card will be available this year, and they should have beta versions of the hypervisor components late this year, with everything shipping in 2013.
