NVIDIA announces new virtualizable GPU to power high graphics VDI

Written on May 18, 2012

by Brian Madden

Last November I wrote an article about NVIDIA's "Monterey" project where they were researching how they could use GPUs to enhance the VDI remoting experience. They talked about multiple goals, including (1) using GPUs to do super fast, high quality, hardware-based encoding of the remoting protocols for general desktop users, and (2) providing "real" GPUs to VDI virtual machines so users can use any app that requires a GPU.

Fast-forward to this week's NVIDIA GPU Technology Conference (which Jack attended), where NVIDIA announced the results of this effort, now known as the "VGX" platform.

At its most basic level, VGX is two things:

  • A physical plug-in card for servers with a new GPU called "Kepler"
  • A hypervisor software component that will plug into Xen and vSphere, and (eventually) Hyper-V
NVIDIA ultimately believes this can help deliver a better remoting experience to all users, but initially they're targeting high-end workers such as designers, for whom VDI was never an option in the past. They're not going to build their own protocol; rather, they're creating an H.264-based pixel stream that would be transmitted via HDX or PCoIP. (Citrix's Derek Thorslund blogged about how this will work with XenDesktop.)

Kepler card for VDI

The VGX plug-in card (above) has four of the new Kepler GPUs, which lets NVIDIA maximize the memory available for the frame buffer (currently their main performance limitation). The board consumes only 150W (compared to 225W for the Tesla). Each GPU has 32 work queues, which is how one card can support up to 128 VMs; previous GPUs had only a single queue, which is why only one VM could use the GPU at a time.
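
If you want to play with the math behind that ceiling, here's a quick back-of-the-envelope sketch in Python. The GPU and queue counts are what NVIDIA stated; the card memory figure is purely a placeholder of mine, since NVIDIA hasn't published final specs:

```python
# Back-of-the-envelope math for the VGX card's VM ceiling.
GPUS_PER_CARD = 4      # Kepler GPUs on one VGX board (per NVIDIA)
QUEUES_PER_GPU = 32    # hardware work queues per GPU (per NVIDIA)
CARD_MEMORY_GB = 16    # placeholder assumption, not a published spec

max_vms = GPUS_PER_CARD * QUEUES_PER_GPU            # hard queue limit: 128 VMs
frame_buffer_mb = CARD_MEMORY_GB * 1024 / max_vms   # why memory binds first

print(f"Queue ceiling: {max_vms} VMs per card")
print(f"Frame buffer at full density: {frame_buffer_mb:.0f} MB per VM")
```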

Then, on the hypervisor side, NVIDIA is working with the hypervisor vendors to write a GPU component that makes a "real" GPU visible to each VM. NVIDIA will also supply the graphics drivers that run in the guest of each VM, much like they provide the graphics drivers for Windows on physical hardware today. (This alone is a pretty cool thing, because today's HDX and PCoIP drivers are written by small teams at Citrix, VMware, and Teradici. And while those teams have done a great job, NVIDIA has thousands of employees working on this.)

Another interesting thing about the GPU access from the VM is that you'll be able to load different types of drivers to do different things with the GPU. For example, a knowledge worker who mostly uses Office and web browsers doesn't need the same GPU power as someone who works in Photoshop all day. With the NVIDIA VGX card, those users will be able to sit side-by-side on the same VDI server, with the graphics drivers in the Photoshop user's VM getting access to a different "GPU personality" than the regular worker's. (I assume they'll be able to integrate with the connection broker to use this information for load balancing, etc.)
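
Nobody outside NVIDIA has shown an actual API for this yet, so treat the following as a purely hypothetical sketch of how a broker might pick a personality per user. The personality names, resource numbers, and the selection hook are all my assumptions:

```python
from dataclasses import dataclass

@dataclass
class GpuPersonality:
    name: str
    vram_mb: int        # frame buffer reserved for this VM (assumed knob)
    queue_share: float  # rough share of one GPU's work queues (assumed knob)

# Hypothetical profiles -- the real "personalities" haven't been detailed.
PERSONALITIES = {
    "knowledge_worker": GpuPersonality("knowledge_worker", 128, 0.03),
    "designer": GpuPersonality("designer", 2048, 0.50),
}

HEAVY_APPS = {"photoshop", "autocad", "maya"}  # assumed trigger list

def personality_for(user_apps: set) -> GpuPersonality:
    """Crude broker policy: heavy 3D/imaging apps get the big profile."""
    if HEAVY_APPS & {app.lower() for app in user_apps}:
        return PERSONALITIES["designer"]
    return PERSONALITIES["knowledge_worker"]

print(personality_for({"Outlook", "Chrome"}).name)    # knowledge_worker
print(personality_for({"Photoshop", "Chrome"}).name)  # designer
```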

In terms of the number of users per card, they're not ready to share specifics, though we know the work queue limit caps it at 128 VMs per card. They're thinking they'll probably get about 100 users per card for regular knowledge-worker VDI (which, again, is limited by the amount of memory on the card). For intense graphics designers, it might be more like 4-8 per card. (But again, it depends on the app, the number of displays, etc. Basically, you have to remember that this card has four GPUs. How many Photoshop users do you want to put on a single GPU? Maybe only one? Two?)
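
Here's that sizing logic as a rough sketch: each workload class is capped by whichever resource runs out first. The ~100-user and one-or-two-designers-per-GPU figures are just the ballpark numbers above, not published specs:

```python
# Rough per-card density estimate: each workload class is capped by
# whichever resource it exhausts first (work queues, frame buffer
# memory, or raw GPU horsepower).
QUEUE_LIMIT = 128   # 4 GPUs x 32 work queues (hard ceiling)
GPUS_PER_CARD = 4

def users_per_card(memory_limited: int, designers_per_gpu: int) -> dict:
    """memory_limited: ~100 light users before frame buffer runs out.
    designers_per_gpu: how many heavy users you'd share one GPU between."""
    return {
        "knowledge_workers": min(QUEUE_LIMIT, memory_limited),
        "designers": GPUS_PER_CARD * designers_per_gpu,
    }

print(users_per_card(memory_limited=100, designers_per_gpu=1))  # conservative: 4
print(users_per_card(memory_limited=100, designers_per_gpu=2))  # optimistic: 8
```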

The bottom line with the VGX platform is that NVIDIA is looking to offload a bunch of work that's currently done on the CPU and in system memory. They mentioned again how fast everything is with the VGX hardware. (In fact, they said they can actually get the H.264-encoded pixels to the NIC faster than a typical GPU gets pixels onto the DVI cable of a traditional desktop.) The Kepler GPU is built on a 28nm process and delivers twice the performance per watt of the previous generation.

Everything NVIDIA told us about this looks cool except for one thing: in addition to buying the hardware, you'll also pay a per-user, per-year license to use it, AND they're creating a f*ing license server to manage it!?!?!

Seriously???

First, it's too bad that this is going to be another cost on top of everything else. But I guess that's what the free market is for. (So I'm upset about that part but I understand.) But for the license server? Oh man!! IT pros have never liked license servers. It just makes us feel like criminals until we "prove" we're innocent. But even worse is that license servers can be single points of failure. Remember the recent problem with Citrix's VDI-in-a-Box? Or when you couldn't reboot ESX servers for two days because VMware had some expired license key?

Seriously, I get why they want to charge per user, per year. (Well, actually that sucks too, but I get it. They have dollar signs for eyes.) But the license server? Ugh!! #fail

Moving on, I guess all that happens now is we wait for the products to come out. NVIDIA said that Citrix will be first to market with this, followed by VMware. As for Microsoft, who knows? (Though they mentioned that Microsoft is excited about this; NVIDIA was initially worried Microsoft would see it as a competitor to RemoteFX.) The Kepler plug-in card will be available this year, beta versions of the hypervisor components should arrive late this year, and everything ships in 2013.

 
 






Comments

Phil Dalbeck wrote re: NVIDIA announces new virtualizable GPU to power high graphics VDI
on Fri, May 18 2012 10:25 AM

God dammit - can't anything be simple anymore?

Things are bad enough with Microsoft's ridiculous VDI licensing on top of non-intuitive and overpriced VDI platform costs (vC Ops Manager for View, for instance?) without us having to worry about another license platform and license server for the flippin' hardware components.

What's next: Cisco charging me per port, per hour to enable special QoS packet cut-through to lower latency for my VDI machines? (Perhaps I shouldn't tempt fate...)

Complete nonsense. Oh well, perhaps AMD will step up and enable virtualisation of GPU stream processor allocation on a server version of their APUs, and thus let us buy a server that can do everything without expensive optional add-on cards, as it seems NVIDIA is aiming at the niche and not making a "control the market" play.

Gabe Knuth wrote re: NVIDIA announces new virtualizable GPU to power high graphics VDI
on Fri, May 18 2012 10:38 AM

Maybe the per user/year and license server model is less about delivering virtual desktops and more about cloud gaming or something? I just have to think it goes beyond our little niche.

Of course, they did talk about it in the context of VDI, but it seems to me they wouldn't put this much effort into something without having a bigger picture in mind.

Phil Dalbeck wrote re: NVIDIA announces new virtualizable GPU to power high graphics VDI
on Fri, May 18 2012 10:51 AM

True that.

Perhaps we'll see a few variants of the VGX technology aimed at different markets.

Honestly, I'd like to see a buy-once, drop-in card that:

A) Gives x number of VMs in a host the capabilities of a low-end dedicated GPU, i.e., basic but extremely beneficial standards-based 2D/3D acceleration (to remove the CPU hit of video playback, etc.). Most VDI users don't need loads of 3D power, but they do want hardware-accelerated video playback and 2D acceleration comparable to that found in even the cheapest integrated desktop chipsets.

B) Reserves a certain pool of stream processors/worker threads to accelerate the compression of the desktop display streams on behalf of the hypervisor (basically like the Teradici APEX cards do for PCoIP streams).

There's no reason both of these functions couldn't be carried out by a single Kepler-style card via a single software API, surely?

edswindelles wrote re: NVIDIA announces new virtualizable GPU to power high graphics VDI
on Fri, May 18 2012 10:53 AM

Another potential issue is the form factor: dual-width PCIe. A lot of VDI implementations might not accommodate this. I know our blades wouldn't.

vgernyc wrote re: NVIDIA announces new virtualizable GPU to power high graphics VDI
on Fri, May 18 2012 11:16 AM

I'm a bit annoyed that Terminal Server and XenApp are kind of the red-headed stepchildren of Microsoft and Citrix. I would like to see full DirectX and OpenGL support. Would love to see HDX 3D Pro on XenApp.

I guess AMD lost its drive? They're not really looking to keep up with Intel anymore. Are they not reacting to this either? NVIDIA has been working on this for a while. Tsk, tsk.

Mark.cube wrote re: NVIDIA announces new virtualizable GPU to power high graphics VDI
on Sat, May 19 2012 4:02 AM

Interesting. Now the GPU makers are ripping into the potential virtualization market.

SillyRabbit wrote re: NVIDIA announces new virtualizable GPU to power high graphics VDI
on Mon, May 21 2012 9:33 AM

@vgernyc - There is hardware acceleration and graphics acceleration, both promising to do the same thing: Improve performance of centrally delivered applications.

Can anyone else predict how this market will evolve?

Will the switch companies create an infrastructure switch that dynamically virtualises a session upon the detection of a new connection? This would truly be a backbone switch!

Will computers no longer require harddisks since everything will be provisioned in memory (e.g. http://goo.gl/pJ2yW)?

What do BM readers "really" want?

Scott Cairns wrote re: NVIDIA announces new virtualizable GPU to power high graphics VDI
on Thu, May 31 2012 9:48 AM

Just catching up on some news and read this. All good points made, but the thing I can't quite get my head around is the "per user, per year" licensing.

Based on the article as written, I can understand the logic behind the density on the card and how different use cases would operate: 100 knowledge workers but only between 4 and 8 "intensive" Photoshop users. Makes sense.

So now to move my (hypothetical) art department (4 guys) onto VDI, I need to use my standard VDI solution, but also buy:

1 x VGX card

4 x VGX licenses

2 x license servers (as you will need resilience)

2 x server maintenance bundles (support + KVA etc)

Nah, they can stay on their physical desktops :-)

The interesting thing is that I would buy 4 licenses for my art department, but for my accounts department (100 users) I would need to buy 100 licenses.

There must be a missing piece to this puzzle, as it wouldn't make sense for a 100-user team to pay 25 times more for the privilege than an art team of 4 users. You are using the same compute power.

I guess we'll just have to wait for the announcements, unless you guys can dig a little deeper.

Richard Seepaul wrote re: NVIDIA announces new virtualizable GPU to power high graphics VDI
on Thu, Jun 7 2012 10:10 AM

This is directed at:

"What's next: Cisco charging me per port, per hour to enable special QoS packet cut-through to lower latency for my VDI machines? (Perhaps I shouldn't tempt fate...)"

Cisco supports SIP with their VoIP product but charges license fees for using SIP, so you have to pay a Cisco license fee for using SIP phones. (SCCP phones have no added license costs but of course cost more. Message: it pays to use our proprietary "lock-in" protocol.)

MS forces you to use their Directory Service, so why is it OK or good for some and not others?

If NVIDIA can sell this Kool-Aid, then why not? The benefits of virtual monopoly.
