Clearing up the confusion around VMware & Nvidia's vGPU & vDGA DaaS announcement

Yesterday at the GPU Tech Conference in San Jose, VMware announced that they're adding vGPU support to vSphere and that they're bringing Nvidia GRID technology to their Horizon DaaS platform. I initially thought this meant vGPUs for DaaS, but that's not right. Reading the press release is like decoding a grammatical logic puzzle, but thanks to the help of three or four VMware and Nvidia folks here at the conference, I've gotten it straightened out.

The press release starts out with, "NVIDIA and VMware Bring Graphics-Rich Virtual Desktops and Applications to Public Clouds: VMware Horizon DaaS Platform With NVIDIA GRID Technology Improves . . ." It goes on to say, "Coming Soon: NVIDIA GRID Virtual GPU on Virtual Machines."

Let's take a look at what this announcement is really about.

To understand it, we first have to dig into all the product marketing terms VMware uses when they talk about adding GPUs to VDI environments. The three big ones are vSGA, vDGA, and vGPU. All three of these involve physical GPU hardware installed into VDI servers.

Virtual Shared Graphics Acceleration (vSGA)

With vSGA, the physical GPUs in the server are virtualized and shared across multiple guest VMs. This option involves installing an Nvidia driver into the hypervisor itself, and each guest VM uses a proprietary VMware SVGA 3D driver that communicates with that Nvidia driver in ESX. The biggest limitation here is that these drivers only work with DirectX up to version 9.0c, and OpenGL up to version 2.1.

This is the oldest of the three technologies, having been introduced in Horizon View 5.2 in March 2013. The vSGA use case is regular office workers who use PowerPoint and Visio, browse the web, and so on.
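
For the curious, here's roughly what turning on vSGA involves on the host side. This is just a sketch; the exact VIB file name varies by driver release, so treat the path as illustrative.

   # On the ESXi host: install Nvidia's driver VIB, then make sure the
   # Xorg service (which brokers the shared GPU) is running
   esxcli software vib install -v /tmp/NVIDIA-VMware_ESXi_Host_Driver.vib
   /etc/init.d/xorg start

   # In each guest VM's .vmx file, 3D support has to be enabled:
   mks.enable3d = "TRUE"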

Virtual Direct Graphics Acceleration (vDGA)

Next is vDGA, where the hypervisor passes the GPUs through to guest VMs directly. One of VMware's guys explained it as "the hypervisor drilling a direct hole in itself between the GPU and the guest." With vDGA there are no special drivers in the hypervisor, and you run the "real" Nvidia driver in the guest VM.

The main advantage of vDGA is that since the GPU is passed through to the guest and the guest uses regular Nvidia drivers, it fully supports everything the Nvidia driver can do natively. That includes all versions of DirectX, all versions of OpenGL, and even CUDA.

The downside to vDGA is that it's expensive, since you need one GPU per user. (Even Nvidia's K1 and K2 cards only have four and two GPUs respectively, so you'll run out of physical room and PCI bus speed after just a few cards.)

VMware added support for vDGA in Horizon View 5.3, which came out last October. The target market for vDGA is high-end users with intensive graphical applications. (So this is where you have oil & gas, scientific simulations, CAD/CAM, etc.)
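
To make the "drilling a hole" idea concrete, here's roughly what shows up in a VM's .vmx file once a GPU has been marked for DirectPath I/O pass-through on the host. The device ID below is just an example; the real values come from the host's PCI inventory, and note that vDGA also requires the VM's memory to be fully reserved.

   pciPassthru0.present = "TRUE"
   pciPassthru0.vendorId = "0x10de"   # Nvidia's PCI vendor ID
   pciPassthru0.deviceId = "0x11bf"   # example device ID for a GRID K2 GPU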

Virtual GPU (vGPU)

The third option is the vGPU, which is what VMware announced yesterday. (XenServer has had this for some time.) vGPU is essentially vDGA but with multiple users per GPU instead of one-to-one. Like vDGA, with vGPU you install the real Nvidia driver in your guest VMs, all versions of DirectX and OpenGL are supported, and the hypervisor passes the graphics commands directly to the GPU without any translation.

vGPU gives you all that plus the ability to share a GPU across up to 8 VMs. (Like all virtual resources, the exact number of users you can get per GPU will depend on things like application requirements, screen resolution, number of displays, frame rate, etc.) The idea with vGPU is that you get better performance than the vSGA option with a "divided by 8" cost factor when compared to the vDGA option. (For the cost of the GPU cards anyway.)

The use case for vGPU will be the higher-end knowledge workers who might need "real" GPU access, but who don't need full-on multi-thousand dollar graphics workstations.
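
Since VMware's implementation won't ship for a while, the closest concrete example today is XenServer's. There you pick a GRID vGPU profile (K200, K240Q, K260Q, and so on, which determines how many VMs share each physical GPU) and attach it to a VM. A rough sketch, with the UUIDs elided:

   # List the available GRID vGPU profiles and the host's GPU groups
   xe vgpu-type-list
   xe gpu-group-list
   # Attach a profile to a VM; with the lightest K2 profile,
   # eight VMs can share one physical GPU
   xe vgpu-create vm-uuid=<vm-uuid> gpu-group-uuid=<group-uuid> vgpu-type-uuid=<type-uuid>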

Soft 3D

Even though this article is about VDI servers with GPUs installed, I'll mention for completeness that it's still totally possible to run a VDI server without physical GPU boards installed. (That's what the vast majority of VDI environments use today.) VMware calls this Soft 3D (for "Software 3D"), and it leverages a regular WDDM graphics driver that can render DirectX and OpenGL on the CPU. Think of this as the baseline graphics option for VDI.
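
In .vmx terms the renderer is a per-VM choice, which is also how View exposes it (Automatic / Software / Hardware). A minimal sketch:

   mks.enable3d = "TRUE"
   mks.use3dRenderer = "software"   # CPU-rendered Soft 3D; "hardware" would force a physical GPU
   svga.vramSize = "134217728"      # 128 MB of video memory, for example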

So what did VMware actually announce? Two things.

Yesterday's VMware / Nvidia press release was confusing because it was actually two announcements in one, and those two announcements are not really related to each other (apart from both being part of the larger Nvidia GRID brand).

The first part of the announcement is that VMware's Horizon DaaS (a.k.a. Desktone) platform now supports both the vSGA and vDGA GPU virtualization options. (These two options have been available in Horizon View for a while, and now they're also options for Horizon DaaS.)

Of course just because these features are in the platform doesn't mean that all the DaaS providers will offer them immediately. It will take time for them to buy the cards, figure out if they'll kill their power bills, adjust their pricing, etc. The "launch partner" for this DaaS offering will be Navisite, which is interesting since that means they'll have it before VMware's own vCHS-based DaaS offering.

The other (completely unrelated) part of the announcement is that VMware will be adding vGPU support to ESX. They're saying it should be available in Tech Preview in late 2014, with general availability in 2015.

It's important to point out that since vGPU support won't arrive until 2015, no VMware VDI product will offer it until then either. So for the next year or so, you cannot get Horizon View or Horizon DaaS—whether through VMware or from a partner—with the vGPU option.

VMware did say that customers can start with vDGA today and then seamlessly move to vGPU when it's available, which makes sense since the hardware, the in-guest drivers, and the application support are all the same. But starting today would be expensive, since those GRID cards are several thousand dollars each and the currently supported vDGA option dedicates one GPU per user.
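
To put rough per-seat numbers on that (the card price here is purely illustrative):

   Assume a GRID K2 (2 GPUs per card) at a hypothetical $2,500:
     vDGA today:  $2,500 / 2 GPUs                 = $1,250 per user
     vGPU later:  $2,500 / (2 GPUs x 8 VMs each)  =  ~$156 per user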

VMware also didn't say when vGPU support would be available for View and DaaS. We can hope it comes soon after it's added to ESX, but again, that's 2015 at the earliest.

Join the conversation

14 comments

Thanks for decoding that!


So, basically the vGPU (aka. NVIDIA VGX) support for View will come more than 2 years after Microsoft RemoteFX supported VGX .... Ouch!


What took them so long??? It must be really hard to develop 3rd party drivers for ESXi ..... :(


Or, more likely, VMware QA guys are just being extremely relentless in testing the driver ... :)


I'd be interested to hear from anybody who was involved in that GA Tech - Server 2012 VDI though .. ;)


www.microsoft.com/.../710000001661


First of all - Thanks for translating from marketing to engineering language. I'm left wondering where hosted shared desktops fit into this. Will it be possible to add vGPU to XenApp servers as well and have all users take advantage of and share the GPU acceleration? Or is it only useful for Horizon/XenDesktop?


"VMware did say that customers can start with vDGA today and then seamlessly move to vGPU "


Not sure it's applicable in the CAD world, because the K1 is better suited to the vDGA approach and the K2 is better for the vGPU approach.


But why announce now for something coming next year? Buzz and marketing are interesting....


All very interesting, but could somebody tell me how the rendered content ends up being streamed to the client over the internet? Is Nvidia hardware responsible for doing this? Are they using H.264? In the PCoIP solution it was clear the Teradici driver was responsible for this, but how does this work with Nvidia?


The PCoIP driver is still responsible for the transmission. It's essentially looking at the frame buffer after the GPU processes everything. (Though the exact way they interact varies depending on the type of GPU virtualization.) The key though is that PCoIP handles the streaming. It is not H.264.


@Dock2Office. The PCoIP protocol is used by both vSGA and vDGA today and will work with vGPU when it is available avoiding all of the issues of using H.264 as a remote display protocol. This also means that the PCoIP Zero Clients and Hardware Accelerator Card can be used to further enhance the experience.


@brian a couple of small corrections. First, the difference is "divided by 4" not 8, since a K2 card is "divided by 2" in vDGA mode. Second, at 8 users per K2, the performance difference delivered to the end user is not as different from vSGA as you would think, provided the application is running in DX9 or OGL2. The big difference is when the app has been optimized for later versions of DX or OGL. The latter is the reason we are excited to see vGPU support in ESX.


That is going to give me one more thing to think about when conducting a VDI assessment.  Right now no assessment tool gives you a good mapping of GPU performance from physical to virtual.  


My eVDI (e for engineering) engagements have focused on vDGA lately. Of course that keeps things simple, but it also greatly reduces user density (max 8 users per HP WS460 blade). I can see vGPU potentially getting user density up on this blade or a DL380 with two or three GRID cards. But I'll need to get some solid performance numbers to ensure the UX doesn't suffer...


@Brian, @Randy thanks for your fast response. So is framebuffer compression (PCoIP) still done in software or hardware? No offloading to Nvidia hardware?


Why not use H.264 streaming, which is built into Nvidia hardware?


@Randy, with Amazon AWS in the air it's interesting to see what they have done. As mentioned they also use PCoIP, but it seems they run it on top of RDP.


What part is Nvidia playing in this setup, or is there no Nvidia involved?


AWS WorkSpaces does not use GPUs or anything from Nvidia at this time. Users can connect to their WorkSpace via either PCoIP or RDP. It's an either/or thing and the two don't work together. (Just like with VMware View where you can connect either via PCoIP or RDP, or XenDesktop where you can use HDX or RDP.)


Hopefully someday AWS will add GPU support to WorkSpaces, but they have not announced anything around that yet.


AWS has another offering called AppStream which is where Windows app developers can modify their app so it runs on AWS. In that case the AWS VM has a GPU and the UI is sent down via H.264, but that's a completely separate offering from WorkSpaces.


AWS also has EC2 instances with Nvidia GPUs, but again that's different from both WorkSpaces and AppStream.


@Dock2Office. Yes, as Brian indicated, no GPUs were harmed in this release of WorkSpaces. Since typical enterprise applications make little use of 3D, a soft GPU on the CPU is more than adequate. Hopefully, this is just the first of many WorkSpaces Bundles that AWS offers. Certainly seems popular enough to warrant more options.


On the H.264 encoder question, the short answer is that H.264 was designed for streaming video content consisting of primarily natural images (e.g. film) which makes it sub-optimal for delivering an interactive desktop consisting of primarily artificially generated images (e.g. text and graphics). Because of this, when H.264 is used to remote a typical desktop or application, it consumes significantly more bandwidth while delivering lower quality images than codecs optimized for text and graphics. Furthermore, the way H.264 is decoded on most client devices (especially mobile devices), significant latency is introduced which causes the user to overshoot and get frustrated with the application. In contrast, the PCoIP protocol was designed specifically for remoting desktops and applications and is able to avoid these issues. The PCoIP Hardware Accelerator card encodes the pixels using customized silicon and provides the image quality and frame rates expected from 3D applications making it the perfect complement to any GPU for 3D applications.


@Brian. Saying that WorkSpaces “supports” RDP is a bit misleading. Of course, unless the administrator disables RDP, an RDP client can be used to access any Windows OS if you are on the same network and know the IP address of the VM. None of this is automatically brokered by Amazon nor supported by the WorkSpace clients, so it is not the intended way to access your WorkSpace. Besides, why would you go to all that trouble to avoid using the better protocol ;-)


Hello,


quite an old post now, but I haven't understood whether the use of SVGA and vDGA (while waiting for vGPU) is possible from a plain vSphere Hypervisor (Essentials or Enterprise), or whether it's mandatory to have Horizon View....


Gianluca


BTW: at the beginning there is a "CPU" that has to be changed to "GPU", even though it's clear that you mean GPU:


"The three big ones are vSGA, vDGA, and vCPU...."


in my post SVGA --> vSGA ;-)


Fixed the typo, thanks for pointing that out! :)

