Yesterday at the GPU Tech Conference in San Jose, VMware announced that they're adding vGPU support to vSphere and that they're bringing Nvidia GRID technology to their Horizon DaaS platform. I thought this meant vGPUs for DaaS, but that's not right. Reading the press release is like decoding a grammatical logic puzzle, but thanks to the help of three or four VMware and Nvidia folks here at the conference, I've gotten it straightened out.
The press release starts out with, "NVIDIA and VMware Bring Graphics-Rich Virtual Desktops and Applications to Public Clouds: VMware Horizon DaaS Platform With NVIDIA GRID Technology Improves . . ." It goes on to say, "Coming Soon: NVIDIA GRID Virtual GPU on Virtual Machines."
Let's take a look at what this announcement is really about.
To understand it, we first have to dig into all the product marketing terms VMware uses when they talk about adding GPUs to VDI environments. The three big ones are vSGA, vDGA, and vGPU. All three of these involve physical GPU hardware installed into VDI servers.
Virtual Shared Graphics Acceleration (vSGA)
With vSGA, the physical GPUs in the server are virtualized and shared across multiple guest VMs. This option involves installing an Nvidia driver into the hypervisor itself, and each guest VM uses a proprietary VMware SVGA 3D driver that communicates with that Nvidia driver in ESX. The biggest limitation here is that these drivers only work with DirectX up to version 9.0c, and OpenGL up to version 2.1.
This is the oldest of the three technologies, having been introduced in Horizon View 5.2 in March 2013. The vSGA use case could be thought of as regular office workers who use PowerPoint and Visio and stuff, browse the web, etc.
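As a rough sketch of what vSGA looks like from the VM side (the exact keys and values can vary by vSphere release, so treat this as illustrative rather than canonical, and note that real .vmx files don't take inline comments, so strip them if you copy this):

```ini
; Guest VM's .vmx file: turn on 3D acceleration and reserve video memory.
; The VM's renderer must be set to "Hardware" (or "Automatic") in the
; vSphere client for the Nvidia GPU to actually be used.
mks.enable3d = "TRUE"
svga.vramSize = "134217728"   ; 128 MB of video memory, in bytes
```

The key point is that the guest never sees the Nvidia hardware directly; it talks to VMware's SVGA 3D driver, which is why the API support tops out at DirectX 9.0c and OpenGL 2.1.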
Virtual Direct Graphics Acceleration (vDGA)
Next is vDGA, where the hypervisor passes the GPUs to guest VMs directly. One of VMware's guys explained it as "the hypervisor drilling a direct hole in itself between the GPU and the guest." With vDGA there are no special drivers in the hypervisor, and you run the "real" Nvidia driver in the guest VM.
The main advantage to vDGA is that since the GPU is passed through to the guest and the guest uses regular Nvidia drivers, it fully supports everything the Nvidia driver can do natively. So that includes all versions of DirectX, all versions of OpenGL, and even CUDA.
The downside to vDGA is that it's expensive, since you need one GPU per user. (Even Nvidia's K1 and K2 cards only have four and two GPUs each, so you'll run out of physical room and PCI bus speed after just a few cards.)
VMware added support for vDGA in Horizon View 5.3 which came out last October. The target market for vDGA is high-end users with intensive graphical applications. (So this is where you have oil & gas, scientific simulations, CAD/CAM, etc.)
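For reference, here's roughly what that "direct hole" looks like in a guest's .vmx file once the GPU has been marked for passthrough (DirectPath I/O) on the host. The PCI address and device ID below are placeholders for whatever your particular card reports, and as above, strip the comments before using this in a real .vmx file:

```ini
; Guest VM's .vmx file: pass one physical GPU straight through to the VM.
pciPassthru0.present = "TRUE"
pciPassthru0.vendorId = "0x10de"   ; 0x10de = Nvidia
pciPassthru0.deviceId = "0x0ff2"   ; placeholder: your card's device ID
pciPassthru0.id = "04:00.0"        ; placeholder: the GPU's PCI address
pciHole.start = "2048"             ; commonly needed for VMs with >2 GB RAM
```

Because the guest owns the device outright, it runs the stock Nvidia driver, which is why vDGA gets full DirectX, OpenGL, and CUDA support.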
Virtual GPU (vGPU)
The third option is the vGPU, which is what VMware announced yesterday. (XenServer has had this for some time.) vGPU is essentially vDGA but with multiple users per GPU instead of one-to-one. Like vDGA, with vGPU you install the real Nvidia driver in your guest VMs, all versions of DirectX and OpenGL are supported, and the hypervisor passes the graphics commands directly to the GPU without any translation.
vGPU gives you all that plus the ability to share a GPU across up to 8 VMs. (Like all virtual resources, the exact number of users you can get per GPU will depend on things like application requirements, screen resolution, number of displays, frame rate, etc.) The idea with vGPU is that you get better performance than the vSGA option with a "divided by 8" cost factor when compared to the vDGA option. (For the cost of the GPU cards anyway.)
The use case for vGPU will be the higher-end knowledge workers who might need "real" GPU access, but who don't need full-on multi-thousand dollar graphics workstations.
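To make that "divided by 8" point concrete, here's a quick back-of-the-envelope sketch (in Python, assuming the best-case 8 VMs per GPU; real density depends on resolution, frame rate, app requirements, etc.) comparing how many users a single GRID card can serve under vDGA versus vGPU:

```python
def users_per_card(gpus_on_card, vms_per_gpu=1):
    """Maximum VDI users one GRID card can serve."""
    return gpus_on_card * vms_per_gpu

# vDGA: one GPU per user, so the card's GPU count is the ceiling.
k1_vdga = users_per_card(4)   # Nvidia K1 has four GPUs -> 4 users per card
k2_vdga = users_per_card(2)   # Nvidia K2 has two GPUs  -> 2 users per card

# vGPU: up to 8 VMs can share each physical GPU.
k1_vgpu = users_per_card(4, vms_per_gpu=8)   # up to 32 users per K1
k2_vgpu = users_per_card(2, vms_per_gpu=8)   # up to 16 users per K2

print(k1_vdga, k2_vdga, k1_vgpu, k2_vgpu)  # 4 2 32 16
```

Same cards, same drivers, up to eight times the users per card, which is the whole economic argument for vGPU.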
Even though this article is about VDI servers with GPUs installed, I'll mention for completeness that it's still totally possible to run a VDI server without physical GPU boards installed. (That's what the vast majority of VDI environments use today.) VMware calls this Soft 3D (for "Software 3D") and it leverages a regular VDDM graphics driver that can render DirectX and OpenGL on the CPU. Think of this as the baseline graphics option for VDI.
So what did VMware actually announce? Two things.
Yesterday's VMware / Nvidia press release was confusing because it was actually two announcements in one, and those two announcements are not really related to each other (apart from both being part of the larger Nvidia GRID brand).
The first part of the announcement is that VMware's Horizon DaaS (a.k.a. Desktone) platform now supports both the vSGA and vDGA GPU virtualization options. (So these two options have been available in Horizon View for a while, and now they're also options for Horizon DaaS.)
Of course just because these features are in the platform doesn't mean that all the DaaS providers will offer them immediately. It will take time for them to buy the cards, figure out if they'll kill their power bills, adjust their pricing, etc. The "launch partner" for this DaaS offering will be Navisite, which is interesting since that means they'll have it before VMware's own vCHS-based DaaS offering.
The other (completely unrelated) part of the announcement is that VMware will be adding vGPU support to ESX. They're saying it should be available in Tech Preview in late 2014 with general availability in 2015.
It's important to point out that since vGPU isn't coming until 2015, no VMware VDI product will offer vGPU support until then either. So for the next year or so, you cannot get Horizon View or Horizon DaaS—whether through VMware or from a partner—with the vGPU option.
VMware did say that customers can start with vDGA today and then seamlessly move to vGPU when it's available, which makes sense since the hardware, the in-guest drivers, and the application support are the same. But that would be an expensive path, since those GRID cards are several thousand dollars each and the vDGA option that's supported today requires dedicating one GPU per user.
VMware also didn't say when vGPU support will be available for View and DaaS. We can hope it will come soon after it's added to ESX, but again, that's 2015 at the earliest.