NVIDIA hopes that in the future, remote VDI won't mean a worse user experience than local computing

Written on Nov 18, 2011


by Brian Madden

Last week, Jack, Justin, and I drove down to Santa Clara to visit chipmaker NVIDIA. This was the first time that I really talked to anyone at the company, having just realized that they have a play in the virtual desktop space after their announcement with VMware last month for future GPU support in VMware View. (I didn't know at the time that NVIDIA was involved with Calista/Microsoft in the creation of RemoteFX and with Citrix for HDX 3D Pro.) So our visit was to get a general understanding of the company and their take on the world of desktop virtualization.

The NVIDIA folks that we met with--including Will Wade, the NVIDIA dude from the Facebook video from Siggraph--explained that they've actually been involved in the remote desktop space for five or six years, although it's only in the past year that they feel like they can see their ultimate goal starting to be met.

NVIDIA's remote desktop vision

Most of our conversation was about NVIDIA's goal for remote desktop computing (i.e. accessing remote desktops or VDI via RDP, HDX, PCoIP, etc.) At the most basic level, NVIDIA has two major beliefs here:

  • They believe that in today's world, choosing to remotely access a datacenter-based desktop (VDI, etc.) means that the user must accept a worse user experience than a desktop running locally on a client device. They believe that will change.
  • Today's remote desktops that require "real" GPU access are only available on a one-to-one basis. (Either one-user-per-blade or one-user-per-GPU.) They also believe that that will change.

Both of these are being addressed via what NVIDIA is calling Project Monterey, also known as their Quadro Virtual Graphics Platform, or "Quadro from the Cloud." (Quadro is just the professional/workstation version of what most of us know as "GeForce.") I covered the details of Project Monterey in the article I wrote last month, so take a look there if you're not familiar with it.

As I mentioned before, NVIDIA has been working in the remote graphics space for five or six years now, so let's take a step back and look at how they got to Project Monterey.

Some history

Back in 2003-2004, NVIDIA was working on a program called "Sabre," a partnership with HP to help them try to figure out how to stick GPUs in servers. The end goal back then was around HP's RGS protocol and workstation blades.

Then in 2005-2006, NVIDIA worked with Citrix on Project Pictor. Pictor is something that many of you are probably familiar with. It was a special version of Citrix Presentation Server (called "Citrix Design Studio") that was enabled to leverage GPUs to run 3D applications. This was before the days of VDI, so it was only focused on the Terminal Server multi-session solution. Citrix made a big deal out of this at iForum talking about how it was based on some custom work Citrix did for Boeing to let them use CATIA via ICA.

I didn't realize that NVIDIA was involved at the time, but the NVIDIA folks I met with last week explained that a lot of work went into Pictor, both from Citrix and NVIDIA. They felt that some of the Pictor capabilities were "good enough" for some situations, but in general it wasn't good enough to tip the scale in a major way.

Then once Citrix got into VDI, they took what they learned in Pictor and made it work on XenDesktop via Project Apollo (2008). Back in those days I think we were mostly interested in using it to get Vista Aero glass working from VDI, but doing so did require a GPU. Again, there were a lot of alphas and experimentation, and again NVIDIA was involved, but Apollo never really took off.

After learning a lot of lessons in Pictor and Apollo, Citrix went back to the drawing board and ended up with Project Prism which ultimately became HDX 3D (2009) and HDX 3D Pro (2010).

The current version of HDX 3D Pro that's available with the latest XenDesktop running on XenServer is very similar to what VMware intends to do with NVIDIA. It provides for one-to-one GPU pass-through to VDI virtual machines, allowing the VM to have "true" GPU access with real drivers and the ability to do whatever is needed (DirectX, OpenGL, GPU-assisted movie rendering, etc.). So all the heavy work is done on the remote host with the GPU, and the full images are sent to the client via HDX or PCoIP where they're accessed like any remote app.
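If you want to sanity-check what that looks like from inside the VM, here's a minimal sketch (my own illustration, not anything NVIDIA or Citrix provides): a small Python script, run inside a Linux VDI guest that has the NVIDIA driver installed, that asks nvidia-smi to list the GPUs the guest can actually see. With one-to-one pass-through the guest reports the physical card; without it, nothing shows up.

    # Hypothetical guest-side check: with one-to-one GPU pass-through the VM
    # should see the physical NVIDIA card; without it, nvidia-smi finds nothing.
    import subprocess

    def visible_gpus():
        try:
            # "nvidia-smi -L" prints one line per GPU the driver can see,
            # e.g. "GPU 0: Quadro 6000 (UUID: GPU-...)"
            output = subprocess.check_output(["nvidia-smi", "-L"], text=True)
        except (OSError, subprocess.CalledProcessError):
            return []  # no NVIDIA driver or no GPU exposed to this VM
        return [line for line in output.splitlines() if line.startswith("GPU")]

    if __name__ == "__main__":
        gpus = visible_gpus()
        if gpus:
            print("This VM has direct GPU access:")
            for gpu in gpus:
                print("  " + gpu)
        else:
            print("No physical NVIDIA GPU visible to this VM.")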

Meanwhile NVIDIA was also having conversations with the folks at Calista (later bought by Microsoft) as they were developing what ultimately became RemoteFX. Again I'm not sure how much of that I knew at the time, but after talking to the NVIDIA folks last week, it's clear they were very involved.

And NVIDIA's work in the remote graphics area isn't just limited to business apps and VDI. Two of the largest cloud gaming companies, OnLive and Gaikai, use NVIDIA technology to deliver their games remotely. (Think of these things like VDI for video games.) OnLive is using NVIDIA GPUs to handle the compression for the image remoting, and Gaikai is using the more general NVIDIA Project Monterey pixel-level output access I wrote about last month.

So throughout all of this--Boeing, Citrix, Calista, Microsoft, VMware, OnLive, Gaikai--NVIDIA was learning more and more about what it takes to deliver graphics over the network. All of that learning led to Project Monterey and shaped what they believe they need to do in general as graphics move to the cloud.

What's next for NVIDIA for remote graphics?

If you look at the history of NVIDIA in this space over the past few years and then look at the two core beliefs I outlined at the beginning of this article, it's actually pretty clear where NVIDIA is going.

First, they want to provide the hardware, software interfaces, and consulting help to enable companies like Citrix and VMware to deliver high quality remote desktops, even when intense graphics or general GPUs are needed.

The second piece that we have to assume is coming is actual GPU virtualization (or "vGPU"). That's not something we have right now. The Citrix HDX 3D Pro solution is a one-to-one GPU pass-through, and that's what this first VMware View solution will be. As for RemoteFX connecting to Windows 7 VMs on Hyper-V, that's not true GPU virtualization either. (Each VM sees a WDDM driver that can use the host's shared GPU to do things that require a GPU, like DirectX, but the VM doesn't see a real GPU.) As Jack wrote about yesterday in his article about blade workstations, needing access to a real GPU is one of the main reasons people use blades today. But if we could pop some GPUs into a server and virtualize them across many VDI sessions, that'd be pretty powerful.
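To make that distinction a bit more concrete, here's a rough, hypothetical sketch (mine, not NVIDIA's or Microsoft's): ask a Windows 7 guest what display adapter it thinks it has. With GPU pass-through you'd expect a real NVIDIA adapter name; with RemoteFX you'd expect Microsoft's synthetic WDDM adapter; and with true vGPU, whenever it arrives, the guest should again see something that looks like a real NVIDIA GPU, just a shared slice of one. The adapter name matching below is an illustrative assumption, not an official list of strings.

    # Rough guest-side check on a Windows VM: which display adapter does the OS
    # report? The name matching here is an assumption for illustration only.
    import subprocess

    def adapter_names():
        # WMI query for the display adapter name(s) the guest OS sees
        output = subprocess.check_output(
            ["wmic", "path", "win32_VideoController", "get", "name"], text=True
        )
        # First line is the "Name" column header; the rest are adapter names
        return [line.strip() for line in output.splitlines()[1:] if line.strip()]

    for name in adapter_names():
        lowered = name.lower()
        if "nvidia" in lowered:
            # Pass-through today (or a vGPU slice in the future): the VM sees
            # a real GPU with real drivers.
            print(name + ": direct NVIDIA GPU access")
        elif "remotefx" in lowered:
            # RemoteFX-style synthetic WDDM adapter: GPU-assisted, but the VM
            # does not see a real GPU.
            print(name + ": virtualized adapter, no true GPU in the guest")
        else:
            print(name + ": software or other adapter")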

NVIDIA is quick to point out that neither they nor their partners like VMware, Microsoft, and Citrix have actually announced anything around real GPU virtualization, but pretty much everyone knows it's coming. NVIDIA is also quick to point out that there are lots of ways to get GPUs into servers today (and there will be more ways in the future), and let's be honest: if you're building that hardware into your server, then it's going to be virtualized and shared across VMs.

What if we had real performance from remote desktops?

All of this leads to some interesting thinking. If NVIDIA plus Citrix/VMware/Microsoft actually get this right, and if we see enough GPUs in servers to make it all happen, what does that mean for us and VDI? I mean, I've personally always operated on the philosophy that a remote desktop is a worse user experience than a locally-executed desktop. So for me, settling for a remote desktop today means accepting some kind of major tradeoff, like trading the good user experience for the flexibility of having access to my same desktop from anywhere.

But imagine for a minute (regardless of whether you think it's possible) that we could have the same user experience with a remote desktop that we get on a local desktop. What could that mean for the future of Windows apps? Does this make it even easier for all of our Windows apps to become middleware?

NVIDIA also showed us a demo of very high quality 3D graphics running in a remote session but accessed via an Android thin client. I wondered whether this would be a non-starter in the grand scheme of things once every application was ported to a tablet OS, but NVIDIA pointed out that a lot of the traditional server-based computing advantages would still apply here, including keeping huge datasets in the datacenter instead of trying to load them onto an iPad and not having to write new versions of the software for every platform.

To be honest, if that kind of cloud hosting becomes a real possibility, it's going to be very attractive to a lot of people.

Let's go back to the triangle

Remember that article I wrote a few years ago about the remote protocol balance triangle? You have (1) good user experience, (2) low CPU consumption, and (3) low bandwidth consumption. Pick any two:

Desktop protocol triangle

How does an NVIDIA future fit in here? Obviously we have the good experience, so what do we have to trade for it? Low bandwidth or low CPU? I guess in the case of NVIDIA, the GPU is your "high CPU," so maybe that's what you're giving up? You have a good experience thanks to the GPU, so you can still keep your low bandwidth?

The bottom line, according to NVIDIA, is that VDI has reset user expectations. (Meaning that VDI has reset them "down" in that now users expect a worse experience.) NVIDIA wants to change that.

 
 






Comments

Issy Ben-Shaul wrote re: NVIDIA hopes that in the future, VDI won't mean a shittier user experience
on Fri, Nov 18 2011 12:54 PM

Two comments:

1. Quite often user experience issues with remote desktops are related to latency, not bandwidth. I was at a Starbucks yesterday and measured 150 ms of round-trip latency over the wi-fi to the server; RDP was very shitty...

2. You don't necessarily need to trade off universal access against good user experience. What if you could keep a virtual synchronized clone of your desktop at the data center for universal access from any device, yet continue to work locally on your desktop/laptop when you have it?

Derek Thorslund wrote re: NVIDIA hopes that in the future, remote VDI won't mean a worse user experience than local computing
on Wed, Nov 23 2011 9:50 AM

I recently met with a Citrix XenDesktop HDX 3D Pro customer in Denmark and was surprised to hear them say that their user experience is actually better than it was before they moved to desktop virtualization. How could that be? Well, they work with large 3D models using apps like Bentley, Revit, AutoCAD and Navisworks. It used to take a long time just to read in these large models from the database. Now that happens over Gigabit Ethernet in the data center. No doubt using HP blade workstations (plenty of processing power) and the latest NVIDIA Quadro graphics cards also helps achieve a great user experience. This is with laptops as endpoints, so the users have a new freedom of mobility that wasn't possible before, which is valuable in their business. "And it even works on 3G!"

Tim Mangan wrote re: NVIDIA hopes that in the future, remote VDI won't mean a worse user experience than local computing
on Sun, Nov 27 2011 10:30 AM

I recall meeting with NVIDIA in 2006 and talking to them about the need to leverage the server GPU to help terminal services.  I think the project was Radeon back then and while they were clearly only in an early stage of understanding the remoting problems, they were starting to work the space back then.

I think that the GPU is becoming a really hot space right now.  In addition to the focus of this article, there is some neat work going on to use the power of the GPU for certain types of computing.

Basically, the nature of the GPU is that it can do math, especially floating point, much faster than the CPU.  So certain apps that iterate over data repeatedly can run much faster.  This is leading to interest in GPU software frameworks to manage this in a standard way.  More of a fringe thing than general purpose, apps such as weather prediction are the appropriate candidates here.

Ultimately, the idea of the CPU being the most important part of the computer may just go away as specialized components are developed to perform specialized actions.  The great thing about custom silicon instead of general purpose (the CPU) is that it can be very fast and power efficient at what it needs to do.  Ultimately this leads to different GPU designs in the server versus the mobile device.  NVIDIA seems to be very active in this space.

Adam Oliver wrote re: NVIDIA hopes that in the future, remote VDI won't mean a worse user experience than local computing
on Tue, Dec 6 2011 1:47 PM

It works very well over 3G.  I watched a partner demonstration using HDX 3D Pro on XD 5.5 over a 3G BlackBerry tethered connection.  We were in San Antonio at a conference, their VPN concentrators were in Arizona and the data center was on the east coast.  It was great to see something I helped with perform so well.
