GPUs in VDI need to stop being "optional"

The title of today’s article is a quote from Dane Young, our guest on yesterday’s podcast. We were talking about one of Dane’s BriForum 2015 London presentations where he talked about the state of GPUs in VDI, and this is how he summed it up:

“I’m trying to convince people that GPUs should not be optional for VDI.”

I never thought about it quite like this before, but I wholeheartedly agree. I mean, look at regular desktops and laptops. How many of them can you buy without a GPU? And if you price out a business desktop and then have to shave off some costs, do you ever do that by removing the GPU? No! You dial back the CPU, cut down on memory, or maybe skip the SSD.

So by that logic a GPU should be 100% required for VDI, and if you don’t like the “added” cost of it, then you can offset it by putting 10% more users per server. Sure, that means each user will get 10% less CPU and RAM, but again, that’s a tradeoff we make every day with physical desktops, so why wouldn’t we do that with VDI too? (Plus, having a GPU might actually relieve some load on your CPU, meaning you can fit more users on a box.)

While we’re on the topic of GPUs and costs, I’m still hearing a lot of pushback from people who say that adding GPUs is expensive. Those Nvidia K1 cards are $3k for only 32 users! (Actually, here’s one on Amazon for $1,850. Good for them!) So $1,850 for 32 users is about $58 per user. How much are you spending on VDI hardware and storage already? Probably $500 a user? So adding a real GPU is going to add roughly 10% to your total hardware build cost.
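
If you want to put rough numbers on that yourself, here’s a quick back-of-the-envelope sketch. The $1,850 card price and 32-user figure come from above; the $500-per-user baseline is just the guess from the previous sentence, so swap in your own numbers:

    # Back-of-the-envelope GPU cost math using the figures from this article.
    card_price = 1850.0          # street price of one Nvidia GRID K1 (USD)
    users_per_card = 32          # users sharing that card
    base_cost_per_user = 500.0   # assumed existing VDI hardware + storage cost per user (USD)

    gpu_cost_per_user = card_price / users_per_card
    increase = gpu_cost_per_user / base_cost_per_user

    print(f"GPU cost per user: ${gpu_cost_per_user:.2f}")   # ~$57.81
    print(f"Added to the hardware build: {increase:.1%}")   # ~11.6%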

Seriously, how is the GPU not just a standard thing in every VDI environment? These are desktops, with desktop applications! The GPU isn’t just about CAD and 3D rendering anymore; it’s involved in everything Windows does. When it comes to VDI, the GPU is not optional. Get with it, man. It’s 2015.

Join the conversation

15 comments


I totally agree.


The same goes for RDSH scenarios. A good UX depends on GPUs more and more these days.


Back in the day, battery-backed write cache modules (on HP servers) became a default hardware config for Terminal Servers; the GPU is next.


A physical RDSH host with one NVIDIA GRID K1 really enriches the UX for, say, four virtual RDSH VMs with GPU pass-through.



It would be nice if XenApp could use shared GPUs on Hyper-V. At present this is only an option on VMware or XenServer, but it would be great for my environment (and for other hosting providers that are using Hyper-V).


At present we have to retain VMware just for this purpose.



But what about bandwidth? Going to a GPU is going to increase the graphics data sent to the endpoint.


Also, you're going to have to up the CPU, as the session requires more compute before it sends graphics down the stream.


Some sessions just don't need a GPU. I still see it assigned only to those users who require it.



I absolutely agree. I've been doing a lot of vGPU and vDGA deployments in PoCs for engineering and design use cases, but the UX is so incredibly important in VDI that it should be a requirement for lower-performance use cases as well.


One potential issue is that existing assessment tools are still a bit lacking when it comes to measuring GPU performance on desktops targeted for virtualization. Lakeside's SysTrack gives cache utilization and the amount of time a desktop app uses DirectX or OpenGL, but we still don't get information about frame rates, GPU cycles, etc., so it is a little more challenging to do sizing.
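
As a rough illustration of the kind of raw per-GPU counters you can collect yourself outside a full assessment suite (this assumes an NVIDIA driver with the nvidia-smi tool on the host, and it's no substitute for per-application assessment data), a simple polling sketch looks like this:

    # Sketch: periodically sample GPU utilization and frame buffer use via nvidia-smi.
    # Assumes nvidia-smi is on the PATH (any recent NVIDIA driver ships it).
    import subprocess
    import time

    QUERY = "utilization.gpu,memory.used,memory.total"

    def sample():
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=" + QUERY, "--format=csv,noheader,nounits"],
            text=True)
        for i, line in enumerate(out.strip().splitlines()):
            util, used, total = [int(float(v)) for v in line.split(",")]
            print(f"GPU{i}: {util}% busy, {used}/{total} MiB frame buffer in use")

    for _ in range(10):      # ten samples, five seconds apart
        sample()
        time.sleep(5)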


There are also limitations on systems using Nvidia cards. An HP DL380 can only take two K1 cards, and thus only 64 users, while the 380 can usually handle quite a few more users than that. However, Nvidia's next-gen cards should greatly increase those numbers. AMD is getting into the game soon as well, but I think they have a lot of catching up to do.


Because I'm an HP employee, I would be remiss if I didn't mention that the Moonshot desktop solution has a GPU for every end user. A single 4U Moonshot can give you 180 persistent or non-persistent desktops using Citrix XenDesktop.



I've been saying the same for a long time. The benefits are substantial on all but the most undemanding workloads. Packaging is still a challenge, though, especially when it comes to appliances: Rick's HP DL380 might only be able to hold two K1s, but that's two more than any EVO:RAIL appliance. Nutanix comes closest to offering a decent GPU-enabled VDI appliance, but its Sandy Bridge-generation NX-7000 is looking decidedly long in the tooth (I can't help but think that Nutanix isn't focusing on VDI as much as it used to).


Moonshot is a great platform, but its Intel Iris GPUs are not exactly workstation-class hardware, and it is, I suspect, rather expensive (I've no idea what the street price is). For GPU use to really take off, we are either going to have to find better ways to manage the imbalance of GPU resources to CPUs, or someone will have to bend some metal and launch some decent appliances with the space and power needed to support more GPU boards.



I don't agree with the notion that having a GPU means you're moving more pixels and therefore need more network bandwidth. In this case, I'm not talking about using GPUs to enable new apps that wouldn't have been used otherwise; rather, I'm talking about adding GPUs to environments where the users will keep doing whatever they've been doing before. A GPU-enabled web browser is going to do the same things regardless of whether the user has a GPU. It's just far easier on the CPU if a GPU is there.



Brian, couldn't agree with this more. The notion that GPUs are only for virtual-workstation, high-end graphics workloads is only part of the picture. The reality is that Windows Vista introduced fundamental GPU co-processing built into the OS. Word is GPU accelerated; you can disable graphics acceleration in Word and see dramatic shifts in CPU utilization. The cheapest device at Best Buy has a GPU in it, so why shouldn't an operating system such as Windows 7/8 have a GPU when it can fundamentally leverage one, and when ISVs have been coding their apps for literally 10 years to take advantage of the GPU? Windows 10 will double idle GPU RAM requirements; there won't be any more trimming of GPU assets. A general-purpose GPU, regardless of VDI profile, is fundamentally required; people need to realize this and build accordingly. We now have the capability with NVIDIA GRID and can address users across the spectrum. GPU FTW with VDI!



Yes, the base cost of a GRID card does not add a lot, but because of the reduced density of users per server (as Rick mentioned above), you then need to buy more hardware to support the same number of users. In other cases, e.g. an IBM/Lenovo Flex chassis, adding a GRID card to each blade halves the blade density for your chassis, meaning extra chassis and associated switches, etc., are required.


Also, potentially extra hypervisor licensing because you have more hosts, plus more power and cooling; there's a lot more to it than just the base cost of a GRID card, unfortunately.
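
To put hypothetical numbers on that (none of these figures come from the thread except the roughly $1,850 card price; adjust for your own environment), here's a quick sketch of how the density hit can dwarf the card cost:

    # Hypothetical sizing sketch: the GRID cards themselves are cheap compared with
    # the extra hosts (and per-host licensing) you buy when density drops.
    import math

    users_total = 1000
    users_per_host_no_gpu = 100      # assumed density without GPUs
    users_per_host_gpu = 80          # assumed density once GPU power/slot limits bite
    host_cost = 15000.0              # assumed cost per host (USD)
    license_per_host = 5000.0        # assumed hypervisor licensing per host (USD)
    grid_card_cost = 1850.0

    hosts_no_gpu = math.ceil(users_total / users_per_host_no_gpu)   # 10 hosts
    hosts_gpu = math.ceil(users_total / users_per_host_gpu)         # 13 hosts

    extra_host_cost = (hosts_gpu - hosts_no_gpu) * (host_cost + license_per_host)
    card_cost = hosts_gpu * grid_card_cost

    print(f"Extra hosts and licenses: ${extra_host_cost:,.0f}")     # $60,000
    print(f"GRID cards themselves:    ${card_cost:,.0f}")           # $24,050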



I don't think that users need GPUs all the time, but I do agree that users need them for more than just high-end engineering applications. I have set up a scenario using RES software where users can request access to vGPU resources. They can even choose the vGPU profile needed to complete the project they are working on and how long they need it. This allows companies to get greater density per host for traditional VDI while still allowing users to use those GPUs when they really need them. This may change as applications become more and more GPU intensive, but today, if the underlying hardware is solid, most users don't need a GPU for day-to-day tasks. So giving them the ability to request those resources on a case-by-case basis, without the need for IT intervention, works in a variety of scenarios. This lets companies get a great bang for their buck without sacrificing density on hosts for normal task/knowledge workers.



I think most of you above are looking at dedicated vGPUs. Yes, that's needed for engineering, CAD, and oil-and-gas-type applications, but lots of basic Office and Windows UI type applications benefit quite a bit from a shared GPU.


Shared GPU should become standard for VDI and dedicated only for those that need it.



I am torn... GPUs certainly improve the user experience, but the downside is not the GPU price. As noted above, because of "the reduced density of users per server... you then need to buy more hardware to support the same number of users," plus "potentially extra hypervisor licensing because you have more hosts, more power/cooling, there's a lot more to it than just the base cost of a GRID card unfortunately." I agree with these comments.


At Nutanix we offer GPU-ready nodes (we actually ship them with the GPU card pre-installed), and the uptake for such nodes has been unexciting compared to the high-density footprint provided by 4-node 2U boxes. I am sure customers are doing their TCO and ROI analysis before acquiring the solution; they are also doing POCs and asking users about their overall experience.


I also agree with Simon Bramfitt that Nutanix should have GPU-ready nodes with Haswell, and I am taking that up with Product Management. Nutanix has a full focus on VDI; we not only sell the solution, but through the services org we also consult with major enterprises doing major deployments.


I wrote an article on myvirtualcloud.net where I say the GPU is a nice add-on but not a requirement, and I stand by it for now. I know things are changing, but I don't see it as necessary for all VDI use cases when there are currently millions of VDI users happily running their desktops without a GPU. However, I do think the GPU should come natively as part of the processor or integrated on the motherboard in the future.


BTW – I have posted this from a VDI session without GPU, and using RDP8 only.



Great discussion. That's exactly why the assessment to determine how much GPU is right for each user is just as important as the assessment on which applications are required.


Of course, it's not a one-time thing, but really an on-going process of constantly looking at the data to respond to changing requirements and behavior and to provide a great user experience.


@RickBoyett: I am with Lakeside Software, and we have implemented various APIs since our version 7.0, which started shipping about 18 months ago. We do capture GPU utilization, frame buffer, memory, and other operational parameters, as well as the GPU hardware specs. We have developed several planning tools and reports that let organizations look at existing GPU loads and translate them into the proper vGPU profile. You should check it out when you have the chance, on our blogs or in the recordings of the GPU Tech Conference presentations from 2014 and 2015.



We have come to realize this as well. With the growing hardware acceleration trend in software, things like Office, Internet Explorer, Chrome, etc., are starting to run very poorly in a VDI environment. Our customers love Terminal Services for all the benefits it includes, but hate actually using it to browse the internet.


We will be POCing GRID on vSphere in the next several weeks on XenApp 6.5 and 7.6 to see if that's the right tool for the job.


One thing I hope to learn from the POC is the differences between the different modes of GPU presentation in vSphere. There doesn't seem to be a lot of work done on virtualized XenApp servers out there.



I agree with this assessment and have had Nvidia K1s running in 10 hosts for a 12+ month period. We have been able to increase user density per host. GPU benchmarking is non-existent at the moment, and determining exact CPU savings is difficult.


I also saw the "32 user" limit mentioned, and I haven't seen any evidence that 512MB per user is needed; we are currently using 256MB per user. One tip is that you can prioritize hardware acceleration at the global level in Horizon View for specific users, and also run software or vGPU pools depending on user requirements.



@TechMassey


Can you clarify what you're doing here? If you're using 256MB per user on a K1, then I assume you are running the K100 profile, which I didn't think was supported anymore, and even when it was, it was limited to 32 users per board.


Thanks


Simon


