NVIDIA GRID 2.0: Twice the resources and a new software platform (a video from VMworld 2015)

At VMworld, NVIDIA announced GRID 2.0, and as Jared Cowart describes in the above video, there’s a lot to talk about. First, NVIDIA has turned GRID into a software platform that’s separate from the hardware. The cards themselves are now branded under the Tesla name, which is now NVIDIA’s data center hardware brand. Let’s dig into the hardware first.

There are two new cards, the Tesla M6 and the Tesla M60. The M6 is an MXM card designed for use in individual server blades or in add-on mezzanine blades in supported chassis. Each M6 features a single Maxwell GPU with 1,536 cores and 8GB of memory. The M60 is a PCIe 3.0 dual-slot card with two Maxwell GPUs, totaling 4,096 cores and 16GB of memory. It replaces both the K1 and K2 cards, and NVIDIA’s marketing states that it can give you 2X what you had before.

Now you might be asking, “2X of what?” because that’s exactly what I asked! Basically, the messaging means that there are twice the resources available compared to what you had before. More cores, more memory, and a newer architecture add up to more performance. You can get 2X the number of users (though I’m positive your mileage will vary), or 2X the performance per user on the new platform, or some mix in between. Just to be clear, you’re not doubling the number of users AND doubling the performance.
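If it helps to picture that trade-off, here’s a minimal sketch that treats the “2X” claim as a resource budget you can split between user density and per-user performance. The numbers and the function are purely illustrative assumptions, not NVIDIA’s sizing math or benchmark results.

```python
# Treat NVIDIA's "2X the resources" claim as a budget you can split between
# user density and per-user performance. Purely illustrative numbers.
RESOURCE_MULTIPLIER = 2.0  # the headline claim vs. GRID 1.0

def split_the_budget(old_users, user_multiplier):
    """Return (new user count, rough per-user performance multiplier)."""
    new_users = old_users * user_multiplier
    per_user_performance = RESOURCE_MULTIPLIER / user_multiplier
    return new_users, per_user_performance

for user_multiplier in (1.0, 1.5, 2.0):
    users, perf = split_the_budget(old_users=16, user_multiplier=user_multiplier)
    print(f"{users:.0f} users at roughly {perf:.2f}x per-user performance")
```

The last line of output makes the same point as above: spending the whole budget on doubling your users leaves per-user performance roughly where it was.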

Though the M60 replaces the K1 and K2 cards, those aren’t going away. They’ll still be supported for the foreseeable future.

As for the new GRID 2.0 software platform, NVIDIA has decided to license GRID on a per-user basis as opposed to simply selling you a card and letting you use it as you see fit. As the hardware’s capabilities grow, I guess it makes sense: the card costs the same, but the number of users it supports grows, so NVIDIA is going to look for new ways to make money. I can see the community liking this as long as card prices come down (since there wasn’t any additional licensing in GRID 1.0), but if card prices stay the same and you have to add a licensing fee on top, the natives might get restless. I haven’t seen pricing for either the platform or the cards, so we’ll have to see what happens.

However that shakes out, we do have some details on the different pricing tiers of the GRID 2.0 platform. There will be three tiers covering different use cases, each delineated by the amount of framebuffer (vRAM) allocated to each user. The lowest tier offers up to 2GB per user and is intended for everyday, run-of-the-mill users. The middle tier offers up to 4GB per user and is aimed at middle-of-the-road use cases as well as Linux users (if you’re using Linux, you have to have the middle tier). Finally, the top tier can go as high as 8GB per user and adds support for CUDA and OpenCL. Below is a chart with more details about the differences between the tiers, which I grabbed from a data sheet describing the entire GRID 2.0 platform:
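Chart aside, the framebuffer figures alone let you do some rough user-density math. Here’s a quick sketch for a Tesla M60, assuming each user’s profile is carved out of a single GPU’s 8GB framebuffer and ignoring any overhead the vGPU manager reserves, so treat the results as upper bounds rather than sizing guidance.

```python
# Rough upper-bound user density for the three GRID 2.0 tiers on a Tesla M60
# (two GPUs, 8GB framebuffer each). Assumes profiles pack evenly into a single
# GPU's framebuffer and ignores reserved overhead -- illustration only.
PER_GPU_FRAMEBUFFER_GB = 8
GPUS_PER_BOARD = 2

tiers_gb_per_user = {
    "Low tier (up to 2GB/user)": 2,
    "Middle tier (up to 4GB/user)": 4,
    "Top tier (up to 8GB/user)": 8,
}

for tier, gb_per_user in tiers_gb_per_user.items():
    users_per_gpu = PER_GPU_FRAMEBUFFER_GB // gb_per_user
    users_per_board = users_per_gpu * GPUS_PER_BOARD
    print(f"{tier}: up to {users_per_board} users per M60 board")
```

In other words, an M60 tops out at around eight low-tier users per board by framebuffer alone, and the ceiling drops as you move up the tiers.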

It’s pretty amazing to see how far we’ve come with regard to graphics. Packing more performance onto these boards only furthers the case that every virtual desktop should have a slice of GPU attached to it, and depending on how much the hardware and licenses cost, we might finally start seeing that in more situations. If you have any more information to share on the cost, or on how these changes affect your environment, please share it in the comments.
