When we had Benny Tritsch and Tim Mangan on our podcast a few weeks ago, Benny mentioned the new N-series VM instances from Azure. These N-series VMs leverage NVIDIA’s Tesla line of cards, and since we haven’t written about them yet, I wanted to take a minute to explain them.
There are two different instances in the N-series family of VMs: NC and NV. They're both based on NVIDIA Tesla GPUs, but they serve very different purposes.
The NC instances available in Azure are designed to provide cloud-based GPU compute for data-intensive workloads that involve lots of parallel operations. We've all heard the examples: ray-traced rendering, deep learning, geologic data processing, and so on. These instances are based on the Tesla K80 card, whose 4,992 CUDA cores make it a parallel processing machine built for big data.
These NC instances are not for use with virtual desktops, so they have no real value to us, but if you're into big data and whatnot, they're there if you need them. You can get an NC instance with anywhere from six CPU cores and a single K80 to 24 CPU cores and four K80s, enough to make the propeller fly right off your hat.
Pricing on NC instances running Windows (which are still in preview at this time, and only available in the South Central US region) falls between $0.66/hr and $2.99/hr.
Here's where we get to the stuff that desktop virtualization people care about. Based on the Tesla M60, the NV instances from Azure come in VMs with 6, 12, or 24 CPU cores and 1, 2, or 4 M60 GPUs, respectively. At this point, though, the NV instances still use pass-through, which means you won't be able to slice those GPUs up (which makes sense, because to do that right now you'd have to run VMs inside your VMs).
Being pass-through only is not a show-stopper, though, because you can still use these VMs to host high-end workstation workloads or as RDSH servers. Pricing for the NV instances starts at $0.73/hr and runs up to $2.92/hr, which means that for around $1500 per year (2080 hrs/year multiplied by $0.73/hr) you can run the lowest level instance, which is still six CPU cores and a dedicated M60 card.
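If you want to play with that math yourself, here's a rough back-of-the-napkin cost sketch. It only uses the two rates cited above; the NV6/NV12/NV24 size names and the 2,080-hour business-hours year are assumptions on my part, so verify everything against Azure's current pricing page:

```python
# Rough annual-cost sketch for Azure NV instances. The size names and
# the preview rates below are assumptions based on this post -- check
# Azure's pricing page for current numbers.
NV_PRICING = {
    # size: (CPU cores, M60 GPUs, $/hr Windows preview rate)
    "NV6":  (6,  1, 0.73),
    "NV12": (12, 2, None),  # mid-tier rate not cited in the post
    "NV24": (24, 4, 2.92),
}

BUSINESS_HOURS_PER_YEAR = 2080  # 40 hrs/week * 52 weeks


def annual_cost(size: str, hours: int = BUSINESS_HOURS_PER_YEAR) -> float:
    """Estimated yearly cost if the VM runs only during business hours."""
    rate = NV_PRICING[size][2]
    if rate is None:
        raise ValueError(f"no rate cited for {size}")
    return round(rate * hours, 2)


print(annual_cost("NV6"))   # 2080 * 0.73 = 1518.40
print(annual_cost("NV24"))  # 2080 * 2.92 = 6073.60
```

Note that this assumes you shut the VM down outside business hours; leave it running 24/7 (8,760 hours) and the bottom-tier NV instance comes out to roughly four times that.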
Like NC, NV instances are also in preview at this time and only available in the South Central US region; however, you can expect to see them appear in other regions as we get closer to general availability.
I like this because it takes some of the complexity out of using GPUs (especially NVIDIA GPUs, with their separate hardware and software costs). We haven't yet seen widespread use of GPUs at DaaS providers, and what we have seen thus far usually (but not always) uses pass-through instead of GPU virtualization. Depending on pricing and packaging, this might not matter to you. If you're holding out, there is at least one company I'm aware of, Cloudalize, working on an entirely vGPU-based offering. You can find out more about them and follow their progress at gdaas.com.