Intro to Elastic GPUs, a new feature of AWS EC2

Until now, GPUs were tied to specific hardware instances, meaning you paid for them even if you didn't run apps that used the GPU. With Elastic GPUs, AWS has introduced pay-as-you-go GPUs.

Delivering GPU-enabled desktops from the cloud has, until now, involved using dedicated GPU instances for each user. This is acceptable in higher-end use cases, but it becomes difficult to justify the additional expense for more typical rank-and-file users. AWS has taken a step towards the next era of cloud-based virtual GPUs by releasing Elastic GPUs as part of EC2.

Elastic GPUs allow you to give exactly as much GPU as a user needs (for certain workloads…more on that later), and you only pay for what you use. Think of it this way: AWS has a giant bucket of GPU resources, and your users pay as little as $0.05 per hour for a tiny slice of it, as opposed to $0.76 per hour for the cheapest traditional AWS GPU option.

The interesting bit about how this works is that rather than directly attaching a GPU to each VM instance, an Elastic GPU works by intercepting OpenGL calls in the graphics driver and shipping them across the network to AWS's Elastic GPU platform. There, the GPUs do their thing before sending the rendered results back across the network to the desktop or application.
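From the instance-launching side, the Elastic GPU is requested alongside the instance rather than baked into the instance type. A hedged sketch of what that looks like with boto3's `run_instances` keywords is below; the `build_launch_request` helper and the AMI ID are hypothetical illustrations, and the `ElasticGpuSpecification` parameter and `eg1` size names should be verified against current AWS documentation.

```python
# Hypothetical sketch: assembling the keyword arguments for
# boto3 ec2.run_instances() so that an Elastic GPU is attached at launch.
# The AMI ID below is a placeholder; the ElasticGpuSpecification parameter
# and eg1.* size names come from the EC2 API at the time of writing.

def build_launch_request(image_id, instance_type="t2.medium",
                         elastic_gpu_type="eg1.medium"):
    """Return kwargs for ec2.run_instances() that request an Elastic GPU."""
    return {
        "ImageId": image_id,
        "InstanceType": instance_type,  # the CPU/RAM side, sized independently
        "MinCount": 1,
        "MaxCount": 1,
        # The GPU is a separate, network-attached resource:
        "ElasticGpuSpecification": [{"Type": elastic_gpu_type}],
    }

# You would then pass this to boto3.client("ec2").run_instances(**request).
request = build_launch_request("ami-12345678", elastic_gpu_type="eg1.xlarge")
print(request["ElasticGpuSpecification"])
```

The point the sketch illustrates is the decoupling: the CPU/RAM instance type and the GPU size are chosen independently in the same launch call.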

GPU performance is limited by the Elastic GPU size assigned to a VM, and there are four sizes with varying amounts of GPU memory, from 1GiB ($0.05 per hour) to 8GiB ($0.40 per hour). By decoupling the GPU from the hardware instance, you can mix and match the amount of GPU deployed to your users on a case-by-case basis, so you have more flexibility and lower costs than ever before.
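The pricing gap is easiest to see with some back-of-the-envelope math. The sketch below uses only the hourly figures mentioned in this article ($0.05 and $0.40 for the smallest and largest Elastic GPU sizes, $0.76 for the cheapest traditional GPU instance); the two intermediate tier prices are assumptions scaled linearly and should be checked against AWS's pricing page.

```python
# Back-of-the-envelope GPU cost comparison for one user's standard month.
# Prices for the 1GiB and 8GiB tiers and the cheapest dedicated GPU
# instance come from the article; the 2GiB and 4GiB prices are assumed
# to scale linearly -- verify against AWS's pricing page.

ELASTIC_GPU_HOURLY = {   # GiB of GPU memory -> $/hour
    1: 0.05,
    2: 0.10,  # assumed
    4: 0.20,  # assumed
    8: 0.40,
}
DEDICATED_GPU_HOURLY = 0.76  # cheapest traditional GPU instance

def monthly_gpu_cost(gib, hours_per_day=8, workdays=22):
    """GPU-only cost for one user over a typical working month."""
    return ELASTIC_GPU_HOURLY[gib] * hours_per_day * workdays

light = monthly_gpu_cost(1)                # 0.05 * 176 hours = $8.80
dedicated = DEDICATED_GPU_HOURLY * 8 * 22  # 0.76 * 176 hours = $133.76
print(f"elastic 1GiB: ${light:.2f}/month vs dedicated: ${dedicated:.2f}/month")
```

Even the largest Elastic GPU size comes out at roughly half the hourly rate of the cheapest dedicated GPU instance, which is what makes per-user right-sizing attractive.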

The biggest limitation of Elastic GPUs is that the platform only supports OpenGL workloads (version 3.3 and below). That means you're really only using this for end users who run specific applications that demand OpenGL for graphics rendering. Elastic GPUs will not work with DirectX or any other standard GPU APIs, which unfortunately means that the GPU-accelerated bits of Microsoft Windows 10, Edge, and Office 365 won't get any help. There is also no GPU-accelerated video encoding.
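The compatibility rule above is simple enough to state as code. This is purely an illustration of the constraint, not an AWS API; the function name and the workload list are hypothetical.

```python
# Minimal sketch of the compatibility rule: Elastic GPUs accelerate OpenGL
# up to version 3.3 only, so DirectX workloads and newer OpenGL profiles
# get no help. Hypothetical helper for illustration, not an AWS API.

MAX_SUPPORTED_OPENGL = (3, 3)

def elastic_gpu_eligible(api, version=(0, 0)):
    """True if a workload's graphics API can be served by an Elastic GPU."""
    return api == "opengl" and version <= MAX_SUPPORTED_OPENGL

workloads = [
    ("OpenGL 3.0 CAD viewer", "opengl", (3, 0)),
    ("DirectX 11 application", "directx", (11, 0)),
    ("OpenGL 4.5 application", "opengl", (4, 5)),
]
for name, api, ver in workloads:
    status = "eligible" if elastic_gpu_eligible(api, ver) else "not supported"
    print(f"{name}: {status}")
```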

As you can imagine, leveraging the network means you also have to closely watch the amount of bandwidth allocated to hardware instances to ensure that there's enough available for the user, applications, and now graphics rendering.

According to Frame, which has been testing Elastic GPUs for a while and currently supports them on its DaaS platform, results have been good with Google Earth, ANSYS AIM, Solid Edge, and Photoshop, all of which use OpenGL. Frame says a CPU instance with 16GB of RAM combined with a 4GB Elastic GPU performs as well as a 16GB g2.2xlarge GPU instance, but at about half the cost!

While Elastic GPUs are very cool and will certainly be helpful to organizations that need to support OpenGL applications, I hope the next step is to make the platform work with DirectX and other GPU APIs. That would enable countless other opportunities and use cases, including the ability to deliver a desktop that "feels normal" to average, everyday end users for a fraction of the price that it would cost to do that today. I have no idea if that's possible with this platform, so I'm off to figure that out!
