A month into 2016, I’ve realized that every speech and webcast I’ve done so far this year was about Hyperconverged Infrastructure (HCI) and why it makes sense for VDI. HCI is a hardware/software solution (like from Nutanix, Simplivity, Atlantis, and others) where you use small, scalable, self-contained nodes that each contain CPU, memory, and storage. All the nodes seamlessly work together to create what’s essentially a datacenter-in-a-box, and you can add nodes at any time and the environment grows to include them. All the hardware management is done automatically, and you essentially have a huge pool of CPU, memory, and storage that you can slice-and-dice into VMs as you see fit.
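That “pool that grows as you add nodes” idea can be sketched in a few lines. This is a toy model, not any vendor’s actual API, and all the per-node capacity figures are hypothetical:

```python
# Toy model of HCI linear scale-out: each self-contained node contributes
# CPU, memory, and storage to one shared pool, and adding a node simply
# grows the pool. All capacity numbers below are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Node:
    cores: int = 24        # hypothetical per-node CPU cores
    ram_gb: int = 512      # hypothetical per-node RAM
    storage_tb: int = 10   # hypothetical per-node storage

def pool_capacity(nodes):
    """Aggregate every node's resources into one big pool."""
    return {
        "cores": sum(n.cores for n in nodes),
        "ram_gb": sum(n.ram_gb for n in nodes),
        "storage_tb": sum(n.storage_tb for n in nodes),
    }

cluster = [Node() for _ in range(4)]
print(pool_capacity(cluster))   # the 4-node pool

cluster.append(Node())          # scaling out is just adding a node
print(pool_capacity(cluster))   # the pool grows linearly
```

The point of the sketch is the shape of the operation: there’s no re-architecting when you grow, just one more identical node joining the pool.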
I’ll admit that I was skeptical of HCI at first. I kind of thought, “Well it’s great that this hardware is easy to use, but really you’re trying to sell metal boxes that get installed on-premises when clearly the entire world is moving towards the cloud. So… good luck with all that!”
A year ago I wrote the article, Here’s the single reason enterprises aren’t flocking to DaaS. (It’s still true today.) The gist is that the enterprise desktop is not an app, rather it is a complex orchestrated mashup of applications, data, server connections, file shares, scripts, domain controllers, profiles, policies, and about a million other moving pieces. While it’s trivial to move an “app” to the cloud like email, file sharing, or CRM, moving an entire enterprise desktop to the cloud is brutal. (Just getting a desktop up and running in the cloud is simple. Now how do users log in? Do you move a domain controller there? What about their files? What about their servers? Do you move those too? What about the users who are left behind? Do they access servers from the cloud? Do you run servers in both places and replicate?)
It’s too bad, too, because now that VDI is much more capable thanks to cheap storage that supports persistent images, real GPUs in servers, and modern app management that makes non-persistent images possible, VDI is actually quite usable. (Not to say that VDI is appropriate for every situation, but rather that in 2016, if you want to do VDI, the technology won’t let you down.)
This is where HCI comes in.
HCI gives you the ability to have many of the benefits of cloud-hosted computing (linear scalability, simple configuration) in a box you put on premises (which, for desktops, is crucial). HCI solutions today can also support persistent disk images for VDI, Teradici APEX cards, and Nvidia GRID GPUs.
The only downside I hear is that “this is a rip-and-replace solution,” meaning that HCI hardware is its own contained thing and you can’t really leverage what you’ve already bought. Though when you’re building a VDI environment, you’re typically buying all new hardware anyway, so I don’t see that as a big showstopper. (And besides, most of the HCI vendors let you integrate existing servers as compute nodes if you want to, so that argument doesn’t really hold water.)
But seriously, HCI is about on-premises. Does that really make sense today? When it comes to VDI in the enterprise, yes. It’s a must. But even in the broader scope, it tends to make sense. I go back to Benny Tritsch’s prediction from a few years ago about the pendulum of cloud-versus-on-premises swinging back towards on-prem. The problem with the cloud is that while it’s cheap, it’s also not flexible. Cloud vendors make their money by making all their services as uniform as possible. (“Rack-em and stack-em!”) But as soon as you start adding customizations, many of the advantages of the cloud evaporate.
We can use Nicholas Carr’s famous “electricity as a service” example from his 2008 book, The Big Switch, which has become a classic argument for the cloud. In it, Carr argued that computing was like electricity. In the old days (100 years ago), factories that needed electricity bought their own generators and made their own electricity. Then as electricity became more important, electrical utilities were created and people bought their electricity as a service. (EaaS?) Doing so was cheaper and more reliable than factories all generating their own electricity, just like cloud services were for computing.
But since 2008, what’s happened? Now we have solar and wind power, and millions of prior electrical service customers (like me since 2009) have installed solar and are generating their own electricity. Why? It’s cheaper in the long run, fallen trees across the street don’t cut my power, and I don’t have to worry about peak usage fees and Smart Meters and all the politics of buying my electricity from a for-profit utility provider.
The same is true for on-premises computing. Maybe it took a few years of cloud hype to make us appreciate the value of having a basement full of metal boxes. But they’re my boxes, and they sit right next to my servers and my users. VDI is complex enough, and locating my servers on-premises removes a whole slew of logistical complexity versus putting users in the cloud. Toss in HCI and you’ve got a solution that’s scalable in small chunks, can grow from a few hundred to tens of thousands of users, and is as easy to manage as any cloud-based infrastructure console.
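The “scalable in small chunks, from a few hundred to tens of thousands of users” claim is really just back-of-the-envelope arithmetic. Here’s a hedged sketch; every per-desktop requirement, overcommit ratio, and per-node capacity below is a hypothetical example, not a real sizing guide:

```python
# Back-of-envelope VDI sizing on HCI nodes. All figures are hypothetical
# assumptions for illustration; real sizing depends on your workload.
import math

DESKTOP_RAM_GB = 4     # assumed RAM per virtual desktop
DESKTOP_VCPU = 2       # assumed vCPUs per virtual desktop
VCPU_PER_CORE = 6      # assumed CPU overcommit ratio

NODE_RAM_GB = 512      # hypothetical per-node RAM
NODE_CORES = 24        # hypothetical per-node cores

def desktops_per_node():
    by_ram = NODE_RAM_GB // DESKTOP_RAM_GB
    by_cpu = (NODE_CORES * VCPU_PER_CORE) // DESKTOP_VCPU
    return min(by_ram, by_cpu)   # the tighter constraint wins

def nodes_needed(users, spare=True):
    n = math.ceil(users / desktops_per_node())
    return n + 1 if spare else n   # keep one spare node for failover

print(desktops_per_node())     # desktops one node can host
print(nodes_needed(500))       # nodes for a few hundred users
print(nodes_needed(10_000))    # same arithmetic at larger scale
```

The math is identical at 500 users and at 10,000; the only thing that changes is how many identical nodes you rack, which is exactly the linear-scalability argument.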
To be clear, I’m not suggesting the cloud doesn’t have its place or that all applications should be local. But when it comes to VDI, keep it in the cave, not the cloud. And use HCI.