Storage Decisions 2014 is taking place this week in NYC. I have a session today called "10 ways storage can make your VDI project fail," and I figured this would be a good topic for an article. (If you're here at Storage Decisions, you can see my presentation live at 11:45 in the Track 2 breakout room.)
Much of this content should already be familiar to regular BrianMadden.com readers, though it's nice to have everything in one place.
Here are the ten points I’ll make in the presentation, in random order. Note that these points are geared towards storage professionals, so when I talk about "VDI People," I'm talking about us! :)
1. Forgetting that desktop storage is different than server storage
Even though VDI is about virtual machines running on servers in a datacenter, it’s important to remember that the storage being built needs to support desktop VMs, not server VMs, and (in terms of storage) desktops are very different than servers.
For example, desktop I/O is much more evenly split between read and write operations whereas server storage tends to focus on optimizing reads. Also, desktop storage tends to be more “bursty,” with lots of operations required in quick succession as users do things, and then almost no operations while users type or read their emails.
2. Thinking you can’t support persistent
Modern enterprise storage can support fully persistent VDI no problem. (“Fully persistent VDI” means that each VDI virtual machine is unique, so if you have 100 VDI users then you have 100 separate VDI images.)
The latest trend in VDI is for companies to build persistent VDI environments, since that's the way desktop computing works today. (After all, 100 laptop users all have their own images, so why wouldn't VDI be the same?)
When VDI first entered the scene about eight years ago, there was all this talk about how enterprises could use “non persistent” images where all the users shared a single “gold” master image. It was sold as a management miracle since IT would only have to manage a single image. They loved the idea of just updating one image instead of 100 each time a hotfix or service pack came out.
Unfortunately, it didn’t work (at least not back then). Sure, the storage end of things worked, but it turns out that moving from a fully-persistent desktop environment “before” VDI to a fully shared environment “after” VDI was too big of a desktop management change, causing many VDI projects to fail. (It was too bad because most people thought these projects failed because VDI sucks, when in reality the reason they failed had nothing to do with VDI—they failed because the companies couldn’t figure out how to manage their new locked down desktops.)
Fortunately in 2014 the storage vendors have solved this problem, thanks to technologies like inline-dedupe and block-level single-instance storage. This means that even though the VDI system might see 100 separate disk image files sitting on a disk, behind the scenes each block is only stored once and shared by all the images that need it. This is also great for performance, since caching can be done at the block level too. Hundreds of VDI users with hundreds of gigabytes of disk images might be able to get 80% of their reads cached with only a few gigabytes of cache.
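To make the block-level single-instance idea concrete, here's a minimal sketch (not any vendor's actual implementation) that hashes fixed-size blocks across a set of cloned disk images. The numbers (100 images, a shared base, a tiny per-user delta) are illustrative assumptions.

```python
import hashlib

# Hypothetical illustration: 100 desktop images cloned from the same
# "gold" base, each with one block of unique user data. Block-level
# single-instance storage keeps only one copy of every distinct block.
BLOCK_SIZE = 4096  # bytes per block (a common storage block size)

def blocks_of(image: bytes):
    """Split an image into fixed-size blocks."""
    return [image[i:i + BLOCK_SIZE] for i in range(0, len(image), BLOCK_SIZE)]

# A shared base of 1000 distinct blocks, plus one unique block per user.
gold = b"".join(i.to_bytes(4, "big").ljust(BLOCK_SIZE, b"\0")
                for i in range(1000))
images = [gold + f"user-{u}".encode().ljust(BLOCK_SIZE, b"\0")
          for u in range(100)]

# What the VDI system "sees" vs. what's physically stored.
logical_blocks = sum(len(blocks_of(img)) for img in images)
unique_blocks = len({hashlib.sha256(b).digest()
                     for img in images for b in blocks_of(img)})

print(f"logical blocks stored: {logical_blocks}")  # 100 images x 1001 blocks
print(f"unique blocks stored:  {unique_blocks}")   # 1000 shared + 100 deltas
```

The same property is why the read cache goes so far: one cached copy of a hot shared block serves every image that references it.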
Unfortunately, even though these block-level dedupe technologies have been on the market for years (and just about every storage vendor offers them today), many people still believe that storage can't support persistent VDI images. That notion is just flat wrong, yet even in 2014 I walk into countless VDI projects where the company is trying to shoehorn its desktop environment into a non-persistent shared-image system, dooming the project to failure, simply because the storage people say they can't support persistent.
3. Focusing on the “average” IOPS
We all know that IOPS are expensive, so one of the ways that companies try to justify the storage decisions they make for their VDI projects is to say, "Well, even though we think that each user needs 50 IOPS, we have economies of scale since all our users are in the datacenter, so we'll just plan for an average of 10 IOPS per user since not all the users will need all those IOPS at the exact same time."
This is a guaranteed failure.
First, even 50 IOPS is too small for a desktop today. (After all, even the slowest magnetic laptop hard drive delivers about 60 IOPS, and yet we're all putting SSDs in our laptops. So why would we even try to start with less than 100 or 200?)
Second, even though individual users might seem random, they’re all “random” in the same way. Everyone starts working at more-or-less the same time. They all return from lunch at the same time. They all have the same quarter-end deadlines.
We have to design storage systems for VDI so that every user can get 100+ IOPS at all times.
Again, modern storage technologies can handle this no problem. The issue arises when people try to leverage their existing old-school storage investments rather than buying new (and appropriate) storage for VDI.
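The sizing trap above is simple to put in numbers. This back-of-the-envelope sketch uses illustrative assumptions (1,000 users, 10 IOPS "average" plan, 100 IOPS actual per-desktop peak), not measured figures from any real deployment.

```python
# Hypothetical sizing math: why planning for the "average" IOPS fails.
users = 1000
avg_iops_per_user = 10    # the optimistic "averaged" plan
peak_iops_per_user = 100  # what each desktop needs at 9am and after lunch

planned = users * avg_iops_per_user          # what the array was sized for
needed_at_peak = users * peak_iops_per_user  # demand when everyone logs in

print(f"array sized for the average: {planned:,} IOPS")        # 10,000
print(f"demand when everyone logs in: {needed_at_peak:,} IOPS")  # 100,000
print(f"shortfall at peak: {needed_at_peak // planned}x")        # 10x
```

Because the peaks are correlated (everyone logs in together), the averaging assumption never holds exactly when it matters most.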
4. Being too focused on “hardware” versus “software” vendors
I’ve heard a lot of talk from people who have preconceived notions about whether “software”-based storage solutions are better than “hardware”-based solutions, with some customers limiting their choice of storage vendors based on this.
Well guess what? Unless your storage vendor physically manufactures hard drives or has a fab plant to make memory chips, they’re all software vendors. Sure, some vendors sell you appliances, but that’s more like the vessel for their storage. (It’s like milk. When you go to the store, you buy milk, and they include the plastic jug as a matter of convenience.)
So don’t be scared away by the perception that a “software” or “hardware” storage vendor is better or worse, because in 2014, they’re all software vendors.
5. Forgetting hosted blades
HP’s recent “Moonshot” line of products is making a pretty big impact in the VDI space. If you haven’t heard of Moonshot, it’s basically a physical computing “cartridge” (like a modern-day blade) which, when used with VDI, gives a 1-to-1 user-to-cartridge ratio. Users run their VDI instances directly on the bare metal, meaning each user has their own CPU, memory, and (wait for it...) storage!
One of the nice things about Moonshot is that it’s just like VDI except without all the difficult capacity planning. You don’t have to worry about shared desktop storage because there isn’t any.
Of course HP isn’t the only company doing this kind of thing, but they’re the ones in the news. Whether you use HP or another “one user per blade” solution, these types of solutions mean you don’t have to give more than a cursory thought to how your storage is handled.
6. Letting VDI people make storage a scapegoat
VDI practitioners (like me!) love to hate storage. This comes from the fact that the storage of 2006 (when VDI was invented) couldn’t really support what the VDI industry wanted, but VDI people ignored that and tried to press ahead anyway.
Fast forward to 2014, when storage can actually do what VDI wants (at a price VDI wants), but now we’re dealing with eight years of VDI people hating storage. So as a storage professional you have an uphill battle to fight: I’ve found that at every little hiccup in VDI performance, people immediately assume it’s because of storage before they actually collect the facts.
7. Not knowing what you need
There’s that old expression about throwing sh*t at the wall and seeing what sticks. Based on the VDI projects I’ve seen, I’d say that accurately reflects the “storage strategy” of a good many VDI projects.
Obviously this is bad, both in cases where storage requirements for VDI are vastly underestimated and where they are vastly overestimated.
The reality is that since users’ “pre” VDI environment is based on hundreds of individual computers with hundreds of individual hard drives scattered all over the place, customers just flat out don’t know how many IOPS their VDI users will need. The only way to know for sure is to use a real assessment tool that can install some instrumentation on a “pre-VDI” desktop and collect data for a month or two. Unfortunately, even though they’ll spend millions of dollars on a VDI project, most customers don’t see the need for the added expense of getting good data going in. (And just like deferring routine health care leads to more expensive medical treatments down the road, not truly understanding your company’s desktop storage needs up front just leads to overspending or underperforming VDI environments.)
8. Freaking out at capacity requirements
If I had a dollar for every time I saw a storage vendor’s presentation that focused on storage capacity, I would be a rich man!
Seriously, I can’t understand how they get away with this. You see presentations saying things like, “All your traditional desktops and laptops have 500GB hard drives, so if you want to do 1000 users for VDI then you need to figure out how to support 500TB of storage!”
First, laptop users don’t count, because no one is converting a laptop user to a VDI user. (If a user has a laptop it’s because they have to work offline and from other who-knows-where locations, and those are not the kind of users you want to convert to VDI.)
And for the desktops users that companies do consider for VDI, remember that all of their files and user profiles and stuff are stored on network file shares, and with VDI, that doesn’t change! Seriously, you don’t need to figure out how your VDI storage is going to hold 100 gigs of My Documents per user because those files are not going into your VDI system.
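Here's the capacity math spelled out, using illustrative assumptions (a 40GB OS image and a conservative 10:1 dedupe ratio are my guesses for the sketch, not vendor figures):

```python
# Hypothetical capacity math for the "500 TB" scare slide.
users = 1000
naive_disk_gb = 500   # the slide's per-desktop hard drive size
os_image_gb = 40      # the OS image once user files live on file shares
dedupe_ratio = 10     # shared OS blocks stored once (a conservative guess)

naive_tb = users * naive_disk_gb / 1000
real_tb = users * os_image_gb / 1000 / dedupe_ratio

print(f"scare-slide capacity: {naive_tb:.0f} TB")                        # 500 TB
print(f"with profiles on shares and block dedupe: {real_tb:.0f} TB")     # 4 TB
```

Even if you quibble with the exact ratios, the point stands: the raw sum of local hard drive sizes has nothing to do with what your VDI array actually needs to hold.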
9. Not using modern storage technology
I’ve already touched on this a bit, but it’s important and deserves its own mention. The storage solutions of today are very different than what was available even two years ago. Modern storage can support VDI (even fully persistent VDI) with a level of performance and at a price point that was unachievable a few years ago. If you bought the storage system you want to use for VDI more than two years ago, it won’t work. (That said, if you bought a storage solution that was bundled with hardware, you might be able to do a software upgrade to add the modern features that VDI needs.)
10. Being scared of storage
Again, in 2014 we have the storage for VDI problem licked. The vendors are awesome. The products are awesome. VDI is awesome. Sure, VDI today is still not (and never will be) the right solution for everyone. Heck, it may never see more than 10% overall penetration (which is fine)!
The most important thing today though is that when you approach a VDI project, you should not be afraid of storage. Today’s storage solutions can deliver amazing performance for well under $100 per user (software and hardware), with many turnkey solutions coming in at under $50 per user. So don’t let storage ruin your VDI project, and certainly don’t be afraid of VDI storage in today’s world!