10 ways storage can make your VDI project fail

Storage Decisions 2014 is taking place this week in NYC. I have a session today called "10 ways storage can make your VDI project fail," and I figured this would be a good topic for an article. (If you're here at Storage Decisions, you can see my presentation live at 11:45 in the Track 2 breakout room.)

Much of this content should already be familiar to regular BrianMadden.com readers, though it's nice to have everything in one place.

Here are the ten points I’ll make in the presentation, in random order. Note that these points are geared towards storage professionals, so when I talk about "VDI People," I'm talking about us! :)

1. Forgetting that desktop storage is different than server storage

Even though VDI is about virtual machines running on servers in a datacenter, it’s important to remember that the storage being built needs to support desktop VMs, not server VMs, and (in terms of storage) desktops are very different than servers.

For example, desktop I/O is much more evenly split between read and write operations whereas server storage tends to focus on optimizing reads. Also, desktop storage tends to be more “bursty,” with lots of operations required in quick succession as users do things, and then almost no operations while users type or read their emails.
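To make the contrast concrete, here's a toy sketch of the two profiles. The traces and numbers below are invented purely for illustration, not real measurements:

```python
# Sketch: compare a "server-like" vs a "desktop-like" I/O trace.
# Each trace is a list of (op, iops) samples, where op is 'r' or 'w'.

def summarize(trace):
    """Return the write share and the peak-to-average burstiness of a trace."""
    reads = sum(iops for op, iops in trace if op == "r")
    writes = sum(iops for op, iops in trace if op == "w")
    total = reads + writes
    peak = max(iops for _, iops in trace)
    avg = total / len(trace)
    return {
        "write_pct": round(100 * writes / total),
        "peak_to_avg": round(peak / avg, 1),
    }

# Desktop: roughly even read/write mix, big bursts (app launch), then idle.
desktop = [("w", 80), ("r", 120), ("w", 5), ("r", 2), ("w", 90), ("r", 3)]
# Server: read-heavy and comparatively steady.
server = [("r", 60), ("r", 55), ("w", 10), ("r", 65), ("r", 58), ("w", 12)]

print(summarize(desktop))  # high write share, high peak-to-average ratio
print(summarize(server))   # read-dominated, flatter profile
```

The point isn't the exact values; it's that a desktop trace is write-heavy and bursty, which is exactly what read-optimized server storage isn't built for.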

2. Thinking you can’t support persistent

Modern enterprise storage can support fully persistent VDI no problem. (“Fully persistent VDI” means that each VDI virtual machine is unique, so if you have 100 VDI users then you have 100 separate VDI images.)

The latest trends in VDI are for companies to build persistent VDI environments since that’s the way that desktop computing works today. (After all, 100 laptop users all have their own images, so why wouldn’t VDI be the same?)

When VDI first entered the scene about eight years ago, there was all this talk about how enterprises could use “non persistent” images where all the users shared a single “gold” master image. It was sold as a management miracle since IT would only have to manage a single image. They loved the idea of just updating one image instead of 100 each time a hotfix or service pack came out.

Unfortunately, it didn’t work (at least not back then). Sure, the storage end of things worked, but it turns out that moving from a fully-persistent desktop environment “before” VDI to a fully shared environment “after” VDI was too big of a desktop management change, causing many VDI projects to fail. (It was too bad because most people thought these projects failed because VDI sucks, when in reality the reason they failed had nothing to do with VDI—they failed because the companies couldn’t figure out how to manage their new locked down desktops.)

Fortunately in 2014 the storage vendors have solved this problem, thanks to technologies like inline-dedupe and block-level single-instance storage. This means that even though the VDI system might see 100 separate disk image files sitting on a disk, behind the scenes each block is only stored once and shared by all the images that need it. This is also great for performance, since caching can be done at the block level too. Hundreds of VDI users with hundreds of gigabytes of disk images might be able to get 80% of their reads cached with only a few gigabytes of cache.
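Here's a toy model of that block-level single-instance idea. The `DedupeStore` class, the block size, and the image contents are all invented for illustration; real arrays do this in firmware, inline, at scale:

```python
import hashlib

BLOCK = 4096  # illustrative block size

class DedupeStore:
    """Toy block-level single-instance store: each unique block is kept once."""
    def __init__(self):
        self.blocks = {}   # hash -> block data (stored exactly once)
        self.images = {}   # image name -> ordered list of block hashes

    def write_image(self, name, data):
        hashes = []
        for i in range(0, len(data), BLOCK):
            chunk = data[i:i + BLOCK]
            h = hashlib.sha256(chunk).hexdigest()
            self.blocks.setdefault(h, chunk)  # only stored if not seen before
            hashes.append(h)
        self.images[name] = hashes

    def physical_bytes(self):   # what's actually on disk
        return sum(len(b) for b in self.blocks.values())

    def logical_bytes(self):    # what the VDI hosts think is on disk
        return sum(len(self.blocks[h]) for hs in self.images.values() for h in hs)

store = DedupeStore()
base = b"windows" * 10_000            # stand-in for the shared OS bits
for n in range(100):                  # 100 "persistent" images, mostly identical
    store.write_image(f"vm{n}", base + f"user{n}".encode())

print(store.logical_bytes() // store.physical_bytes())  # large dedupe ratio
```

One hundred logically separate images collapse to the shared OS blocks plus each user's unique tail, which is also why a few gigabytes of block-level cache can absorb most of the reads.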

Unfortunately even though these block-level dedupe technologies have been on the market for years (and even though just about every storage vendor can do this today), there’s still a perception by many people that storage can’t support persistent VDI images. The reality is that notion is just flat wrong, but even in 2014 I walk into countless VDI projects where the company is trying to shoehorn their desktop environment into a non-persistent shared image system, dooming their project to failure, because the storage people say they can’t support persistent.

3. Focusing on the “average” IOPS

We all know that IOPS are expensive, so one of the ways that companies try to justify the storage decisions they make for their VDI projects is to say, "Well, even though we think that each user needs 50 IOPS, we have economies of scale since all our users are in the datacenter, so we’ll just plan for an average of 10 IOPS per user since not all the users will need all those IOPS at the exact same time."

This is a guaranteed failure.

First, even 50 IOPS is too small for a desktop today. (After all, even the slowest magnetic laptop hard drive delivers about 60 IOPS, and yet we’re all putting SSDs in our laptops. So why would we even try to start with less than 100 or 200?)

Second, even though individual users might seem random, they’re all “random” in the same way. Everyone starts working at more-or-less the same time. They all return from lunch at the same time. They all have the same quarter-end deadlines.

We have to design storage systems for VDI so that every user can get 100+ IOPS at all times.

Again, modern storage technologies can handle this no problem. The issue arises when people try to leverage their existing old school storage investments rather than buying new (and appropriate) storage for VDI.
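A quick back-of-the-envelope sketch shows how badly sizing to the average misses the 9am peak. Every number here is an illustrative assumption, not a measurement:

```python
# Why "plan for the average" fails: everyone is "random" in the same way.
users = 1000
avg_iops_per_user = 10       # the tempting averaged-out plan
steady_iops_per_user = 100   # what each active desktop really wants
concurrency = 0.8            # 9am: most users log in at roughly the same time

planned = users * avg_iops_per_user
needed_at_peak = int(users * concurrency * steady_iops_per_user)

print(f"planned capacity:  {planned:>7} IOPS")
print(f"9am peak demand:   {needed_at_peak:>7} IOPS")
print(f"shortfall factor:  {needed_at_peak / planned:.0f}x")
```

With these (made-up but not unrealistic) numbers, the averaged plan comes up 8x short exactly when everyone is watching.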

4. Being too focused on “hardware” versus “software” vendors

I’ve heard a lot of talk from people who have preconceived notions about whether “software”-based storage solutions are better than “hardware”-based solutions, with some customers limiting their choice of storage vendors based on this.

Well guess what? Unless your storage vendor physically manufactures hard drives or has a fab plant to make memory chips, they’re all software vendors. Sure, some vendors sell you appliances, but that’s more like the vessel for their storage. (It’s like milk. When you go to the store, you buy milk, and they include the plastic jug as a matter of convenience.)

So don’t be scared away by the perception that a “software” or “hardware” storage vendor is better or worse, because in 2014, they’re all software vendors.

5. Forgetting hosted blades

HP’s recent “Moonshot” line of products is making a pretty big impact in the VDI space. If you haven’t heard of Moonshot, it’s basically a physical computing “cartridge” (like a modern day blade) which, when used with VDI, gives a 1-to-1 user-to-cartridge ratio. Users run their VDI instances directly on the bare metal, meaning each user has their own CPU, memory, and (wait for it...) storage!

One of the nice things about Moonshot is that it’s just like VDI except without all the difficult capacity planning. You don’t have to worry about shared desktop storage because there isn’t any.

Of course HP isn’t the only company doing this kind of thing, but they’re the ones who are in the news. Whether you use HP or another “one user per blade” solution, these types of solutions mean you don’t have to give more than a cursory thought to how your storage is handled.

6. Letting VDI people make storage a scapegoat

VDI practitioners (like me!) love to hate storage. This comes from the fact that the storage of 2006 (when VDI was invented) couldn’t really support what the VDI industry wanted, but they ignored that and pressed ahead with VDI anyway.

Fast forward to 2014, when storage can actually do what VDI wants (at a price VDI wants), but now we’re dealing with eight years of VDI people hating storage. So as a storage professional you have an uphill battle to fight, and I’ve found that with every little hiccup in VDI performance, people immediately assume it’s because of storage before they actually collect all the facts.

7. Not knowing what you need

There’s that old expression about throwing sh*t at the wall and seeing what sticks. Based on the VDI projects I’ve seen, I’d say that accurately reflects the “storage strategy” of a good many VDI projects.

Obviously this is bad, both in cases where storage requirements for VDI are vastly underestimated and vastly overestimated.

The reality is that since users’ “pre” VDI environment is based on hundreds of individual computers with hundreds of individual hard drives scattered all over the place, customers just flat out don’t know how many IOPS their VDI users will need. The only way to know for sure is to use a real assessment tool that can install some instrumentation on a “pre-VDI” desktop and collect data for a month or two. Unfortunately even though they’ll spend millions of dollars on a VDI project, most customers don’t see the need for the added expense of getting good data going in. (And just like deferring routine health care leads to more expensive medical treatments down the road, not truly understanding your company’s desktop storage needs up front just leads to overspending or underperforming VDI environments.)
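As a sketch of what that assessment data buys you: once you have real per-desktop samples, you size to a high percentile rather than the mean. The sample generator below is a made-up stand-in for a real assessment tool's output:

```python
import random
import statistics

random.seed(42)

def fake_samples(n=2000):
    """Crude stand-in for assessment data: mostly idle, occasional bursts."""
    return [random.choice([2, 5, 8]) if random.random() < 0.9
            else random.randint(150, 400) for _ in range(n)]

samples = fake_samples()
mean = statistics.mean(samples)
p95 = statistics.quantiles(samples, n=100)[94]  # 95th percentile cut point

print(f"mean: {mean:.0f} IOPS, 95th percentile: {p95:.0f} IOPS")
# Sizing to the mean would starve users during every burst; size to ~p95.
```

The skew is the whole story: the mean lands down in the idle noise, while the 95th percentile lands up in the bursts, which is where user experience actually lives.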

8. Freaking out at capacity requirements

If I had a dollar for every time I saw a storage vendor’s presentation that focused on storage capacity, I would be a rich man!

Seriously I can’t understand how they get away with this. You see presentations saying things like, “All your traditional desktops and laptops have 500GB hard drives, so if you want to do 1000 users for VDI then you need to figure out how to support 500TB of storage!”

First, laptop users don’t count, because no one is converting a laptop user to a VDI user. (If a user has a laptop it’s because they have to work offline and from other who-knows-where locations, and those are not the kind of users you want to convert to VDI.)

And for the desktop users that companies do consider for VDI, remember that all of their files and user profiles and stuff are stored on network file shares, and with VDI, that doesn’t change! Seriously, you don’t need to figure out how your VDI storage is going to hold 100 gigs of My Documents per user because those files are not going into your VDI system.
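The arithmetic gap between the vendor pitch and reality is worth spelling out. All of these numbers are illustrative assumptions, not quotes from any vendor:

```python
# Naive vs realistic VDI capacity estimate (illustrative numbers only).
users = 1000
naive_gb_per_user = 500   # the "every laptop has a 500GB drive" pitch
os_image_gb = 40          # what actually lives in the VDI image itself
dedupe_ratio = 10         # block-level dedupe across near-identical images
# User files and profiles stay on the network shares they already live on.

naive_tb = users * naive_gb_per_user / 1000
realistic_tb = users * os_image_gb / dedupe_ratio / 1000

print(f"vendor-pitch estimate: {naive_tb:.0f} TB")
print(f"deduped OS images:     {realistic_tb:.0f} TB")
```

Two orders of magnitude of difference, simply from not counting data that never enters the VDI system and letting dedupe do its job.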

9. Not using modern storage technology

I’ve already touched on this a bit, but it’s important and deserves its own mention. The storage solutions of today’s world are very different than what was available even two years ago. Modern storage can support VDI (even fully persistent VDI) with a level of performance and at a price point that was unachievable a few years ago. If you bought the storage system you want to use for VDI more than two years ago, it won’t work. (Though even if you bought a storage solution that was bundled with hardware, you might be able to do a software upgrade to add the modern features that VDI needs.)

10. Being scared of storage

Again, in 2014 we have the storage for VDI problem licked. The vendors are awesome. The products are awesome. VDI is awesome. Sure, VDI today is still not (and never will be) the right solution for everyone. Heck, it may never see more than 10% overall penetration (which is fine)!

The most important thing today though is that when you approach a VDI project, you should not be afraid of storage. Today’s storage solutions can deliver amazing performance for well under $100 per user (software and hardware), with many turnkey solutions coming in at under $50 per user. So don’t let storage ruin your VDI project, and certainly don’t be afraid of VDI storage in today’s world!

Join the conversation

9 comments


7. Not knowing what you need.


I see this as the biggest killer in the partner/SI space when they are trying to deliver VDI.


These guys are all about fixed price contracts and spending as little as possible to make a buck or two. These are also the guys who (IMHO) are doing the lion's share of large (>1000) seat VDI projects.


This is a recipe for disaster, as at the bid stage they are reluctant (for good reason) to spend any money on really understanding exactly what they are bidding for.  If they win the business they then have to build a solution to an agreed cost, a cost which didn't really include any analysis work.  At the delivery stage they then have to cut serious corners in order to deliver on time and within budget, so what corners get cut...yep, all of the discovery, analysis and planning which would have made the project successful. The real killer is that if they had been honest and told their customer about the analysis work required and how much this would cost, they would have scared the customer witless and lost the deal to the competition, who were less honest and will probably end up out of pocket once things have gone seriously wrong !!


So there's another major reason why VDI projects fail.  A large percentage of the big ones are delivered by companies whose only incentive is to deliver as quickly as possible and within a tight budget, without bothering to gather the data they need to do things properly !!



A minor correction for you Brian.


The Moonshot with the m700 carts and XenDesktop supports up to a 4-to-1 user-to-cartridge ratio.


The m700 has four AMD APUs.  Each can run a single Win7 instance.  There are 45 carts per Moonshot unit for a total of 180 Win7 instances per chassis.



One point on number 2.  As an implementer I am not seeing any form of major push for a persistent environment.


I would say 90-95% of all desktops, whether desktop or server OS, have been non-persistent.  Even down to developers.


So persistent is more the exception in the world, not the rule.



@stucco WTF are you talking about. Persistent is the easiest way to start. Non-persistent is only starting to become viable for the masses unless you count XenApp.



@Stucco


Have to agree with appdetective, all of the large (5-30K seat) deployments I've seen which use VDI have been persistent.


It's easier just to move the problem to the data centre and keep using all of your old tried and tested desktop and app management tools.


From a political standpoint however, once you've taken that 1st persistent step, hearts and minds have been won and the non-persistent use case is easier to sell !!



With PVS for XenApp/XenDesktop how can it not be easier?  Standard corporate build with all the apps necessary to do the job, bang, out the door.


Now with View and the overhead for recompose, then yes, persistent is way easier.  But with the PVS system you can go from 1 desktop to 10,000 in less than an hour.


I know because I have done it, many many many times.


And it makes DR of your environment even easier: replicate the PVS store to another site, set up another XA/XD studio, publish some more desktops, bang, active:active or active:passive.


Also, the majority of VDI being rolled out is still XenApp desktops.  Why not?  The only need for Windows 7 is based on software or hardware limitations, and that is still the minority of needs for the day-to-day users.


What applications do users really need to setup in their environments?  And if end users need these applications, why aren't they being handled by IT?



@appdetective @help4ctx


I'm with stucco :)


Seen way more NP than P VDI, esp when including XenApp, which I see no reason not to, but also when looking at VDI alone.


Arguably those who opt for NP VDI were likely better suited with XenApp to begin with but that is a whole other discussion.



I think the discussion is specifically about HVD and not HSD.


The use of XenApp is another matter altogether, and where applicable I see this used in vastly greater numbers than non-persistent VDI.


The balance is changing though, as parts of the article intimate !!



Gentlemen


To understand Brian's position, I recommend reading The New VDI Reality. He uses VDI to refer to HVD, not HSD (i.e., RDSH).


Brian,


Excellent article!! But I must confess that I find it hard to accept your recommendation to approach desktop virtualization from a persistent VDI perspective.


The benefits of a single image (a few, in the worst case) are enough to meet the challenges posed by this transformation.


Of course, the benefits of new storage technologies are very valuable, even in non-persistent VDI and RDSH scenarios.


Regards


