(Be sure to check out Ron Oglesby's rebuttal to this: Brian's wrong...about VDI and local storage)
When discussing storage in the context of VDI, people often talk about things like SAN sizing and IOPS and linked clones and thin provisioning and disk image streaming and... the list goes on. But one of the most important aspects of storage design for VDI has to do with where the disk image files live for each user's Windows VM.
What exactly is “storage” for VDI?
I guess before we talk too much about storage for VDI, we should define “storage.” I mean obviously we’re talking about disks and SANs and stuff, but in the context of VDI, “storage” applies to:
- Windows OS disk image file location. Where is the system disk image file (VHD or VMDK) for each VDI virtual machine?
- User data. Home drives, application storage, etc. This could also include user environment settings.
- Applications. Where do the applications live once they’re installed?
So when we're talking about storage for VDI, which "storage" are we talking about? For user data, apps, and backups, whether you're using VDI or not doesn't really change how your storage is designed. (After all, you're still going to store home drives on a NAS or file server regardless of whether you're using VDI or not.)
The big discussion point around storage for VDI has to do with the OS disk image file locations for each VM. Where will the actual VHD or VMDK files that make up each user’s VM be stored? There are a few options:
- Each VHD/VMDK is stored locally on the VM host server.
- Each VHD/VMDK is stored on a SAN, and the VM host server mounts it via a Fibre Channel HBA or iSCSI.
- Each VHD/VMDK is stored on a file server / NAS and mounted/streamed across the network.
- Some other technology is used to present/build the disk image, like Atlantis, Unidesk, etc.
There’s a lot that goes into this decision: Will your users share a single master disk image that “resets” each time they log off, or does each user have his or her own personal persistent image? How many VM host servers do you have? Are you designing your VDI environment specifically for desktops, or are you taking what you did for server virtualization and just copying for your desktops?
VDI disk image storage
Most of us have learned that the biggest constraint / bottleneck for desktop disk image storage is not storage capacity, but IOPS. (Read Ruben & Herco’s amazing article for more on this.) This problem is magnified in environments where you have shared master images since you now have many users (tens? hundreds?) accessing a single master file, and if you thought your own personal hard drive could get bogged down, imagine 200 people sharing the same bits on the same drive! Various SAN-based “solutions” exist to address this, like storing the master file in some very fast way (SSD, striped, cache) or by making lots of full replicas of it so that each master file is only accessed by a subset of your users.
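To see why IOPS (and not capacity) is the constraint, it helps to run the numbers. Here's a back-of-the-envelope sketch in Python; the per-user IOPS figure and the per-spindle ceiling are illustrative assumptions, not measurements from any particular environment.

```python
# Rough IOPS math for many users sharing one master image.
# All numbers are illustrative assumptions, not measurements.

USERS = 200            # users booted from the same shared master image
IOPS_PER_USER = 10     # assumed steady-state IOPS per Windows VM
SAS_15K_IOPS = 175     # rough IOPS ceiling of a single 15k SAS spindle

total_iops = USERS * IOPS_PER_USER
spindles_needed = -(-total_iops // SAS_15K_IOPS)  # ceiling division

print(f"Aggregate demand: {total_iops} IOPS")
print(f"15k SAS spindles needed just to keep up: {spindles_needed}")
```

With these (conservative) assumptions, 200 users generate 2,000 IOPS against the same bits, which is roughly a dozen dedicated 15k spindles' worth of work before you even consider boot storms. Capacity-wise, that same master image might be 20 GB; the spindle count is driven entirely by IOPS.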
But there’s another great way to “fix” your master disk file oversubscription problem: Don’t mount your shared files from the SAN! Instead you can store your master files locally on each VM host server.
Don't get me wrong: I'm all about using a SAN where it makes sense. But storing "disposable" stuff on the SAN does not make sense. (It's like a gold-lined trash bag.) If you're just going to throw it away, why bother storing it in the most expensive place you have?
Of course this means that you'll need to load up your servers with drives, but eight 2.5" 15k SAS drives aren't much more expensive than a Fibre Channel HBA. (And since the local drive option doesn't require a SAN at all, it's actually much cheaper overall.) Choosing local storage doesn't prevent you from using things like thin provisioning and linked clones; it just means you'll need one master image on each host (the management of which is scriptable and much cheaper than a SAN).
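When I say managing one master per host is "scriptable," I mean something as simple as the sketch below: push the golden image to each VM host's local store whenever you update it. The host names, paths, and image file are all made up for illustration; this dry-run version just prints the commands (swap the `print` for `subprocess.run(cmd, check=True)` to actually execute them).

```python
# Minimal sketch of distributing one golden master image to the local
# datastore of every VM host. Hosts, paths, and filename are hypothetical.

MASTER = "/images/win7-master.vhd"            # hypothetical golden image
HOSTS = ["vmhost01", "vmhost02", "vmhost03"]  # hypothetical VM host servers
DEST = "/var/lib/vdi/local-store/"            # hypothetical local datastore

# Build one scp command per host.
commands = [["scp", MASTER, f"{host}:{DEST}"] for host in HOSTS]

for cmd in commands:
    # Dry run: print instead of executing.
    # For real use: subprocess.run(cmd, check=True)
    print(" ".join(cmd))
```

The point isn't this particular script; it's that "one master per host" is a copy job you run once per image update, which is a lot cheaper than keeping a SAN around to avoid it.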
Do SANs ever make sense for boot disk image storage?
I'm making a pretty strong point that in the vast majority of cases, it doesn't make sense to boot your desktop images from a SAN. But that's not to suggest booting from a SAN never makes sense; it just doesn't make sense for desktops.
If I had a truly flexible virtual datacenter infrastructure where all my servers were disk images and I wanted to be able to boot any image from any VM host and to do live migration and everything, then yes, absolutely, booting those disk images from a SAN makes sense.
If I had a bunch of servers that grow and shrink and need to move on demand, then yes, booting from the SAN makes sense.
But for desktops, when I’m not using live migration and when I’m in “rack ‘em and stack ‘em” mode, I can’t possibly see how it makes sense to boot them from a SAN. Even if I had a scenario where each user had his or her own personal persistent desktop, I still think I’d store those on a NAS and stream them down to the VM with Citrix Provisioning Services, Doubletake Flex, or Wyse Streaming Manager. All of those would allow me to move the IOPS to the VM host where they’re much cheaper than on the SAN.
Maybe this will change in the future?
For the record, I love Chetan’s grand vision about us someday being able to deliver a better desktop via VDI than local. But that’s not here today. If a compelling reason—like a huge performance bump—emerges that requires a SAN, then of course I’ll reconsider. But right now the only thing a SAN tends to introduce to most VDI deployments is increased cost.
What do you think? Do you boot your desktops from a SAN? If so, why?