I was wrong about how VMware View 5.1's new "Storage Accelerator" works. It's way cooler than I thought!

Written on May 07 2012


by Brian Madden

Last week VMware announced View 5.1. One of the major new features is this thing called "View Storage Accelerator," or "VSA." In my article about View 5.1, I wrote that I wasn't particularly excited about VSA. It turns out I didn't fully understand how it worked.

If you're not familiar with VSA, it's the productized implementation of a feature of vSphere 5 called "Content-Based Read Cache." When enabled, the vSphere hypervisor allocates some amount of RAM to act as a read cache for disk blocks from VMDK files. The idea is that if the VM needs to read those disk blocks again, it can pull them from cache, thus (1) making the read operation really fast, and (2) saving precious IOs to the actual disk for other things.
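To make that concrete, here's a minimal sketch of the read-cache idea in Python. It's my own illustration, not VMware's code: the block size, the LRU policy, and the read_block_from_disk helper are all assumptions for demonstration purposes.

from collections import OrderedDict

BLOCK_SIZE = 4096       # assumed block size for this sketch
CACHE_CAPACITY = 1024   # number of blocks the RAM cache will hold

cache = OrderedDict()   # (vmdk_path, block_number) -> block bytes, in LRU order

def read_block_from_disk(vmdk_path, block_number):
    # Hypothetical stand-in for the real hypervisor read path.
    with open(vmdk_path, "rb") as f:
        f.seek(block_number * BLOCK_SIZE)
        return f.read(BLOCK_SIZE)

def cached_read(vmdk_path, block_number):
    key = (vmdk_path, block_number)
    if key in cache:
        cache.move_to_end(key)      # refresh this block's LRU position
        return cache[key]           # hit: served from RAM, no disk I/O
    data = read_block_from_disk(vmdk_path, block_number)  # miss: one disk I/O
    cache[key] = data
    if len(cache) > CACHE_CAPACITY:
        cache.popitem(last=False)   # evict the least recently used block
    return data

Note that a cache like this one, keyed by disk location, only ever helps a single VMDK. That's the limitation the next point addresses.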

This is cool on its own, but the real value comes from the fact that VSA can cache a single disk block across multiple VMDK files from multiple VMs. So if you're booting a whole bunch of VMs at the same time, you can potentially deal with your boot storm and not trash the disk since many of the boot-up blocks would be the same from VM to VM.

The error I made was that I thought VSA's ability to consolidate identical blocks from different VMs would only work if those VMs came from the same original disk (either via linked clones or snapshots). But this is incorrect. In fact, if VSA sees the same block anywhere in any VM, it can consolidate and cache it. In other words, even if you were to P2V a whole bunch of physical computers into VDI VMs, if VSA found identical disk blocks across multiple VMs, it could cache the single block and use it for as many VMs as needed.
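The trick that makes this possible is keying the cache by a hash of the block's content rather than by its location on disk, so identical blocks from unrelated VMDKs collapse into one entry. Here's a deliberately naive Python sketch of that idea (the names and the SHA-1 choice are my assumptions; the real product's internals follow below):

import hashlib

block_store = {}      # content digest -> block bytes; one copy serves every VM
location_index = {}   # (vmdk_path, block_number) -> content digest

def cache_block(vmdk_path, block_number, data):
    digest = hashlib.sha1(data).hexdigest()
    block_store[digest] = data   # identical content always lands on the same key
    location_index[(vmdk_path, block_number)] = digest

# Two P2V'd desktops that were never cloned from each other still share
# plenty of identical Windows blocks:
common_block = b"\x00" * 4096    # stand-in for a block both images contain
cache_block("desktop-a.vmdk", 10, common_block)
cache_block("desktop-b.vmdk", 77, common_block)
assert len(block_store) == 1     # one cached copy, two consumers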

This is very cool for several reasons. But before I dig into why I like it, let's take a closer look at how exactly VSA works. In an email last week, VMware's CTO for End User Computing, Scott Davis, explained it like this:

View Storage Accelerator is implemented as a vSCSI filter in the vSphere storage stack. It's a per VMDK operation that sits in between a virtual disk representation to the guest OS and the hypervisor file structure. Hence it is independent of VMFS, Linked Clones, etc. and is applicable to all images. The way it works is that a VMDK image that is the boot time OS for a View VM is registered with this filter at creation/clone time. On the first boot an initial SHA hash is calculated on the image contents and compared with a common cache. The common cache is organized with a multi-level index scheme that accounts for the content hash and the position in the boot image. The cache is also kept validated.

Assuming my description made sense without drawing a diagram, the picture you should have in mind is of a per VMDK level hash and index that is independent of where the blocks it represents actually come from and a common backend cache that services all of the VMDKs. [This is] why it is directly applicable to persistent/dedicated desktops as well as non-persistent desktops.
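Here's my reading of that description as a Python sketch: a per-VMDK index maps each block position to a content digest (built when the VMDK is registered), and a common cache maps digests to block data shared by every VMDK. All the names are mine and the digest computation is simplified; treat this as an illustration of the structure, not the implementation:

import hashlib

class ContentBasedReadCache:
    def __init__(self):
        self.digest_index = {}   # vmdk_path -> {block_number: content digest}
        self.common_cache = {}   # digest -> block bytes, shared across all VMDKs

    def register_vmdk(self, vmdk_path, blocks):
        # The real product computes these digests around first boot; here
        # `blocks` is simply a dict of block_number -> bytes.
        self.digest_index[vmdk_path] = {
            n: hashlib.sha1(data).hexdigest() for n, data in blocks.items()
        }

    def read(self, vmdk_path, block_number, disk_read):
        digest = self.digest_index[vmdk_path][block_number]
        if digest in self.common_cache:
            return self.common_cache[digest]       # RAM hit, possibly populated
                                                   # by a completely different VM
        data = disk_read(vmdk_path, block_number)  # miss: one disk I/O...
        self.common_cache[digest] = data           # ...that now benefits every VMDK
        return data

The important design point is that the digest index is precomputed, so the cache knows whether it holds a block before touching the disk at all, which is something a naive hash-after-read cache can't do.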

I also traded emails with Matt Eccleston, VMware's chief architect for View. He explained that this is why the technology VSA is based on is called "content-based read cache." If you parse the term, "content-based" comes from the fact that it's based on the actual content of the blocks and isn't related to any type of master image or linked clone, and "read cache" means that it's only caching existing disk blocks to speed up reads. It does nothing to help with writes. (Well, other than the fact that taking a lot of these reads off your primary storage might free up some IOs for more writes. This also means that you might be able to tune your storage for writes.)
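As a sketch of what keeping the cache validated might mean for the write path, extending the class above: a guest write goes straight to disk with no acceleration, and the only cache work is updating that position's digest so a later read can never be served stale data. Again, this is my illustration of the concept, not VMware's actual mechanism:

import hashlib

def write_block(cbrc, vmdk_path, block_number, new_data, disk_write):
    disk_write(vmdk_path, block_number, new_data)  # the write itself gets no help
    # Re-digest the position: a later read either hits the new content (if
    # some other VM already cached identical data) or misses cleanly.
    cbrc.digest_index[vmdk_path][block_number] = hashlib.sha1(new_data).hexdigest()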

Matt Eccleston also pointed out that VSA probably won't make as big a dent in steady-state read IOPS as it does in synchronized burst read IOPS. This is likely (in his words, "we have a lot of data, but not all the answers!") because steady-state read I/O has a higher percentage of blocks that are not common among VMs. Those blocks are best handled by the Windows buffer cache, and VSA won't even bother to cache them, since caching unique blocks both in the guest and in VSA would waste RAM. He mentioned this to address a subset of the community that likes to focus exclusively on steady-state IOPS: in his experience, focusing on steady-state IOPS alone is not sufficient, and sizing storage based on that alone can lead to a lot of trouble. The goal of VSA was not to improve steady-state IOPS (although VMware does see improvements in its labs for these use cases), but to reduce the impact of the worst-case scenarios, which are typically the most devastating and where they see some rather extraordinary results with VSA.

Why VSA is cool

Obviously VSA can speed up read-heavy, multi-VM operations like boot storms. But I really love the fact that it can work across identical blocks in completely unrelated VMs. Those who have read our latest book know that I love VDI for the 1-to-1 "personal" mode where each user has his or her own individual disk image. (My general feeling is that if you want shared desktops in a datacenter, you should use Remote Desktop Session Host and not VDI.) But View 5.1's VSA can handle that scenario, which makes me happy.

This is also why I don't love Citrix XenDesktop's IntelliCache or Quest vWorkspace's HyperCache features. While the exact mechanics and use cases for each of those are a bit different, today those two features only work for 1-to-many "shared image" VDI deployments. (Maybe comparing all three of these is a great future blog post?)

All that said, keep in mind that VSA only helps with reads. There are other storage products on the market that perform similar functions but that can also speed up writes. Also VSA doesn't replace the need for fast primary storage connected to each VDI host. (You can't mount a VMDK on an SMB-based NAS and hope to get great performance just because you've enabled VSA.)

So there you have it—VSA is much more promising than I initially thought. I'm looking forward to learning more about it when VMware releases View 5.1 (which I believe is scheduled for May 16).

Comments

appdetective wrote on Mon, May 7 2012 12:46 PM:

If there is no clear roadmap for write I/O, then this is half-baked and a lock-in to vSphere. Not good enough to be enterprise class.

Phil Dalbeck wrote on Mon, May 7 2012 1:19 PM:

This is great news - my understanding of CBRC was also that it was on a per-VMDK (or per parent/child) basis - thus making it a bit useless for multiple persistent disks.

I don't understand why VMware aren't making more of this. View 5.1 appeared remarkably feature-light on casual inspection; I'd have thought that a genuinely useful feature like this would have been pushed harder and explained better in the release docs.

@app - true that - the all-important write serialisation factor hasn't been addressed here at all. However, while it might be locked into vSphere + View, it's a nice zero-cost addition to the toolbag for those already running View environments that wasn't there before. I'd hope that this functionality will be expanded upon in future releases - maybe expanded to provide some write serialisation features (CBWC?) a la Atlantis ILIO, but without the bolt-on cost and lack of FT/HA.

rahvintzu wrote on Mon, May 7 2012 5:45 PM:

Correction: IntelliCache works for dedicated pools but only really optimises reads.

appdetective wrote on Mon, May 7 2012 8:25 PM:

@phil, that's what frustrates me about the VDI market. Too slow - why not leapfrog if the problem is understood? But then again, Atlantis is also unproven, broadly speaking.

gdekhayser wrote on Tue, May 8 2012 11:10 AM:

Just two notes:

"When enabled, the vSphere hypervisor allocates some amount of RAM to act as a read cache for disk blocks from VMDK files. " - Memory is the precious/scarce resource in a VM Host, so...is this the right way to go or should we simply be leveraging advanced SAN storage-based features to accomplish the same thing?

"On the first boot an initial SHA hash is calculated on the image contents and compared with a common cache."  - Doesn't this cause an I/O hit at this time? If there are a lot of P2V'd desktops...and since this is in RAM this would be non-persistent...if you re-booted the VM Host you'd cause this I/O burst every time.  

Narasimha Krishnakumar wrote on Wed, May 9 2012 8:43 PM:

@Phil,

View Storage Accelerator works on a per-VMDK basis and provides great benefits when multiple persistent disks have common content. For example, if multiple users in the VDI environment have common content, such as a Word document or PowerPoint presentation, stored on their own persistent disks, View Storage Accelerator will help speed up performance by caching that content in memory. To get a deep-dive understanding of how View Storage Accelerator works, please visit my blog posts on View Storage Accelerator at:

blogs.vmware.com/.../optimizing-storage-with-view-storage-accelerator.html

blogs.vmware.com/.../view-storage-accelerator-in-practice.html

Phil Dalbeck wrote on Fri, May 18 2012 10:00 AM:

Thanks Narasimha, I'll have a dig there.

@gdekhayser - regarding your first point, the key here for me is that technologies like VSA, coupled with virtualisation of cheap host DAS storage, are removing the need for advanced SAN capabilities. Let's face it, VDI ain't cheap, so anything that reduces the up-front and running costs is a good thing.

I'd also highlight that adding extra memory to a VM host isn't that expensive (certainly compared to providing lots of deduplicated cache-hit IOPS at the SAN level) - I'd expect to hit a CPU bottleneck before I hit a memory bottleneck anyway - and as there isn't a vTax on vSphere for Desktops (which most serious View deployments will be using), sticking some extra DIMMs in each box to cover a few gigs of CBRC space wouldn't worry me at all!

#Phil

Charles Gillanders wrote on Fri, Nov 9 2012 10:28 AM:

I've seen high I/O demands when using SRM, either testing or performing an actual recovery. Similar to the VDI scenario, this is another example of multiple machines all booting at the same time. I think this is another perfect use case for the same CBRC technology, if only VMware would support it...
