I was wrong about how VMware View 5.1's new "Storage Accelerator" works. It's cooler than I thought!

Last week VMware announced View 5.1. One of the major new features is this thing called "View Storage Accelerator," or "VSA." In my article about View 5.1, I wrote that I wasn't particularly excited about VSA. It turns out I didn't fully understand how it worked.

If you're not familiar with VSA, it's the productized implementation of a feature of vSphere 5 called "Content-Based Read Cache." When enabled, the vSphere hypervisor allocates some amount of RAM to act as a read cache for disk blocks from VMDK files. The idea is that if the VM needs to read those disk blocks again, it can pull them from cache, thus (1) making the read operation really fast, and (2) saving precious IOs to the actual disk for other things.

This is cool on its own, but the real value comes from the fact that VSA can cache a single disk block across multiple VMDK files from multiple VMs. So if you're booting a whole bunch of VMs at the same time, you can potentially handle your boot storm without thrashing the disk, since many of the boot-up blocks are the same from VM to VM.
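To make that concrete, here's a minimal Python sketch of the idea. This is my own illustration, not VMware's code, and the block size and hash algorithm are just guesses: the point is that the cache is keyed by a hash of each block's content, so identical blocks from totally different VMDKs collapse into one cache entry.

import hashlib

BLOCK_SIZE = 4096  # purely illustrative; I don't know CBRC's real block size

class ContentBasedReadCache:
    """Toy version of a content-addressed read cache shared by every VMDK."""

    def __init__(self):
        self.cache = {}  # content hash -> block bytes, shared across all VMs

    def index_vmdk(self, vmdk_bytes):
        # Hash every block up front (like the initial SHA scan Scott Davis
        # describes below) and return a per-VMDK map of block number -> hash.
        return {
            n // BLOCK_SIZE: hashlib.sha1(vmdk_bytes[n:n + BLOCK_SIZE]).digest()
            for n in range(0, len(vmdk_bytes), BLOCK_SIZE)
        }

    def read(self, index, block_num, read_from_disk):
        key = index[block_num]
        if key not in self.cache:               # miss: one real disk I/O
            self.cache[key] = read_from_disk(block_num)
        return self.cache[key]                  # hit: served straight from RAM

# Two "VMs" whose disks differ in block 0 but share block 1:
vm_a = b"A" * BLOCK_SIZE + b"COMMON".ljust(BLOCK_SIZE, b"\0")
vm_b = b"B" * BLOCK_SIZE + b"COMMON".ljust(BLOCK_SIZE, b"\0")

cbrc = ContentBasedReadCache()
idx_a = cbrc.index_vmdk(vm_a)
idx_b = cbrc.index_vmdk(vm_b)
cbrc.read(idx_a, 1, lambda n: vm_a[n * BLOCK_SIZE:(n + 1) * BLOCK_SIZE])
cbrc.read(idx_b, 1, lambda n: vm_b[n * BLOCK_SIZE:(n + 1) * BLOCK_SIZE])
print(len(cbrc.cache))  # prints 1: one cached copy serves both VMs' reads

The thing to notice is that the cache key is the content hash, not "VM X, block Y," which is exactly what lets one cached copy serve every VM.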

The error I made was thinking that VSA's ability to consolidate identical blocks from different VMs would only work if those VMs came from the same original disk (either via linked clones or snapshots). But this is incorrect. In fact, if VSA sees the same block anywhere in any VM, it can consolidate and cache it. In other words, even if you were to P2V a whole bunch of physical computers into VDI VMs, if VSA found identical disk blocks across multiple VMs, it could cache the single block and use it for as many VMs as it needed.

This is very cool for several reasons. But before I dig into why I like it, let's take a closer look at how exactly VSA works. In an email last week, VMware's CTO for End User Computing, Scott Davis, explained it like this:

View Storage Accelerator is implemented as a vSCSI filter in the vSphere storage stack. It's a per VMDK operation that sits in between a virtual disk representation to the guest OS and the hypervisor file structure. Hence it is independent of VMFS, Linked Clones, etc. and is applicable to all images. The way it works is that a VMDK image that is the boot time OS for a View VM is registered with this filter at creation/clone time. On the first boot an initial SHA hash is calculated on the image contents and compared with a common cache. The common cache is organized with a multi-level index scheme that accounts for the content hash and the position in the boot image. The cache is also kept validated.

Assuming my description made sense without drawing a diagram, the picture you should have in mind is of a per VMDK level hash and index that is independent of where the blocks it represents actually come from and a common backend cache that services all of the VMDKs. [This is] why it is directly applicable to persistent/dedicated desktops as well as non-persistent desktops.
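To translate Scott's description into a picture: each VMDK gets its own index mapping block position to content hash, and every one of those indexes points into a single common cache keyed by hash. Roughly like this (the structure and the names here are my guesses from his description, not anything VMware has published):

# Level 1: one index per VMDK, built from the initial SHA scan at first
# boot, mapping block position in the image -> content hash.
per_vmdk_indexes = {
    "win7-desktop-01.vmdk": {0: "e3b0c4...", 1: "a54d88..."},
    "win7-desktop-02.vmdk": {0: "e3b0c4...", 1: "9f86d0..."},
}

# Level 2: the common cache, keyed purely by content hash. Both VMDKs'
# block 0 hashes to "e3b0c4...", so one cached copy serves them both;
# where the block physically lives on disk never enters into it.
common_cache = {"e3b0c4...": b"<4 KB of block data>"}

The key design point is that level 2 knows nothing about positions or source images, which is what makes the consolidation work across completely unrelated VMs.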

I also traded emails with Matt Eccleston, VMware's chief architect for View. He explained that this is why the technology VSA is based on is called "content-based read cache." If you parse the term, "content-based" comes from the fact that it works on the actual content of the blocks and isn't tied to any type of master image or linked clone, and "read cache" means that it only caches existing disk blocks to speed up reads. It does nothing to help with writes. (Well, other than the fact that taking a lot of these reads off your primary storage might free up some IOs for more writes. This also means that you might be able to tune your storage for writes.)
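On the write path, then, the cache's only job is staying correct. My mental model (again, my own sketch, not VMware's published behavior) looks something like this: the write goes straight to primary storage, and the cache just refreshes that VMDK's index entry so a later read can't be served stale data.

import hashlib

def write_block(vmdk_index, block_num, new_data, write_to_disk):
    # No write acceleration: the data goes straight to primary storage.
    write_to_disk(block_num, new_data)
    # The cache's only involvement is keeping itself valid: re-hash the
    # block so this VMDK's index now points at the new content. Any old
    # cache entry stays usable for other VMs that still hold that block.
    vmdk_index[block_num] = hashlib.sha1(new_data).digest()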

Matt Eccleston also pointed out that VSA is probably not going to make as big a dent in steady-state read IOPS as it does in synchronized burst read IOPS. In his words:

This is likely (we have a lot of data, but not all the answers!) due to the fact that steady-state read I/O has a higher percentage of blocks that are not common amongst VMs, thus are best handled by the Windows buffer cache, and VSA won't bother to even cache those blocks (a waste of RAM if unique blocks are cached in both the guest and by VSA). I mention this to address concerns of a subset of the community out there that likes to focus exclusively on steady-state IOPS. In my experience, focusing on steady-state IOPS alone is not sufficient. Sizing storage based on that alone can lead to a lot of trouble. The goal of VSA was not to improve steady-state IOPS (although we do see improvements in our labs for these use cases), but to reduce the impact of the worst-case scenarios out there, which typically are the most devastating, and also where we see some rather extraordinary results with VSA.
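One way to picture the "won't bother to even cache those blocks" behavior Matt describes is an admission policy like the following. To be clear, this heuristic is entirely hypothetical on my part; VMware hasn't published the actual logic:

seen_once = set()   # hashes we've observed from exactly one read so far
shared_cache = {}   # hashes that proved common, and are worth the RAM

def maybe_cache(digest, data):
    # Admit a block only on its second sighting. A one-off block is left
    # to the Windows buffer cache inside the guest, since caching it in
    # both places would just burn host RAM. (A real implementation would
    # presumably also track which VM each sighting came from.)
    if digest in shared_cache:
        return
    if digest in seen_once:
        shared_cache[digest] = data
    else:
        seen_once.add(digest)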

Why VSA is cool

Obviously VSA can speed up multiple-VM read-heavy operations like boot storms. But I really love the fact that it can work across identical blocks in completely unrelated VMs. Those who have read our latest book know that I love VDI for the 1-to-1 "personal" mode where each user has his or her own individual disk image. (My general feeling is that if you want shared desktops in a datacenter, you should use Remote Desktop Session Host and not VDI.) But View 5.1's VSA can handle that scenario, which makes me happy.

This is also why I don't love Citrix XenDesktop's IntelliCache or Quest vWorkspace's HyperCache features. While the exact mechanics and use cases for each of those are a bit different, today those two features only work for 1-to-many "shared image" VDI deployments. (Maybe comparing all three of these is a great future blog post?)

All that said, keep in mind that VSA only helps with reads. There are other storage products on the market that perform similar functions but can also speed up writes. Also, VSA doesn't replace the need for fast primary storage connected to each VDI host. (You can't mount a VMDK on an SMB-based NAS and hope to get great performance just because you've enabled VSA.)

So there you have it—VSA is much more promising than I initially thought. I'm looking forward to learning more about it when VMware releases View 5.1 (which I believe is scheduled for May 16).
