A few weeks ago, Quest released vWorkspace 7.5, which is a pretty massive update for only being a half-release. In it, they rewrote the Web Interface and added support for Citrix farms (since they see mixed environments, and those users wanted a single interface to their apps), integrated the Desktop Optimizer into the management console, and developed a solution that allows people to use Microsoft Communicator or Lync across the WAN, among other things. Perhaps the biggest thing, though, is their Catalyst components.
Catalyst is the term Quest uses to refer to two tools they've developed for optimizing Hyper-V for virtual desktops (or virtual RDSH servers). The first, HyperCache, adds functionality to Hyper-V that lets it cache frequently used storage blocks in memory, while the second, HyperDeploy, is sort of like a disk streaming solution (but not really) in that it lets you boot VMs before the virtual disk has been 100% copied. You can read more about both in Michel Roth's explanation.
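To give a feel for the HyperDeploy idea, here's a rough conceptual sketch (in Python, entirely my own illustration and not Quest's code) of what "boot before the copy finishes" looks like: blocks that have already arrived locally are served from the host, and anything else is fetched from the source image on demand while a background copy keeps filling in the rest.

```python
BLOCK_SIZE = 4 * 1024

class StreamedDisk:
    """Serve reads immediately; fall back to the source image for blocks not yet copied."""
    def __init__(self, source_blocks):
        self.source = source_blocks           # the master image, e.g. on the original share
        self.local = {}                       # blocks that have already been copied to this host

    def background_copy_step(self, block_no):
        # In practice this would run continuously until the whole disk is local.
        if block_no not in self.local:
            self.local[block_no] = self.source[block_no]

    def read(self, block_no):
        if block_no in self.local:
            return self.local[block_no]       # already copied: a normal local read
        data = self.source[block_no]          # not yet copied: fetch it on demand
        self.local[block_no] = data           # and keep it so it's only fetched once
        return data

# The VM boots and reads block 7 long before the background copy has reached it.
disk = StreamedDisk({n: bytes([n % 256]) * BLOCK_SIZE for n in range(64)})
disk.background_copy_step(0)
print(len(disk.read(7)))                      # works anyway: served straight from the source
```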
The combination of these tools results in a surprising performance increase compared to both native Hyper-V and other platforms. Quest posted the results of their testing in the form of a PDF, and while it's vendor-created, it at least shows that Catalyst has a positive impact on performance. In the document, they outline their test procedures for each platform, so you can evaluate the validity of the results yourself. They did use Login VSI, which is the industry standard for VDI benchmarking and testing.
HyperCache works by dedicating part of the host server's RAM to serve as a storage cache. That cache holds the most frequently used blocks from the golden (master shared) image, which means requests for those blocks are served from memory. This drastically reduces the impact of boot storms (which are mostly read-heavy) and shortens boot times. The cache is only 800MB by default, but that can be changed. According to Quest, the entire Windows 7 boot process only touches about 350MB of data in their tests, so the cache is more than enough.
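Since the explanation is high-level, here's a minimal sketch of how a read-through block cache like this behaves (Python, purely illustrative and not Quest's code; the 800MB default is the only number taken from Quest):

```python
from collections import OrderedDict

BLOCK_SIZE = 4 * 1024
CACHE_BYTES = 800 * 1024 * 1024               # the 800MB default cache size Quest mentions
MAX_BLOCKS = CACHE_BYTES // BLOCK_SIZE

class ReadCache:
    """Read-through LRU cache holding hot blocks of the golden image in RAM."""
    def __init__(self, read_from_disk):
        self.read_from_disk = read_from_disk  # callback that performs the real storage read
        self.blocks = OrderedDict()           # block_no -> bytes, kept in LRU order

    def read(self, block_no):
        if block_no in self.blocks:
            self.blocks.move_to_end(block_no) # hit: served from memory, disk never touched
            return self.blocks[block_no]
        data = self.read_from_disk(block_no)  # miss: one real read, then it's cached
        self.blocks[block_no] = data
        if len(self.blocks) > MAX_BLOCKS:
            self.blocks.popitem(last=False)   # evict the least recently used block
        return data

# 100 VMs booting from the same golden image: the first boot warms the cache,
# and every subsequent read of those blocks is a memory hit.
cache = ReadCache(lambda n: b"\x00" * BLOCK_SIZE)
for vm in range(100):
    cache.read(0)                             # only the very first call reaches "disk"
```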
Disk reads are easier to optimize because the system can identify frequently used blocks and cache them; anything not in the cache simply goes straight to disk. Writes, on the other hand, are more complex because they're unpredictable. For writes, HyperCache uses a technique called serialized writes, which is a fancy way of saying it writes data to disk sequentially, in the order it arrives, rather than seeking out the proper place on the drive and losing write speed while the heads move around.
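In the same illustrative spirit, a serialized write path might look something like the sketch below: writes land on disk strictly in arrival order, and an in-memory map remembers where each logical block actually ended up so later reads can find it. Again, this is my own sketch of the general technique, not a description of Quest's implementation.

```python
class SerializedWriter:
    """Append every write sequentially and remember where each logical block landed."""
    def __init__(self, path):
        self.log = open(path, "wb")           # one log file; the heads never seek backwards
        self.offset = 0
        self.locations = {}                   # logical block_no -> offset in the log

    def write(self, block_no, data):
        self.log.write(data)                  # sequential write, no matter which block it is
        self.locations[block_no] = self.offset
        self.offset += len(data)

# Logically random blocks, physically sequential writes.
w = SerializedWriter("writes.log")
for block_no in (912, 3, 47, 10588):
    w.write(block_no, b"\x00" * 4096)
```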
I look at this as a way to use local storage for VDI, but in reality the technology works equally well with any storage backend. It also works with any OS that is hosted on Hyper-V, with one main caveat: it only works with shared images.
While HyperCache works at the block level, it is only aware of the blocks in a specific VHD file. With a shared image, this is fine because the base OS master image is the same across the board, and in that situation HyperCache and HyperDeploy shine. However, most people we talk to are using VDI in a persistent manner, where each user has their own VM, rather than having a VM assembled for them from per-user disk images layered on top of a single master OS image. In that situation, even though 80% of the blocks may be identical between VMs, HyperCache can't take advantage of them.
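The limitation is easier to see if you go back to the cache sketch above and think about the cache key: if blocks are keyed per VHD file (something like a (vhd, block_no) pair), two persistent VMs never produce the same key, so identical bytes get cached once per VM instead of being shared. A content-addressed key (a hash of the block's contents) would sidestep that, but that's a different design. To be clear, this is my reading of why per-VHD caching can't help persistent desktops, not an official description of HyperCache internals.

```python
import hashlib

block = b"\x90" * 4096                        # the same 4KB of Windows code in two different VHDs

# Hypothetical per-VHD keys: never equal, so the cache sees two unrelated blocks.
key_vm1 = ("user1.vhd", 1042)
key_vm2 = ("user2.vhd", 1042)
print(key_vm1 == key_vm2)                     # False -- no sharing between persistent VMs

# A content-addressed key would match, because it depends only on the bytes.
print(hashlib.sha256(block).hexdigest() == hashlib.sha256(block).hexdigest())  # True
```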
Brian famously wrote in 2009 that if your environment is simple enough to run shared-image VDI, then why not just run TS? I'm of the opinion that, while this is true if all other things are equal, there are still benefits to running VDI with a shared image in spite of the added expense, such as:
- The OS is more familiar to the user
- Better peripheral support
- The same skill set can manage traditional desktops and virtual desktops
Brian wasn't terribly impressed with HyperCache because of the shared-image-only issue, which I think is less of an issue than it would've been in 2009. The VDI experience is better now, which can, in certain situations, outweigh the cost difference. This isn't about comparing VDI to TS; I'm assuming that if you've chosen VDI, you've already done that comparison. So, if you've already chosen VDI and are now comparing solutions, I think vWorkspace with HyperCache is worth a good, hard look.
Brian and I just had an IM conversation that I really wish we could've had on our podcast, but I'll just put the transcript here:
Brian: But HyperCache doesn't do anything you couldn't have done anyway
Gabe: How can you do that now for no additional cost?
Brian: They just do it with software only instead of Fusion-IO or SSD. So it makes it cheaper, sure, but not game changing
Gabe: Yeah, but this is with a free hypervisor, and the same old vWorkspace, so it's a TON cheaper
Brian: Well, yeah if you already have this, sure. I'm not saying it's bad
Gabe: Or if you're starting VDI from scratch, which most people are
Brian: It's just not a game changer. Just post this chat as the article
That's the actual conversation we had! So it appears that Brian is underwhelmed, while I'm of the opinion that HyperCache is a cool feature that, while somewhat incomplete, really enhances vWorkspace. If and when a version comes out that supports any type of VHD, it could be an amazing, game-changing solution. Yes, you can do better by purchasing a "real" storage solution, but I challenge you to find something like this for no additional charge.
What do you think? Game changing addition, non-starter, or not quite enough?