vWorkspace 7.5's HyperCache is cool, but it only works for shared-image VDI. Is that a show-stopper?

A few weeks ago, Quest released vWorkspace 7.5, which is a pretty massive update for only being a half-release. In the release, they rewrote the Web Interface while adding support for Citrix farms (since they see a lot of mixed environments, and those customers wanted a single interface to their apps), integrated the Desktop Optimizer into the management console, and developed a solution that allows people to use Microsoft Communicator or Lync across the WAN, among other things. Perhaps the biggest thing, though, is their Catalyst components.

Catalyst is the term Quest is using to refer to two tools they've developed for optimizing Hyper-V for virtual desktops (or virtual RDSH servers). HyperCache adds functionality to Hyper-V that allows it to cache frequently used storage blocks in memory, while HyperDeploy is sort of like a disk streaming solution (but not really) in that it allows you to boot VMs before the virtual disk is 100% copied. You can read more about it in Michel Roth's explanation.

The combination of these tools results in a surprising performance increase when compared to native Hyper-V and to other platforms. Quest posted the results of their testing in the form of a PDF, and while it's vendor-created, it at the very least shows that Catalyst has a positive impact on things. In the document, they outline their test procedures for each platform, so you can evaluate the validity of the results yourself. They did use Login VSI, which is the industry standard for VDI benchmarking and testing.

HyperCache works by dedicating part of the host server's RAM to serve as a storage cache. The cache holds the most frequently used blocks from the golden (master shared) image, which means requests for those blocks are served from memory. That drastically reduces the impact of boot storms (which are mostly read-heavy) and shortens boot times. The cache is only 800MB by default, but that can be changed. According to Quest, the entire Windows 7 boot process only amounts to about 350MB of data in their tests, so the cache is more than enough.
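
To picture what that means in practice, here's a minimal sketch of a read-through block cache with a fixed memory budget. This is purely my illustration - Quest hasn't published HyperCache's internals, so the block size and LRU eviction policy here are my own assumptions:

    from collections import OrderedDict

    BLOCK_SIZE = 4096                      # assumed block granularity
    CACHE_BUDGET = 800 * 1024 * 1024       # 800MB, the default per Quest

    class BlockCache:
        """Read-through cache: hot blocks live in RAM, misses go to disk."""

        def __init__(self, backing_file, budget=CACHE_BUDGET):
            self.backing = backing_file               # e.g. open('golden.vhd', 'rb')
            self.max_blocks = budget // BLOCK_SIZE
            self.cache = OrderedDict()                # block number -> bytes, in LRU order

        def read_block(self, block_no):
            if block_no in self.cache:
                self.cache.move_to_end(block_no)      # refresh LRU position
                return self.cache[block_no]           # hit: served from memory
            self.backing.seek(block_no * BLOCK_SIZE)  # miss: fall through to disk
            data = self.backing.read(BLOCK_SIZE)
            self.cache[block_no] = data
            if len(self.cache) > self.max_blocks:
                self.cache.popitem(last=False)        # evict the coldest block
            return data

The boot storm math follows directly: once the first VM has pulled that ~350MB of boot blocks through the cache, every subsequent VM's reads for the same blocks are memory hits instead of disk seeks.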

Disk reads are easier to optimize because the system can identify frequently used blocks and cache them; if a block isn't in the cache, the request simply goes to disk. Writes, on the other hand, are harder because they're unpredictable. For writes, HyperCache uses a technique called serialized writes, which is a fancy way of saying it dumps data to disk sequentially, in the order it arrives, rather than seeking out each block's proper place on the drive and losing write speed while the heads float around.
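
Here's a rough sketch of that log-style idea. Again, Quest hasn't said how HyperCache actually lays writes out on disk, so this only shows the general "append in arrival order, remember where everything landed" pattern:

    class SerializedWriter:
        """Append writes in arrival order instead of seeking to each block's
        home location. An illustration of the general pattern, not Quest's code."""

        def __init__(self, log_file):
            self.log = log_file        # e.g. open('writes.log', 'wb+')
            self.end = 0               # next free offset in the log
            self.index = {}            # block number -> (offset, length)

        def write_block(self, block_no, data):
            self.log.seek(self.end)
            self.log.write(data)       # sequential append: no head thrashing
            self.index[block_no] = (self.end, len(data))
            self.end += len(data)      # the next write lands right behind this one

        def read_block(self, block_no):
            offset, length = self.index[block_no]   # latest copy wins
            self.log.seek(offset)
            return self.log.read(length)

Real implementations eventually have to fold the log back into the disk image; that housekeeping is elided here.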

I look at this as a way to use local storage for VDI, but in reality the technology works equally well with any storage backend. It also works with any OS that is hosted on Hyper-V, with one main exception: It only works with shared images.

While HyperCache works at the block level, it is only aware of the blocks in a specific VHD file. With a shared image, this is fine because the base OS master image is the same across the board. In that situation, HyperCache and HyperDeploy shine. However, most people that we talk to are using VDI in a persistent manner where each user has their own VM, rather than having a VM assembled for them from delta disks layered on top of a single master OS image. In that situation, even though 80% of the blocks may be the same between VMs, HyperCache doesn't work.
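
As I understand it (my simplification, not anything from Quest's documentation), the limitation comes down to what the cache uses as a lookup key:

    def cache_key(vhd_path, block_no):
        # HyperCache-style keying: a cached block is identified by the VHD
        # file it came from plus its position within that file.
        return (vhd_path, block_no)

    # Shared image: every VM in the pool reads the same golden VHD, so one
    # cache entry per block serves everyone.
    print(cache_key("golden.vhd", 7) == cache_key("golden.vhd", 7))  # True

    # Persistent VDI: each user has their own full VHD. Even if 80% of the
    # blocks are byte-identical, the keys never collide, so nothing is shared.
    print(cache_key("user1.vhd", 7) == cache_key("user2.vhd", 7))    # False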

Brian famously wrote in 2009 that if your environment is simple enough to run shared image VDI, then why not just run TS? I'm of the opinion that, while this is true if all other things are equal, there are still benefits of running VDI with a shared image in spite of the added expense. Things like:

  • The OS is more familiar to the user
  • Better peripheral support
  • The same skill set can manage traditional desktops and virtual desktops
  • Isolation

Brian wasn't terribly impressed with HyperCache because of the shared-only issue, which I think is less of an issue than it would've been in 2009. I think that the VDI experience is better now, which can, in certain situations, outweigh the cost difference. This isn't about comparing VDI to TS. I'm assuming that if you've chosen VDI, you've already done that comparison. So, if you've already chosen VDI and are now comparing solutions, I think vWorkspace with HyperCache is worth a good, hard look.

Brian and I just had an IM conversation that I really wish we could've had on our podcast, but I'll just put the transcript here:

Brian: But HyperCache doesn't do anything you couldn't have done anyway
Gabe: How can you do that now for no additional cost?
Brian: They just do it with software only instead of Fusion-IO or SSD. So it makes it cheaper, sure, but not game changing
Gabe: Yeah, but this is with a free hypervisor, and the same old vWorkspace, so it's a TON cheaper
Brian: Well, yeah if you already have this, sure. I'm not saying it's bad
Gabe: Or if you're starting VDI from scratch, which most people are
Brian: It's just not a game changer. Just post this chat as the article

That's the actual conversation we had! So it appears that Brian is underwhelmed, and that I'm of the opinion that HyperCache is a cool feature that, while sort of incomplete, really enhances vWorkspace. I think that, if and when a version comes out that supports any type of VHD, it could be an amazing, game-changing solution. Yes, you can do better by purchasing a "real" storage solution, but I challenge you to find something like this for no additional charge.

What do you think? Game changing addition, non-starter, or not quite enough?

Comments

Now, if Quest could tie it in with the block-based dedupe baked right into Server 8, they could kill two birds with one stone... don't cache the individual VHDs... just cache the top 800MB of master blocks on a given storage volume.


Who's to say that's not already the plan?


Now that would be a killer feature - and make a basic vWorkspace license a guaranteed buy for SMB VDI deployments - who'd want to tie themselves into Citrix when they could do much the same for a lower cost with Quest + MS?


Note that solutions like Atlantis ILIO (software) and Nimble (hardware) do the same kind of thing (memory- or SSD-based caching of hot blocks, serialisation of writes to slow storage via a buffer), but combining that functionality at a low per-seat cost with a fully capable yet "free" hypervisor that has block-level dedupe baked in... now that's appealing.


Of course - if anyone wants a laugh - you can always do what I did a couple of months ago... set up an OpenSolaris VM with 64GB of RAM, create a ZFS ramdrive (with dedupe enabled, of course!), share the volume over NFS, and stick some VMDKs on it... Now that was cool (and free).



@all Remember this feature isn't just for VDI! It can be used for RDSH also... Rapid provisioning (and caching) for RDS? That's unique and very very cool!


Some might argue that if you need 1-1 images - just use a dedicated PC/laptop! ;)


Also, we know from BriForum last year that Quest are working on user-installed apps, which should make non-persistent desktops very appealing (if done right)...


We're now using this feature for 1,000+ VDI images - with anything from 70-80% of reads coming from cache. Instant wins - faster boot/provisioning times and faster login times...



While I can't speak to the roadmap of vWorkspace in a place like this (our lawyers would take my keyboard away), I can say that this is HyperCache 1.0, that we are already very happy with the results, and that we are seriously investigating making it even more valuable to any vWorkspace customer, including some of the possibilities that were suggested on this page.


Having said that, one of the arguments in this article is that HyperCache might be of limited use because it requires non-persistent desktops, which are supposedly very uncommon. When we were considering HyperCache for inclusion in vWorkspace 7.5, we did a decent amount of market research, and it did NOT show that the bulk of people deploying VDI use persistent desktops. In fact, it was more of an even split. Does this mean that all these people are stupid? I would beg to differ. Gabe already mentioned some arguments for why people would choose non-persistent VDI, and here is a list that we gathered during our market research:


  • They know their apps will work on a desktop OS
  • All their helpdesk staff is trained on a desktop OS
  • System management procedures are built for a desktop OS
  • App vendor support does not exist for TS in the way that it does for a desktop OS
  • Peripherals work MUCH better in VDI


Some might say that all of this can easily be dealt with by learning more about SBC and choosing that route. Even if that were true, customers would not agree, because not all our customers are SBC experts. They are DESKTOP experts. Did they allow people to install apps when they were running fat clients (client/server)? No! So why would they want to go there in VDI? This is real, and it's what we hear from our customers in real life. It might not be bleeding edge, but it is reality. I call it the difference between the theoretical desktop <insert competitor here> and the desktop in practice. Sure, we have some bleeding-edge things in the works for the next versions of vWorkspace, but Quest also provides solutions for the real-life desktop TODAY.


While I am on my soapbox, I would like to make another point with regards to TS. I hope readers know that Quest vWorkspace was the first product to truly embrace the blended delivery model and offer TS and VDI in one product. Over the years we have brought them closer and closer together, and 7.5 does this even more. Think of it: HyperCache and HyperDeploy work just as well for TS as they do for VDI. There is no limitation there. You can provision and boot 150 terminal servers in less than 10 minutes on commodity hardware without jumping through hoops and creating separate provisioning infrastructures. TS or VDI is nothing more than a runtime for the APP (and the app is the only thing that matters in the end). One (TS) is a cheap, inflexible runtime and the other (VDI) is a more expensive, flexible runtime. What to run where? Only the customer knows (best). Of course we can help with our VDI assessment tool and Quest ChangeBASE, but in the end it is still up to the customer. We just make sure that the customer can provide the best (v)workspace for the right user at the lowest cost.


I'd better get off my soapbox now...



Some interesting points Michael!


I don't believe Gabe is criticising vWorkspace for a deficiency in helping deployments in either TS or VDI environments, but rather pointing out that, by its nature, the HyperCache feature's benefits in VDI use cases only apply where machines all boot from a common base disk image (which could have a persistent desktop state layered on top of it or otherwise).


As an example, the real use case I have at the moment involves finding an option to provide VDI to a number of software developers, all of whom have their own (often major) tweaks and personalisation requirements, and who require complete local admin control over their systems. If I were to leverage Linked Clones or Differencing disks to provision from a common master disk, and thus take advantage of features like HyperCache (or CBRC on vSphere), then the initial space savings and performance benefits would rapidly be overshadowed by the size of the deltas spawned by all those deviations from the norm.


I feel HyperCache is a great feature that has beaten VMware View's similar CBRC feature to market (ps - CBRC is there in View/vSphere 5, it's just not supported and needs to be hacked to enable!) - and I look forward to seeing how Quest expand these features to further synergise with some of the cool stuff built into Hyper-V 3 / Windows Server 8.


PS - I now feel like a tech-traitor for using words like "synergise". Sorry!



@Phil Dalbeck


If you deploy/provision a desktop pool with HyperCache/HyperDeploy... you can then persistently assign a user to that desktop...


VMs are not re-provisioned once a user is persistently assigned a desktop, but I think HyperCache will still be used for reboots? ;) - @Michael is that correct?



@Daniel Bolton, assuming you mean me, Michel, yes that is correct :-)



@Michel sorry, being lazy and C&P'ing from the post above ;)



Sorry for the typo Michel.


That may be the case, Daniel, but it isn't really what I'm getting at :)


The concern with linked-clone / differencing-disk based VDI desktops is that delta VHDs start small (for each newly provisioned child of the master VHD) but then grow hugely with ongoing use. I should be clear that this is a failure of current delta-based virtual disks rather than of any Quest tool - but the point remains that unless you're using them, HyperCache-type tools (which only cache blocks from a specific VHD) are ineffective.


The choice for VDI then really becomes:


1) Deploy from differencing disks and use HyperCache. All newly deployed VMs share a single master VHD with all the Windows guff and common apps on it - and thus all boot nice and fast.


The problem here is that unless you are using a layering tool and limit the customisability (and persistence) of that VM, the differencing disks holding all the changes can get big and unwieldy over time.


2) Utilise separate VHDs for every persistent desktop. Changes made will never grow the VHD beyond its originally provisioned total size, and users can customise the hell out of their VM.


The problem here is that you end up with a separate, full-sized VHD for every VM - systems like HyperCache then need to cache each one separately to be effective, which isn't realistic.


The underlying problem is the age-old profile and app layering problem in Windows. No layering and user-profile migration bolt-on I've ever tried has managed to successfully carry all the tweaks and underlying changes a software developer makes to their system within a week of a base OS install! I agree this is a niche within a niche - but it highlights the point Gabe's getting at: HyperCache is not useful in all cases just yet.


Once it supports dedupe (i.e. can hold one copy of a block shared across multiple VHDs, or can tie into an underlying dedupe system like the one MS are pushing with Windows 8), it'll be unbelievably useful. This initial release is very much a step in the right direction already, though!
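
For what it's worth, the dedupe-aware version described here would amount to keying the cache on block content rather than on (VHD, offset). A hypothetical sketch - this is wishful thinking, not anything Quest ships:

    import hashlib
    from collections import OrderedDict

    class DedupeCache:
        """Content-addressed cache: identical blocks in different VHDs
        share one cache entry. Hypothetical - not a Quest feature."""

        def __init__(self, max_entries):
            self.by_hash = OrderedDict()   # content hash -> block bytes
            self.block_map = {}            # (vhd, block_no) -> content hash
            self.max_entries = max_entries

        def put(self, vhd, block_no, data):
            digest = hashlib.sha256(data).digest()
            self.block_map[(vhd, block_no)] = digest
            if digest in self.by_hash:
                self.by_hash.move_to_end(digest)         # already cached, for everyone
            else:
                self.by_hash[digest] = data
                if len(self.by_hash) > self.max_entries:
                    self.by_hash.popitem(last=False)     # evict coldest (refcounting elided)

        def get(self, vhd, block_no):
            digest = self.block_map.get((vhd, block_no))
            return self.by_hash.get(digest) if digest else None

With that keying, the 80%-identical blocks across a thousand persistent VHDs would collapse into one cached copy each - which is exactly the "top 800MB of master blocks" idea from the first comment.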



Something to keep in mind when looking at using SSDs or IO Accelerator cards to address IO bottlenecks is that they are not inexpensive. On an HP DL360 configuration, a 200GB SSD starts around $3,400 and a 320GB IO Accelerator starts at $8,000.


The idea with HCC (Hyper-V Catalyst Components) is to make it easy and inexpensive by using commodity hardware and requiring little or no knowledge of virtualization, hypervisors, VLANs, scripting... to deploy virtual desktops or session hosts. Literally, one just sets up one or more Hyper-V nodes - running Hyper-V Server, Core, or full-blown Windows Server with the Hyper-V role - points vWorkspace at the servers, and lets it rip. No customizing the VHD into some special format, no special network settings, nothing to configure whatsoever on Hyper-V. vWorkspace takes care of the instantaneous replication of the parent VHD to all of the Hyper-V servers, optimizes the IO (making local SAS disks sufficient for non-persistent workloads), and boots and joins the machines to AD in seconds without any reboots.


It's also important to know that HyperCache is just a service that can be centrally configured, adjusted, disabled, and enabled without any service interruption, as the VHDs don't know anything about HyperCache. If it's disabled, one just loses the IO benefit; the VHDs are not interrupted in any way. Compare that to other solutions - this is uber simple to deploy and maintain, and is wicked fast.


Add in the ability to spin up tens or hundreds of Session Hosts/Terminal Servers in a couple of minutes...


It's not just about IO; it's about making things as inexpensive as possible, while also making them so simple that any desktop admin can deploy and manage them, and making the provisioning, de-provisioning, updating, and use of the desktops much, much faster.


Point, click, let it rip! Or if you're a PowerShell type, script away...



@Patrick Good point. If you accept that one of the major hurdles to any VDI project is cost - specifically the cost of storage - being able to achieve IO performance comparable to Fusion-io or enterprise-quality SSD with standard SATA hardware is pretty game changing. With HyperCache there is no need to upgrade your storage infrastructure to support VDI.


