MokaFive plans to release "layering" as its own product. Nice!

We've talked a lot about layering over the past few years. (If you're not familiar with the concept, Gabe wrote a great intro article in 2009.) We've also seen a fair number of products that offer some sort of layering capability, including Atlantis, MokaFive, Unidesk, Wanova... probably others I'm forgetting. And now that user virtualization vendors like AppSense and RES are talking about supporting user-installed apps, they themselves are getting pretty close to offering full-fledged layering products.

And while the architecture of Microsoft Windows means we'll probably never get that perfect "nirvana" layering product, there are a lot of scenarios where layering really makes sense.

So ok, we've got products that do layering, and we have scenarios where layering makes sense. The problem is that these two things don't always align. For instance, if you already have a desktop virtualization environment built and running, are you going to rip out your current product just to add one with a layering capability?

Meanwhile, in Redwood City...

Today's article is about MokaFive. Those familiar with MokaFive know that they have a client-based virtual desktop management solution (which supports both Type 2 and bare metal hypervisors). They handle the imaging, provisioning, management, backup, security, and deprovisioning of client VMs and disks.

As part of their solution, they developed a layering capability. First released in 2009, it's now pretty advanced. The main problem, frankly, is that you have to buy MokaFive's suite to get it. And if you want datacenter-based desktops, it doesn't really work. If you already have Citrix XenDesktop or VMware View, are you going to then buy MokaFive and try to mix it in? Not only are you spending a lot of extra money, you'll also end up with two completely different management systems for your disks, your policies, your users, etc.

So to that end, MokaFive has decided to release their layering capability as a standalone product. This is something they'll provide in an OEM capacity for other vendors (Quest Software, for example) as well as (hopefully) selling it directly to end user customers.

MokaFive "Virtual Layers"

Since it's not a real product yet, MokaFive doesn't really have a name for this layering capability. Right now they're sort of calling it MokaFive Virtual Layers or "Project Filo" (which is kinda like "file" and kinda like "phyllo" dough. You know... the one with the layers. Get it?)

The MokaFive virtual layer thing does NOT have its own console. Really it's nothing more than their layering driver, which can take a bunch of volumes (OS, apps, and user, for instance) and merge them together to provide a seamless single "image" to Windows while still allowing each layer to be backed up or updated independently.

The great thing about their product is that it's OS-, hardware-, and hypervisor-agnostic. They want to build a secure, managed, personalized container that can work anywhere. As for how you deliver, manage, update, move around, and secure the layers, well, that's up to you! (That's kind of the whole point of this product. If you want a solution that has that too, then you should buy the real MokaFive product. :)

How it works

Fundamentally, MokaFive's Virtual Layers is just a file system driver that combines three separate disk volumes into a single logical C: drive that Windows sees. I was able to play with a prototype of this capability, and I literally built a Win7 VM with three disks: system.vhd, apps.vhd, and user.vhd. But once I had Windows installed with the MokaFive driver, all I saw in Windows Explorer was a C: drive.

Then I could install whatever apps I wanted to. I could save data. I could do anything. The whole time the MokaFive driver was splitting up everything I was doing between the three volumes as it determined it should.
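Conceptually, what the driver is doing is a union of several volumes into one namespace, with a policy deciding which layer receives each write. Here's a toy Python sketch of that idea (this is NOT MokaFive's actual driver; all names and the routing mechanism are invented for illustration):

```python
# Toy sketch of file-level layering. Three "volumes" are merged into one
# namespace, and each write is routed to one layer so the layers can be
# backed up or swapped out independently.
class LayeredFS:
    def __init__(self):
        # Each layer maps a path to file contents.
        self.layers = {"system": {}, "apps": {}, "user": {}}
        self.order = ["user", "apps", "system"]  # read resolution order

    def read(self, path):
        # Reads resolve top-down: user beats apps, apps beat system.
        for name in self.order:
            if path in self.layers[name]:
                return self.layers[name][path]
        raise FileNotFoundError(path)

    def write(self, path, data, layer="user"):
        # A real driver's policy engine would pick the layer; in this toy
        # version the caller says which layer the write belongs to.
        self.layers[layer][path] = data

fs = LayeredFS()
fs.write(r"C:\Windows\notepad.exe", b"os bits", layer="system")
fs.write(r"C:\Users\brian\doc.txt", b"my data", layer="user")
# Windows would see one C: drive; the split is invisible to the user.
assert fs.read(r"C:\Windows\notepad.exe") == b"os bits"
```

The key property is that the merged view is computed at read time, so replacing the system layer underneath doesn't disturb anything stored in the apps or user layers.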

Then, playing the role of "admin," I created a second VM and pointed it to just the system.vhd. I booted it up, made some system-level changes, and shut it down. Then when I booted up my first VM again (with the three volumes, including the newly-patched system.vhd) I now had a patched OS complete with my own user apps and data from before the admin touched it.


Doing all of this with their prototype was a very manual process, requiring a lot of editing of the VMs' configurations to point them to different VHDs and stuff. But really that's kind of the point. If I were using the MokaFive layering in a real environment I would hook it into my existing system--whatever that happens to be. (SCVMM or vWorkspace or XenCenter or vCenter or a bucketload of PowerShell scripts or...)

As I said, fundamentally there's nothing in MokaFive's layering solution that precludes any specific architecture. Really any hypervisor, local storage, remote storage, whatever, should work. In fact it should work without a hypervisor at all, using physical disks and/or mounting VHD files directly.

This architecture also means that in addition to working on any hypervisor, you could also use just about any disk architecture you wanted. (Maybe you put the system volume locally on an SSD drive on each VDI host while you map the app and user layers to shared storage.)

Updating the MokaFive Virtual Layers is easy too. There's no "tool" or anything to capture changes. You don't need to put the layer in a special mode or anything. Really all you do is boot the system layer in a read/write way and make your changes. Then push that out (or make it available or whatever you're doing) to your users.
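That update flow can be sketched in a few lines (a hypothetical illustration building on the union idea above, with all names invented): the admin patches the one shared system layer, and every VM that references it picks up the change at its next boot.

```python
# Hypothetical sketch of the update flow: open the shared system layer
# read/write, patch it once, and every user sees the change at next boot.
system_layer = {"kernel": "v1"}               # shared system layer
user_layers = {"brian": {"doc.txt": "hi"}}    # per-user layers, untouched

def boot(user):
    """Compose the view a user's VM sees at boot time."""
    view = dict(system_layer)       # start from the shared layer...
    view.update(user_layers[user])  # ...and overlay the user's own files
    return view

assert boot("brian")["kernel"] == "v1"

system_layer["kernel"] = "v2"  # admin patches the shared layer once

# At next boot the user has the patched OS plus their own data intact,
# with no capture tool or special mode involved.
assert boot("brian") == {"kernel": "v2", "doc.txt": "hi"}
```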


In the most basic sense, the concept of layering provides a few advantages:

  • Smaller storage footprint, because you can get the benefits of persistent "user-owned" disks while leveraging the efficiencies of sharing the system disk.
  • Easier updates, because instead of running some process that updates the system disks of each of your users, you can just update the shared system disks and the users sort of "instantly" get that when they next boot or whatever.
  • Easy backup, since all the important user data is in the user layer, you just have to back up that one single file. (Or if a user moves to a new machine, it's easy to deliver that one file, which will include everything he or she needs.)

Layering drawbacks

There are several different ways that layering can be done (as we know since there are a lot of companies doing layering). Those who are doing it within Windows, like MokaFive, feel they have an advantage over those who do it from the outside at the storage level.

When approached from the storage side, layering is typically done at the disk block-level. This means that there's a single master shared disk image that lots of VMs use (meaning the master is read-only). Then when individual VMs need to save their "writes," they do so by writing them into another disk image that only contains their own personal changes. So each VM's system disk is really two disk images: the shared read-only master and the personal read/write "delta."

So far, so good...until an admin wants to update the master. The problem is that since this is all block-level stuff, when you make any change to the master, you invalidate all of the "delta" layers for each VM--it's a very destructive thing. The solution is to try to redirect as many of the settings and file changes as you can to a third volume (something akin to a "user data disk"), but unless you have some kind of additional third-party software to do this, you're never going to catch everything.
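You can see why this is destructive with a toy model (invented data, not any vendor's actual disk format). A delta stores only the blocks a VM has written, keyed by block number; every other read falls through to the shared master. Change the master's layout and the delta's block numbers silently point at the wrong data:

```python
# Toy illustration of why block-level deltas break when the master changes.
master = ["bootloader", "kernel-v1", "drivers", "registry"]  # shared image
delta = {3: "my-registry-edits"}  # this VM's private writes, by block number

def read_block(n):
    """A VM's view of its disk: its own delta wins, otherwise the master."""
    return delta.get(n, master[n])

assert read_block(1) == "kernel-v1"         # served from the master
assert read_block(3) == "my-registry-edits" # served from this VM's delta

# Now the admin patches the master and the block layout shifts.
master = ["bootloader", "patch-notes", "kernel-v2", "drivers", "registry"]

# The delta still overlays block 3, but in the new master block 3 holds
# "drivers" -- the VM now sees a corrupt mix of old writes and new layout.
assert read_block(3) == "my-registry-edits"
assert master[3] == "drivers"
```

Block numbers carry no meaning about *what* was written, which is exactly why a file-level approach (which knows which file each write belongs to) can survive a master update.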

This is the main reason why I never really liked thin provisioning or linked clones or any of the other storage-based disk sharing solutions, and it's one of the main reasons that the vast majority of VDI implementations today simply use 1-to-1 persistent disk images instead of these sharing techniques.

MokaFive, on the other hand (along with other vendors), is doing it right, since they're doing layering inside Windows at the file level. Of course the file level is not 100% perfect either. I mean, what happens if you apply an update to the system layer which is fundamentally incompatible with an application the user has installed into their own app layer?

This is where MokaFive claims that their two years of production support of layering comes in. They claim to have lots of experience with these "bad apps" and "bad updates," and that their policy engine (which determines which files get written to which layer) can work around these challenges. MokaFive's CTO John Whaley also explained that there are things customers can do to minimize the chance that you'll have a conflict between layers. Treating the system layer like a gold master, for example, and only allowing IT admins to make changes instead of general users, is a good start. (This means that you'd turn off auto updates and stuff and control the releases and updates to the system layer yourself.)

Layering isn't app virtualization, and it's not user virtualization

Ok, so historically we've had "app virtualization" which isolates the changes an app makes as it's installed and puts them into a separate package (or "layer"). Then we had user virtualization which isolates the changes that a user makes throughout his or her session and redirects them to a certain location (or "layer"). And now we have these layering products that attempt to do that with all changes.

So app virtualization, user virtualization, and layering: how do these all mix? What's the difference?

The user virtualization vendors have been making a pretty big deal over the past few months about the fact that they're not application virtualization nor do their products replace app virtualization. (In fact the two work hand-in-hand.) The same can be said about MokaFive's virtual layers when compared to both app virtualization and user virtualization. They all work hand-in-hand.

With respect to app virtualization, MokaFive Virtual Layers specifically recognizes XenApp streamed apps, App-V, ThinApp, and SVS packages. So when one of them is streamed, written to, or installed into Windows, the MokaFive layering engine realizes that the app came from a virtual package and ensures that everything that app writes is written into the app layer (since there's no need to pollute the user layer with a shared app that can be accessed again). This means that the "rejuvenation" process can be simple and won't ruin any apps, and that backups only have to be done at the user layer and really will include everything.
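The routing decision itself is simple to picture. Here's a hypothetical sketch (the detection rule and paths are invented; the real engine recognizes App-V, ThinApp, and similar packages by other means): writes made by a recognized virtual app land in the shared app layer instead of polluting the user layer.

```python
# Hypothetical write-routing rule: if the writer is a recognized
# virtual-app package, its writes belong in the shared app layer.
VIRTUAL_APP_ROOTS = (r"C:\ProgramData\App-V", r"C:\Program Files\ThinApp")

def target_layer(writer_path):
    """Pick the destination layer for a write, based on who is writing."""
    if writer_path.startswith(VIRTUAL_APP_ROOTS):  # tuple of prefixes
        return "apps"  # shared and re-deliverable; keeps backups lean
    return "user"      # everything else counts as personal data

assert target_layer(r"C:\ProgramData\App-V\office\word.exe") == "apps"
assert target_layer(r"C:\Users\brian\dropbox.exe") == "user"
```

Keeping redeliverable app bits out of the user layer is what makes the user-layer backup both small and complete.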

The same is true with user virtualization. At first you might wonder why you need a user virtualization product with MokaFive's layering. While there certainly is some overlap between the two products around how they isolate the user changes, the two products are fundamentally different. The user virtualization products, for instance, have full consoles that also allow you to control which settings are enforced on users, which they can change, which apps and group options users get, etc. The user virtualization products are also great for mixing-and-matching Windows XP, Windows 7, Session Host, 32- and 64-bit Windows, etc.

Is there an industry need for independent layering?

I guess we'll find out!

In MokaFive's view, there are actually two challenges to doing the whole desktop virtualization layering thing. One is the actual mechanical layering, and the other is doing the workflow to manage it all and get the layers where they need to be. And so this is where MokaFive comes in. They'll handle the layering. They'll do the rebootless domain join. They'll ensure that any user-installed apps persist between reboots. They have the extensive XML-based policy specification that defines how the layers work and are broken up.

But how do you create, provision, & deliver the three disks? That's where the workflow from somewhere else kicks in. (And of course, MokaFive would be happy to sell you their other product for that. But if you just want the layers on their own, now you can have them.)

When can we have it?

MokaFive Virtual Layers is available today for OEMs. A few of their larger customers are prototyping it. But so far they're not yet 100% sure how this thing will be released. Should they release this capability directly to customers? Or only via OEMs? Or...??

Join the conversation



I can see how layering on its own would be very useful for individual use (such as a home PC).  You could keep the base OS  and essential apps clean while installing all sorts of beta apps and utilities.  However, without a management "layer", it has doubtful value for organisations.

My view on the full MokaFive product is that the solution looks very good, but while it requires a Type 2 hypervisor, the killer is going to be licensing.  Specifically, Microsoft licensing.  Either you have to have SA (to get MDOP or whatever they call it now) or you pay twice - once for the host O/S and again for the guest.  Until the client hypervisor is proven (as far as I know it hasn't been released yet) this will stop us using it.  It's the reason we are looking at Wanova.


@Zojo  MokaFive BareMetal is currently in beta and being tested by many of our current customers. We plan to GA BareMetal in May.

On Type2, you are right. For corporate owned assets, the best scenario is to have SA. For personal or contractor owned assets - you just need to buy one copy of Windows.


"You should not modify the parent of a differencing VHD. If the parent VHD is changed or replaced by a different VHD (even if it has the same file name), the block structure between the parent and differencing VHD will no longer match and the differencing VHD will be corrupted."

Citrix should have learned from this that file-based intelligence will always rule when compared to block-based.

They found this out with XenServer IntelliCache & NFS or thin-provisioned local storage, but they also should have found it out with VHD file-based differencing.

I would still argue with you though, Brian, that this layering technology competes directly with User Virtualization. VHD file-based differencing/layering is, in my books, "User Virtualization light".

If you have a User Virtualization solution you will most likely not need layering because there won't be any VHD chaining due to the user layer abstraction. However, the opposite is not true. If I were to purchase a layering technology, I may be required to purchase a user virtualization technology as well.

This is another layer of complexity. Either Citrix should develop a file-based VHD solution or they should purchase AppSense. Either of those options would make all of this a moot point.

But I doubt it will be this simple.


@icelus what about user installed apps or departmental apps. User personalization tools cannot support that.

To reduce complexity,  we decided to keep layers very lightweight so that it integrates easily with your existing VDI installation.  

Between the AD console, a software distribution console, and your VDI tools console - there are already enough consoles that an admin has to manage. With MokaFive Layers, we did not want to add more complexity with yet another one. Why add your users, targets, images, etc. to one more console?

MokaFive Layers is truly lightweight - requiring no additional servers, storage, or consoles. Layers has a very small footprint, can install in any VM container, and will immediately offer easier updates, smaller storage, and better performance.

For integration - we provide  runbooks that will automate the workflows that Brian mentioned.

What do you think of this "no console" approach to Layers?



AppSense User Virtualization accomplishes this with Application Manager.

Make no mistake, I am a fan of MokaFive. But I just wanted to bring out the fact that you are breaking ground in the user persona space with this product, and will be competing with the other vendors in this space.

The big areas of streamlining IT service management are:

- Server/Client Hardware abstraction (single instance management using hypervisors)

- Server/Client OS abstraction (single instance management using OS streaming/OS provisioning)

- Server/Client Application abstraction (single instance management using App streaming/App provisioning)

- User Personalization abstraction (single instance management using layering/instance separation)

Customers trying to cope without a user virtualization/layering technology would be using a private/dedicated desktop delivery model, and if it comes down to that, they should go back to the drawing board to see why they are doing desktop virtualization in the first place.

To me MokaFive is now a competing vendor in the user virtualization space, so bring on the management to compete because that's all that matters. The management.

I don't mean to tear the announcement apart, it's actually exciting to hear.


Although there is some overlap, user virtualization is quite different from layers.  User virtualization allows you to track and manage user profiles and preferences and enforce policies on an application-by-application basis.  Layers simply captures everything the user does into user and application layers that can be managed independently, and allows you to update the base image independently of the other layers.  Although in some cases you may choose one in lieu of the other, there are other cases you want to use both, for example using layers to manage the image and a user virtualization product to share user profiles and preferences across different machines.

It's a similar story for app virtualization.  From a high level layering and app virtualization seem similar and share a few of the same benefits, but they are actually pretty different.  One is about isolation and the other is about composition.

Layered management primarily gives you these three benefits:

- Reduced storage costs and improved performance vs persistent images

- Easier image management because you only have to maintain one image

- Quick recovery for users through rejuvenation without losing user data

I just recorded a short (2 minute) screen capture video that demonstrates MokaFive Layers running on Hyper-V, running through the basic flows that Brian mentioned:


If this was sold to customers directly, as long as it provided some canned (yet customizable) management tools for integration with the leading virtualization platforms, I can see this being a great solution.  I can see it helping bridge the gap when deciding whether datacenter-based VDI should be provided as dynamic/pooled versus persistent.  Pooled can reduce storage requirements and simplify image management, but can make security investigations and forensics more complicated.  Persistent can provide users some additional customization flexibility and address investigation and forensics requirements, but can also drive up storage requirements with the 1:1 user:v-disk ratio.  It sounds like M5 layering could provide a hybrid model where the system partition provides the benefits of pooled, with a patch-once, deploy-everywhere paradigm, yet also provides the benefits of persistent, where anything the user does is routed to the apps and data partitions.  This latter data would be available for auditing and forensics, and would require much less storage than retaining the entire image.



I agree with you that ultimately it is all about management. Virtualization of any type is simply a means to an end.

With our complete product, MokaFive Suite, we have deeply focused on making management very easy - and we will continue to do so. Our forte is making the corporate container secure and managed independent of the type of hardware, OS, or hypervisor - you pick your poison.

With MokaFive Layers we chose to be independent of the delivery model too. It will absolutely still have management hooks - albeit not exposed through a console. Since it is a point solution - we want it to be integrated with the primary VDI console of your choice.


Today, we have chosen to initially expose this as an OEM-only solution to ISVs, MSPs and SIs, so that the end customer sees an integrated offering.  That said, if we see demand from end customers, we will package end customer management tools ready for integration.


This is very exciting stuff. Layered composition of the desktop streamlines the management of virtual desktops by an order of magnitude -- direct OPEX benefit. I am not too concerned about the storage savings because most storage solutions come with dedup [rather painful to set up] and there are linked clones etc. But the real deal is the OPEX savings from having to manage one copy of Windows or Office [if they are in the base image] as opposed to X number of VMs.

Most of the production VDI deployments that I see [even up to 30,000 users] all use a 1:1 static desktop model. In this world, the desktop management problem has simply shifted from the physical device to the virtual machine -- same labor and PCLM costs, except for some HW abstraction. This is the reason customers using VDI claim that 'there is no OPEX savings' with VDI.

We at Virtual Bridges firmly believe that a Dynamic Desktop model with a Base Image [we call this the Gold Master], separate app layers, and a user profile layer is the best way to deploy virtual desktops [VDI + Offline VDI + Branch VDI].

Now the question that people have is 'what is the best way to streamline the deployment of apps and user profiles" to a base image. Layers, App Virt, UEM, Workspace virtualization-- all seem to attack it from multiple angles.

Even on layering alone, there are many approaches -- Unidesk, MokaFive, Wanova, some initial stuff from Virtual Bridges, etc. And we know so many flavors of App Virtualization.

How do we explain the solution differences and pros/cons across all these, so that customers can make the right product/deployment decisions?

Perhaps a GeekWeek is overdue in this category.. Over to you, Brian/Gabe. :-)




I see a strong case for OS and application layers and  would like to see the management integrated into another product via an OEM arrangement to simplify management and support (one less party to point fingers at when things go wrong).

When it comes to the user virtualisation I prefer to keep this out of a layer and go with a user virtualisation product such as AppSense to allow granular control over what is and what is not retained. Do you really want to retain everything that a user does? I can see "profile bloat" quickly turning into "user layer" bloat.