What is app virtualization? Part 3: Layering

This article is Part 3 in a series about app virtualization.

In today’s article, we’re going to cover the technology called “layering.”

Layering is a popular term that we’ve been writing about for years. Gabe first wrote about it over 6 years ago, and a lot has changed since then: none of the three vendors he mentioned in his 2009 article (MokaFive, Wanova, and RingCube) exists anymore. That said, the term “layering” has gotten trendy (like “app virtualization”), and today there are dozens of vendors who claim to have layering solutions. To make matters worse, there’s no single broadly accepted definition of layering, so we can’t really point to one vendor and say, “You don’t have real layering!” because in their minds they probably believe they do. (Of course, whether a vendor has “real” layering doesn’t really matter. All that matters is that you buy a product that does what you need it to do.)

Getting back to the point of this article, I’d like to talk about layering (as I see it) and how it fits into the overall application virtualization landscape.

The basics of layering

The fundamental technologies that make up layering solutions are the exact same technologies discussed in Part 2 of this series (about virtual “locally installed” apps). At the most basic level, layering solutions have agents that live deep inside Windows (or perhaps in the hypervisor) that trick Windows into thinking that files and registry keys are all installed into their normal locations when in fact those files and keys are coming from different places. So again, this is just like virtual “locally installed” apps.

The difference with layering is how these virtual apps are packaged and bundled.

Back in Part 2, I talked about how locally installed app virtualization tools let you package up Windows applications into isolated bundles. The key point is that all your apps are still separate: Office is isolated from your billing app, which is isolated from your sales app, which is isolated from your CRM app, and so on. An enterprise could have literally hundreds (or thousands) of separate app packages for all their applications.
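
To make the idea of an isolated bundle concrete, here’s a minimal sketch in Python of what a single app package conceptually tracks: the locations where the app expects its files and registry keys, and where the package actually keeps them. The app names, paths, and field names are invented for this example; real packaging formats (App-V, ThinApp, etc.) are their own beasts.

```python
# Purely illustrative sketch of what an isolated app package conceptually
# contains. App names, paths, and field names are invented for this example.

billing_app_package = {
    "name": "BillingApp",
    "version": "4.2",
    # Where the app expects its files to be -> where the package keeps them
    "files": {
        r"C:\Program Files\BillingApp\billing.exe": r"BillingApp_4.2\billing.exe",
        r"C:\Windows\System32\billhelper.dll":      r"BillingApp_4.2\billhelper.dll",
    },
    # Registry keys the app expects, captured inside the package
    "registry": {
        r"HKLM\SOFTWARE\BillingApp\InstallDir": r"C:\Program Files\BillingApp",
    },
}

# Because every app carries its own files and keys like this, Office, the
# billing app, the sales app, and the CRM app never step on each other.
```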

The idea with layering is that instead of managing and packaging all of your apps separately, you group related apps into the same bundle, and then you make those bundles of related apps available to users on demand.

For example, if you are an enterprise with 5,000 users running Windows 7, chances are the “base” Windows 7 image for all 5,000 users is going to be identical.

Then on top of that, you might have a core group of “enterprise” apps that are also the same for all users. (That might include Microsoft Office, your time sheet and expense app, Adobe Acrobat, Dropbox, and the other apps that every single employee in the company uses.)

Then as you divide your company into smaller groups, each group of users is going to have their own apps. For example, on top of the enterprise-wide apps that everyone uses, you might have a different set of core apps for US-based workers and European workers. Then you might further have sets of apps per country, then per department, then per project, etc.

Figuring out which apps are used by which groups of users is nothing new, and it has nothing to do with app virtualization. Even in your company today, a typical user might have 20 applications: 10 company-wide, another 3 based on their location, another 5 based on the department they’re in, and 2 more based on the specific projects they’re working on.
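
If you sketched that out, it’s really just set math over group memberships. Here’s a toy example in Python that works out the 20 apps for that hypothetical user; the group names and app names are invented:

```python
# Toy example: a user's effective app list is just the union of the apps
# assigned to each group they belong to. All names here are invented.

APPS_BY_GROUP = {
    "company":      ["Office", "Acrobat", "Dropbox", "Timesheet", "Expenses",
                     "AntiVirus", "VPN", "Chat", "Intranet", "HelpDesk"],  # 10 company-wide
    "location:US":  ["USPayroll", "USBenefits", "USTravel"],               # 3 location-based
    "dept:sales":   ["CRM", "Quoting", "Billing", "Forecasting", "BI"],    # 5 department
    "proj:apollo":  ["ApolloTracker", "ApolloDocs"],                       # 2 project-specific
}

def apps_for(groups):
    """Combine the app lists for every group the user is in."""
    apps = []
    for group in groups:
        apps.extend(APPS_BY_GROUP.get(group, []))
    return apps

user_apps = apps_for(["company", "location:US", "dept:sales", "proj:apollo"])
print(len(user_apps))  # 10 + 3 + 5 + 2 = 20
```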

Using app virtualization (whether the remote apps I discussed in Part 1 or the locally installed virtual apps I discussed in Part 2) is a great way to package the apps and make sure they can all be easily deployed in whatever combination you need to your users. Layering is just a way to manage all these apps in different bundles.

Layering versus locally installed virtual apps

At this point you might be thinking, “Ok, so what’s the difference between layering and locally installed virtual apps?” As I mentioned already, it could be argued that there really isn’t too much of a difference.

Remember from Part 2, we talked about how locally installed app virtualization can take all the different files and registry keys that an application needs, and instead of installing them into all their different native locations on the hard drive, it actually installs them all into a single location and then tricks Windows into thinking the files are actually in the other locations where they’re supposed to be.

In that example, I talked about the “other” location being some folder on the C: drive, like a file that needs to be installed to c:\windows\important.dll really being installed to c:\AppHero\SomeApplication\important.dll. It’s still on the C: drive (because even the virtual app was “installed”); it’s just not in the original location.

Okay, so you know when you put a USB stick in a computer and it appears in Windows as a new drive? Imagine you put in a USB stick that showed up as the E: drive, and then configured your app virtualization product to use the E: drive as its secret location. What you’d end up doing is getting all the files and all the registry keys of your application installed onto the USB stick instead of onto the computer’s hard drive.

Again, Windows doesn’t know this is happening because the app virtualization product is tricking it into thinking the files and registry keys live in their proper locations on the C: drive, but you can see how you could use app virtualization to actually install all of your apps onto a USB stick. (So c:\windows\important.dll is actually installed to e:\AppHero\important.dll.)
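
Here’s a toy illustration of that translation step in Python, using the made-up AppHero paths from this example. A real agent does this inside the Windows file system and registry stack, not in a script:

```python
# Illustrative only: the kind of lookup an app virtualization agent performs.
# "AppHero" and these paths come from the made-up example above.

REDIRECT_ROOT = r"E:\AppHero"   # could just as easily be C:\AppHero

# Where the app thinks the file lives -> where it actually lives
redirect_table = {
    r"C:\Windows\important.dll": REDIRECT_ROOT + r"\important.dll",
}

def resolve(path):
    """Return the real location of a file, or the original path if it isn't virtualized."""
    return redirect_table.get(path, path)

print(resolve(r"C:\Windows\important.dll"))  # E:\AppHero\important.dll
print(resolve(r"C:\Windows\notepad.exe"))    # not virtualized, so untouched
```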

You can then take that a step further. If your app virtualization software lets all the apps use the USB stick as their storage location, you can envision that you should be able to remove that USB stick from your computer, walk up to a brand-new computer with no apps installed, insert the USB stick with all the app packages on it, and then the apps from the USB stick should show up on the new computer and work like normal apps, right? (Assuming you installed the app virtualization software agent on the new computer.)

This makes sense, right? With app virtualization, all the files and registry keys can be stored in any arbitrary location, that arbitrary location could be the E: drive, and the app virtualization software runs interference between Windows and the storage system so Windows thinks that everything is fine even though your app’s files and registry keys are coming from some alternate location.

Now take that a step further. In today’s world of virtual hardware and hypervisors, hard drives can be virtualized into a single file. (Microsoft uses .VHD files, VMware uses .VMDK files, etc.) On a Windows computer, if you see a .VHD file, that’s a virtual hard disk file, and you can double-click it and it gets opened up and looks just like another hard drive. You literally double-click the .VHD file and all of a sudden an E: drive appears that looks to Windows like a second hard drive, even though all the files on it are really coming from inside that .VHD file.
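
Attaching a VHD can also be scripted instead of double-clicked. Here’s a rough sketch that drives the built-in Windows diskpart tool from Python. The VHD path is made up, it needs to run from an elevated prompt, and real layering products do this through their own drivers rather than by shelling out like this:

```python
# Rough sketch: attach a VHD on Windows by scripting the built-in diskpart
# tool. The path below is invented, and this must run from an elevated prompt.
import os
import subprocess
import tempfile

def attach_vhd(vhd_path):
    script = f'select vdisk file="{vhd_path}"\nattach vdisk\n'
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(script)
        script_path = f.name
    try:
        # Once this succeeds, the VHD shows up in Explorer as a new drive letter.
        subprocess.run(["diskpart", "/s", script_path], check=True)
    finally:
        os.remove(script_path)

attach_vhd(r"C:\layers\enterprise-apps.vhd")
```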

So imagine combining the fact that a Windows computer can “mount” a .VHD or .VMDK file like a hard drive with the app virtualization products that let you install virtual app packages into a secret location so they look like natively installed apps to Windows.

Now take this a step further. Instead of having a .VHD file containing virtual apps on a computer’s hard drive, what if you put it on a network share? Then all the users in an office would have access to it (as their P: drive or whatever), and it would contain all the virtual apps that the users need, and the app virtualization software running on each computer would make it look like those applications were all installed locally.

This would be a great solution, because as an IT admin you would only have to maintain one single copy of your application packages on the network, and all the users would have the benefits of having their applications “installed” (because the app virtualization software on the users’ computers makes it look like those apps are installed), and since you’re using app virtualization none of the apps are conflicting with each other.

Taking this another step further, what if you made different .VHD files for each related group of applications? You might have one for the basic enterprise-wide apps that everyone in the company uses, a second for country-specific apps, a few more for department-specific apps, and even more for project-specific apps. Then each user could connect to their individual combination of .VHD files for the apps that they need, and to that user it would look like they have every single app installed locally even though the apps are coming from .VHD files containing virtual apps across the network.
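
Put in code, the per-user decision is nothing more than a lookup table mapping groups to .VHD files on the share. Here’s a hedged sketch in Python; the share path, group names, and file names are all invented:

```python
# Illustrative sketch: which layer VHDs on a network share should this user
# attach? The share path, group names, and file names are all invented.

LAYER_SHARE = r"\\fileserver\layers"

LAYER_VHDS = {
    "company":      "enterprise-apps.vhd",
    "location:US":  "us-apps.vhd",
    "dept:sales":   "sales-apps.vhd",
    "proj:apollo":  "apollo-apps.vhd",
}

def layers_for(groups):
    """Full paths of the layer VHDs a user in these groups should get."""
    return [LAYER_SHARE + "\\" + LAYER_VHDS[g] for g in groups if g in LAYER_VHDS]

for vhd in layers_for(["company", "location:US", "dept:sales"]):
    print(vhd)  # each of these gets attached by the agent at logon
```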

Putting it all together

This is essentially what layering is. It’s called “layering” because people think about it logically like each bundle of apps is a separate layer. You have a base layer with just the pure Windows operating system, then a layer on top of that with the enterprise-wide apps, then a layer for country-specific apps, then a layer for department apps, then a layer for project-specific apps, etc.

In reality these aren’t actually layers that are stacked on top of each other—really they’re just modular bundles of related applications—but the layering metaphor makes it easy to think about.
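
If it helps, here’s that mental model as a few lines of Python (entirely a toy; the layer contents are invented): picture each layer as a bundle of file paths, and the user’s effective view as the union of all the bundles, with the “higher” layer winning if two layers happen to provide the same file.

```python
# Toy model of the layering metaphor only. Each "layer" is just a bundle of
# file paths; the user's effective view is the union, higher layers winning.

base_os    = {r"C:\Windows\notepad.exe": "base image"}
enterprise = {r"C:\Program Files\Office\winword.exe": "enterprise layer"}
department = {r"C:\Program Files\Billing\billing.exe": "department layer"}

effective_view = {}
for layer in (base_os, enterprise, department):  # lowest layer first
    effective_view.update(layer)                 # later (higher) layers win on conflict

print(len(effective_view))  # the user "sees" all three bundles as one installed system
```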

This is essentially how layering works. If you’re using virtual desktops (like VDI) running on a hypervisor such as Microsoft Hyper-V or VMware vSphere, the desktops don’t have to connect to disk image files across the network, since the hypervisor itself can bundle multiple VHD or VMDK files together into a single VM’s disk image. But the concept is the same.

You can also use these “layers” to instantly add or remove groups of apps for users (just connect or disconnect them from the proper layer), and keeping everything in separate layers makes it easy to update the apps in a particular layer without breaking everything else.
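
In practice, “connecting or disconnecting” a layer for a user is just flipping an entitlement and letting the agent pick up the change at the next refresh or logon. A toy sketch, with the names and data structure invented:

```python
# Toy sketch of layer entitlements: add or remove a whole group of apps for a
# user by changing which layers they're assigned to. Names are invented.

user_layers = {"alice": {"company", "dept:sales"}}

def add_layer(user, layer):
    user_layers.setdefault(user, set()).add(layer)

def remove_layer(user, layer):
    user_layers.get(user, set()).discard(layer)

add_layer("alice", "proj:apollo")    # Alice instantly gains the Apollo project apps
remove_layer("alice", "dept:sales")  # ...and loses the sales department apps
print(sorted(user_layers["alice"]))  # ['company', 'proj:apollo']
```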

When it comes to layering products on the market, not every vendor uses VHD or VMDK files. (Some do, and others use their own formats.) But the concepts are the same across the board. The fundamental technologies that make layering a thing are the exact same technologies that power the locally installed app virtualization I discussed in Part 2. But when you bundle groups of related apps together and then let desktops connect to multiple groups of apps at the same time to build up their complete and custom Windows environment, you now have what most people refer to as “layering.”

Join the conversation

Could someone provide a list of vendors that offer layering solutions?  Also, do any of the layering solutions work for deploying images/apps to both VDI and physical desktops?



What problem are we really trying to solve with layering??


Good story, no doubt that Application Layering is now mainstream. Brian, you describe layering as multiple applications in each layer, but what problem are we REALLY trying to solve with layering?


Application layering was really touted as solving the industry problem of multiple base image management.


With new application delivery technologies it should really be the goal of any Windows organization to get to single image management. With a capable layering solution, and then technologies that complement layering like Microsoft App-V, it really is now possible to get to a single base image.


It's true that many layering solutions push you to get as many applications into a layer as possible for a department or group of users. There are different reasons for this (covered in my full blog link below). But what this does is shift the industry problem of multiple base image management to multiple application layer management!


Our solution (FlexApp) and methodology is different. While you CAN do multi-app layers with FlexApp, our performance, our truly “Portable” application layers, and our Micro-Isolation technology allow you to keep each application in a separate layer. The advantage of this is that you no longer have to manage big layers per group, and you can manage/update each application layer separately without modifying a layer full of applications.


Not to be too commercial, but with a single-application-per-layer strategy first, we're solving the industry problem, not shifting it to another bucket. Here is the link to my full blog on why Single Application Layers are Best: blog.liquidwarelabs.com/.../single-application-layer-methodology-is-best



We jumped on the layering bandwagon a couple years ago and are looking to jump off soon. It caused more headaches than it solved, and the man-hours spent managing 200+ app layers certainly didn't save us any time compared to 2 gold images. But even gold images are unnecessary if you get enough storage to do persistent VMs and patch with whatever tool you use to manage and patch your laptops/desktops. I don't see any reason for any unique-to-VDI tools to manage Windows, but maybe someone can enlighten me?



@notorious. I am taking a keen interest in app layering. Interested to hear about the headaches.


We currently have about 65-70% of our apps virtualised with App-V. However, being a lean IT shop, this creates some challenges. The time it takes to sequence virtual apps and the skillset required to create these virtual packages are becoming burdensome on a very lean IT team. I am looking at reducing the complexity of our environment and improving the turnaround time of delivering apps to our users.


Security is also another nightmare knocking at my door. Before, I was only concerned with MS Patch Tuesday, but now I have a list of over 300 apps I am having to patch, because security is now interested in vulnerabilities and end of life of the apps we are running (Wireshark, iTunes, Adobe AIR, PuTTY, Adobe Acrobat, etc., to name just a few). A lot of these apps or components of an app are virtualised, putting more pressure on the team to update these virtualised packages and turn them around.


I am looking to move away from App-V and go to layering, using App-V only for compatibility issues.


notorius, interesting that you have concluded that layers are not saving you time.
I had always thought that the initial setup sounds great, but the ongoing patching of groups of apps would be an issue.
What is needed is to completely separate each app into its own delivery container so that you only need to update one app at a time.
As Jason says, we've just shifted the image management problem to app layer management.

Simon, not wanting to make a sales pitch, but there are options other than App-V for virtualisation of individual apps. Many people have commented on the time taken to package App-V apps. In a previous job I was part of a project to look into alternatives to App-V (which we were using at the time), and from that we deployed Numecent's Cloudpaging (formerly Application Jukebox). Since then I have joined Numecent, so some would say my opinion is biased, but I think you should look into it as an alternative.


