A technical explanation of why the whole “layering” / shared image thing is so difficult

Written on Nov 24 2009


by Brian Madden

Yesterday’s main article on BrianMadden.com was about the disk images used for today’s VDI deployments. We specifically looked at whether the technologies that allow many users to “share” a single master image are actually ready for hard-core users now, or whether today’s large VDI deployments are limited to using “personal” images, where each user has his or her own image that’s saved from session to session.

The main point of the article (which jibes with what most of the commenters wrote) was that VDI was in fact not ready for large populations of disparate users to share single master images.

But why not? That’s what we’re going to look at today:

Background on shared images versus personal images

If you’re not yet familiar with the concept of “shared images” versus “personal images,” I suggest you first read an article I wrote earlier this year about Atlantis Computing’s ILIO product. That article has a whole section called “Background info on the VDI shared master disk image concept” that really gives you the background you need. (And of course, if you haven’t done so, read yesterday’s article “Is today’s VDI really only about non-shared personal images” to catch up on the conversation so far.)

Assume we’ve solved the disk image block and file problems. Now what?

Ok, so now that you’ve read the prerequisites, let’s assume for sake of argument that you’ve totally solved the “file versus block” disk image delta problem. (Maybe you’re using Atlantis ILIO or you’re reading this at a time when another vendor’s solved the problem.) Fine. So if that problem is solved, then what’s the big deal? Isn’t that the end of the conversation?

Hardly. (I mean come on.. this is BrianMadden.com! :)

There’s still a fairly complex logistical problem. For example, Atlantis can show you a badass demo of them building a master Windows XP image based on SP2 which is then deployed to users. Then they can show a user installing iTunes, which of course persists between reboots in their personalized image. (I know, I know.. Atlantis doesn’t have “images” per se, but just hang with me for a sec.)

So then Atlantis can show you how an admin can crack open the master image and make a major change, like updating it from XP SP2 to SP3. Finally they’ll show you how they can boot the user’s instance which is now on SP3 while still having iTunes installed!

This is super amazing, and people’s first reaction is typically something like “HOLY CRAP THEY’VE DONE IT!” In fact that’s what I even wrote in the article.. (I literally wrote “Wow. Wow. Wow. YES! This is what we need! Wow. Wow. Must breathe. Wow.”)

Except there’s a huge problem here. In the XP SP2 to SP3 with iTunes example, we were lucky enough that the same install of iTunes was compatible with both SP2 and SP3. But what if it hadn’t been? What if XP SP3 changed some file that would break iTunes? Even Atlantis’ file-level and block-level intelligence is worthless here. (Sure, it’s great that it knows which files and which blocks were changed, but it doesn’t have the smarts to deal with every app out there.)
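To see why, here’s a toy sketch (made-up file names and version numbers, not Atlantis’ actual format) of how a file-level merge can succeed perfectly while the app still ends up broken:

```python
# Toy model of why file-level layering can merge cleanly and still
# break an app after a base-image update.

def composite(base, user_delta):
    """Layer the user's changed files on top of the base image."""
    image = dict(base)
    image.update(user_delta)   # user-modified files always win
    return image

# Base image: XP SP2, with system DLLs the app links against.
sp2 = {"ntdll.dll": "5.1.2600.2180", "gdi32.dll": "5.1.2600.2180"}

# User installs iTunes: only *new* files land in the user delta.
user_delta = {"iTunes.exe": "8.0", "QuickTime.qts": "7.5"}

# Admin swaps the base to SP3: system files change underneath the delta.
sp3 = {"ntdll.dll": "5.1.2600.5512", "gdi32.dll": "5.1.2600.5512"}

merged = composite(sp3, user_delta)

# No file-level conflict at all: the delta and the patch touch disjoint files.
assert set(user_delta) & set(sp3) == set()
assert merged["iTunes.exe"] == "8.0"   # the app survived the merge...

# ...but if the app only works with the SP2 build of gdi32, it breaks anyway.
app_supported_gdi32 = {"5.1.2600.2180"}
runs = merged["gdi32.dll"] in app_supported_gdi32
print(runs)  # False -- the layering engine has no way to know this dependency
```

The merge itself never fails; the breakage is semantic, which is exactly why file- and block-level smarts alone can’t save you.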

So while that’s a great demo (and while there are many other reasons to use Atlantis, like the massive increase in IOPS you can get, which leads to more users per disk than anything else on the market), Atlantis isn’t the magic bullet that’s going to make all of your complex apps work on a single shared master image.

Enter the app virtualization vendors

At this point you think, “No problem, this is where the traditional app virtualization vendors come into play.” If we can just isolate/virtualize each app, then we in essence separate the apps from the underlying OS.

If only it were so simple! (Seriously, if it were so simple, we’d all be doing that with all our apps today.) But everyone knows that app virtualization has issues with certain apps, and that no solution out there can deal with everything.

Actually there’s a weird irony here... App virt products increase “compatibility” by sort of staying out of the way of what the apps are trying to do. So this helps broaden compatibility while hurting the chances that multiple conflicting apps will work side-by-side. In other words, there’s a sliding scale in app virt between “compatibility” and “isolation.” More of one equals less of the other. (This is kind of like the age-old “security” versus “convenience” balance.)
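Here’s a tiny sketch of that sliding scale (a made-up lookup model, not how any real app virt product is implemented). The whole tradeoff comes down to one design decision: where does a virtualized app’s registry/file lookup stop?

```python
# Toy model of the isolation-vs-compatibility dial in app virtualization.

SYSTEM = {"HKLM/Office/Path": r"C:\Office",   # the real machine's registry
          "HKLM/PDF/Handler": "acrobat"}

class VirtualApp:
    def __init__(self, name, package, isolation):
        self.name = name
        self.package = package      # keys captured at sequencing time
        self.isolation = isolation  # "strict" or "merged"

    def read(self, key):
        if key in self.package:
            return self.package[key]
        if self.isolation == "merged":
            return SYSTEM.get(key)  # fall through to the real machine
        return None                 # strict: the bubble is the whole world

word = VirtualApp("word", {"HKLM/Office/Path": r"C:\VirtOffice"}, "strict")

# Strict isolation: two conflicting Office versions can coexist happily...
assert word.read("HKLM/Office/Path") == r"C:\VirtOffice"
# ...but the app can no longer find the machine's PDF handler.
print(word.read("HKLM/PDF/Handler"))   # None

word.isolation = "merged"
# Merged mode restores integration, and reintroduces the conflict risk.
print(word.read("HKLM/PDF/Handler"))   # acrobat
```

Turn the dial toward “strict” and conflicting apps coexist but stop seeing each other; turn it toward “merged” and integration works but the conflicts come back.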

This is the problem that also dogs non-traditional app virt vendors like RingCube. (RingCube? Seriously? Hell yeah! Even though their traditional play has been super-portable VMs that reuse existing Windows client components, I see their future as a sort of “ultimate” app virtualization environment. But that will also come at the expense of interoperability, as they would essentially give each app its own Windows VM.)

What’s the solution?

So if conflicting updates at different layers are the problem, then what’s the solution? (If you’re waiting for a punchline here, there isn’t one. I actually want to know what the solution is!)

I’m starting to worry that since Windows was never designed to be assembled this way, we may never solve this problem. I mean there are just so many complexities and dependencies and incompatibilities and lines and arrows and layers... do you think we’ll ever see an answer to this problem before we’re all running Chromium?

I guess I want to be optimistic. We still don’t know what Unidesk is up to. And certainly Atlantis has made improvements since we first discussed them. And RingCube is getting better all the time. And let’s not forget all the traditional app virtualization and user environment vendors!

But is this a problem that can be solved with technology? Or is the whole idea of sharing a single image for “real” users just a pipe dream?

[PROGRAM NOTE: This is the week of the Thanksgiving holiday in the US, so this article is our last for the week. We’ll see you all back here on Monday November 30.]


appdetective wrote on Tue, Nov 24 2009 2:26 AM

I tend to agree that layering is the eventual way this needs to be solved for single-image management. That's a challenge to the vendors. It will take time, if indeed it can be made to work. Granted, there are exceptions, and for some, current technologies are good enough.

Therefore, in the meantime, for those who really GET why to do this, you need a simple way to connect to and manage persistent desktops that are centrally hosted. The vendors just need to enable this reality.

Tim Mangan wrote on Tue, Nov 24 2009 7:21 AM

The problem is the apps. Tools such as app virtualization succeed by grabbing too many changes, because we are all too busy/lazy to really understand what is being written and why. Simply layering with tools will just move where the bits are.

Your 20GB brand spanking new personal disk image bloats to 200GB over time because applications write a lot of stuff that should never be kept.  We need to better understand what all that data is and build the tools to automate the separation.

For example, when we built SoftGrid (now App-V), we segmented the changes that the app makes into portions that are related to the user and portions that are related to the machine. This was done because with roaming profiles there are some portions that should not roam. But even there, by default almost all of the changes are saved to the user delta image area and layered onto the base app image wherever the user goes. We ultimately want much of this data to be written to a place that goes away when the app ends.

Today, the tools are built (mostly) to grab everything. Improvements in the tools (which will take some time to figure out) will allow keeping less data; however, we fundamentally need new apps to start changing how they operate. (Gee, I seem to remember talking about this at BriForum!) Until this happens we will be stuck moving more around simply because we do not know what to leave behind. This is OK as a temporary strategy, but for wide-scale deployment we need it to be slimmer.

Early adopters of persistent-image VDI will be dealing with this the hard way. If you are so inclined, it makes a lot of sense to be doing small deployments now so that you better understand the problems and how well the upgraded vendor tools work. I mean, you really don't expect new tools to be 100% perfect, do you? Those armed with this knowledge will be better prepared.

Clearly, to me, while vendors can (and will) produce tools that are somewhat better, what we really need is for Microsoft to take a leadership position and address the root issue of application data. They are in the best position to "own the problem," owning the platform itself and being best positioned to reeducate the developers. Sadly, I see no evidence that they understand the problem at all.

Eli Khnaser wrote on Tue, Nov 24 2009 10:59 AM

We have debated this topic to death. We have overdramatized it to death. Like most of you here, I built my career on TS / XenApp, but I have to make some observations here:

1- We talk so much about personalization, but do we really allow that much personalization in TS? Actually, do we really allow that much personalization with regular desktops? Aside from allowing users to change the background, don’t we even lock the screensaver to a corporate screensaver? Are we really that flexible on personalization?

2- User installed apps: really? I mean really? Do we really allow them to install apps? I have read Brian so many times “…no no no no users should not be admins on local machines….” How are they installing apps? And iTunes is now a business supported app? Don’t we just deploy a corporate standard image that is locked down to the approved applications? Sure exceptions can be made and if the CEO says we will support app x then either add it to the gold image, virtualize or find a way to make it work. No different than in TS days.

3- Saving files: how many times have we told users, save your data on the file server, not locally, if you save it locally we will not support it if it gets lost etc… so they will consume the space regardless.

VDI, after all, is another form of server-based computing that has its fair share of limitations, just like TS / XenApp did back in the day, and we still used it. The difference is there is so much competition and so much going on in the VDI space that it is and will continue to advance 10 times faster than TS. Frankly, has TS really changed from 10 years ago? OK, some RDP and ICA changes and some “cute” features here and there. VDI will tend to scale better on future hardware and will evolve better.

Approach VDI as the next iteration of TS, if we over complicate it, it will never work but can VDI work today for large deployments, sure why not? You can have different gold images for the different types of users in your organization. You can have a dedicated VM for those “extreme” users and as layering technologies from Atlantis and Unidesk and others start to mature you can use those as a natural evolution or maybe Citrix, Microsoft and VMware have better solutions in the future as well.

Virtual Bridges wrote on Tue, Nov 24 2009 11:44 AM

Thank you Eli. Well said. Some sanity to this user-installed app mania.

Please take a look at Virtual Bridges' post to yesterday's topic. I think with VERDE we have the perfect marriage of gold master and persistent user data, and we have for some time. It is time to shift the discussion from when it will be available to when the server-oriented hypervisor vendors will catch up to where a more desktop-centric virtualization vendor already is.

John Lewis wrote on Tue, Nov 24 2009 11:56 AM

I'm with Eli on this one.  Brian's vision of VDI being used to support whatever an end user wants to throw at it is, as he says, "pie-in-the-sky".  And it should be.

Corporations (in general) do not support the use of their non-virtual PC assets in the manner that Brian describes, so why should we expect it from VDI? After all, most PC assets in the real, big-business world are being used by task workers whose set of applications is limited and definable. Big IT doesn't care if your iTunes is broken or you can't install a plugin to update your Facebook. It's about running business applications.

And for the most part we can do that with a mixture of:

a) Base images pre-packaged for 5-15 Task Worker groups that have 80-90% of the functionality they need.  So, yes, you won't have just one master VDI image.  But you won't have 2000 bloated persistent personal images either.

b) Virtualized applications via SBS, streaming apps, and portable apps that handle the other 10-20% of their business need.

So if Super User has business needs that surpass the capabilities of the VDI deployment then he/she can have a traditional desktop or a persistent personal VDI.  I know that statement will be countered by the "but that increases the complexity and thus the support costs" argument.  Yet in the long run, support costs should be reduced since Super User has always been there and has always needed more support.  So the support effort that's been gained by going to VDI will still remain as a benefit.  (At least we're hoping it'll be a benefit.  Isn't that one of the reasons we'd do VDI in the first place?)

I think the quest for the Holy Grail of Utopian Oneness is over if we want to move the industry forward.  However, I do see the value in discussing the ideal One Image as a method to send the message that we need something better.

Ron Kuper wrote on Tue, Nov 24 2009 1:00 PM

Hi Guys,

Personally, the thing that I would like to grasp is exactly why we need to have users installing their own apps.

Surely it is not a need for the majority of corporate cases (i.e. users) out there, right?

Can you think of one genuine scenario? Is it common?

Not only "task force workers" but also productive workers often need nothing but the stuff we know in advance they will need (be it Office, CAD applications, or even an IDE like Visual Studio).

As long as their 'products' (their documents, projects, debug binaries, etc) and their personal data (favorites, settings, etc) are kept why shouldn't they be "good to work"?

For these people, do whatever scales best and still serves the need. (TS for most, I assume.)

Compliance heaven for us!

So "all work and no fun," you say, dreading the thought of being a "user" in such a corporation, right?

But keep in mind that we are IT. We are computer guys/girls. We are not like the majority of users out there. We are a minority of use cases, aren’t we?

For us and the likes of us, a model of BYOC is great! We get to have exactly the hardware we want and exactly the OS/application stack we like, and when corporate stuff needs doing? No prob! Seamless windows of all the "work" applications we need, straight from the datacenter to our iTunes-engulfed environment!

I would much rather have this than have all of my fiddling and games polluting the corporate cloud.

You say "I want all thin-clients in my environment!" – Well, reconsider.

Some users, even a tiny few or a whole department, will surely need a different model of some kind.

Some will need a physical device (from which they will still have access to the applications and desktops from the cloud).

If you really push to pureness, you can give these people (and/or yourself) two virtual desktops – one personal, one corporate.

But then again –

Why should you care about managing the personal desktop?!

And why should users care about installing apps on their corporate desktop?


Ron Kuper wrote on Tue, Nov 24 2009 1:03 PM

@Eli Khnaser - You got me there with these thoughts!

By the time I write a comment with my "non-English" skills, the discussion is already sealed! :)

mpryce wrote on Tue, Nov 24 2009 4:09 PM

The answer is always going to be a hybrid approach: trying to do the all-in-one approach is going to constantly create show-stoppers.

Office task workers / clinicians / kiosks etc. get a standard image with all the apps either sent in from outside sources or installed in the base. Link off of the base all you want and do upgrades to the golden image as required; test appropriately and then deliver.

For the not-so-typical single-user images, who cares? Deploy them, manage them as you did when they were physical PC deployments, and lay them on NetApp deduped storage using PAM cards; that’s what we do.

One important key to all of this succeeding is picking a thin client with no OS to patch and from a vendor that can keep up with their own development life cycle.

The thin OS from HP or Wyse for View or XenDesktop is great!... but they release new functionality to the Windows Embedded clients first, sometimes six months before it hits the Wyse thin OS.

Windows Embedded on thin clients seems to be the best choice in order to get the best functionality, but now you potentially have two OSs to patch for each user: the embedded one and the VDI one.

The Wyse P-class (PCoIP) gets rid of this concern; it delivers the display and isn’t reliant on creating new patches for every new media format that comes out, like HDX, TCX, EOP, etc.


BTW, my daily desktop is a VDI: Wyse P-class, 2 x 22” monitors running 1680x1050, USB Bluetooth, iPod, 1 TB hard drive, Office installed, apps delivered via SoftGrid and ThinApp.

Chetan Venkatesh wrote on Tue, Nov 24 2009 8:46 PM

@Brian - Great article, very thought-provoking. My $.02 on the matter, since Atlantis is specifically mentioned. This is a lengthy post; my intention is to share my perspective and learn from others who visit this site. This is not a shill for my company or technology.


I would like to provide my explanation for why I believe there is not, and never will be, a silver bullet, at least not the way VDI is being architected given Windows’ architecture. Let’s forget about user-installed applications for now. Let’s instead focus on the core issue of reconciling operating system updates and application updates, and the idea that there is a "silver bullet" able to resolve two fundamentally incompatible sets of object code (whether because of file system or registry-level collisions) under all conditions, known and unknown.

The problem of non-determinism

Let’s recognize that OS and app modification or patching is fundamentally non-deterministic in nature. In an open architecture (such as Windows), the OS and applications are iteratively fixed/updated asynchronously and in isolation from each other. App developers have incomplete knowledge of OS changes while they update their apps, and vice-versa: OS developers have incomplete or limited knowledge of how the fixes/changes they are making to an already released OS will impact already written applications (which are also being fixed/updated in parallel). Thus we have a classic open-ended, non-deterministic problem. Since computers act deterministically, we have app and OS compatibility issues (not only when patching but even during installs) that must be resolved either through human intervention or a set of heuristics. If you want a silver bullet to resolve this situation, you by necessity have to make the problem deterministic. We could do the following to achieve determinism:

a. A closed architecture that has a stable, lengthy period of freeze after release, accompanied by exhaustively well-documented published APIs and sub-system behaviors.

b. A small set of application developers that work in closely coordinated, synchronous release cycles with the closed-architecture vendor to make sure that application updates are engineered to work with the OS and OS updates.

Microsoft Windows is not a closed architecture (on the contrary, it’s among the most open and well documented), and therefore it has a large and diverse set of application developers who write applications asynchronously against its API and documentation that mostly, usually work. Let’s also note that for the sake of argument I’m not including the amazingly diverse sets of tools, languages and compilers that can target the Windows runtime and even implement their own overrides to critical Windows subsystems (through stuff like filter drivers, Detours hooking, etc.). Hopefully by now we have an appreciation for the way the system mostly, usually works despite its non-deterministic characteristics. The next line is a piece of “personal opinion,” so please take it as such: the notion that vendor “X” has a silver bullet is complete nonsense. Engineering is a world of choosing the right tradeoff between benefits and side-effects to solve a given need. As engineers, we pick and choose which benefits we want versus which side-effects we can live with.

App Virtualization

I think it is simplistic to think of app virtualization as purely an app-compat mitigation strategy, in VDI or in the physical world. Enterprise IT’s interest in AppVirt for app compat (or just needing to run conflicting apps side by side) is secondary. In my experience, AppVirt’s real value comes in solving a real problem with application regression testing against base OS builds. Regression testing is painful and expensive, and IT practitioners adopt AppVirt to solve this issue in enterprises with lots of applications (in the hundreds or thousands).

Most applications are dynamically linked to optimize the runtime and leverage existing OS-provided components for fulfilling dependencies. Without dynamic linking, each app would have to install its full dependency payload (a very, very large installation size) and over-write existing versions of system-installed DLLs with its own versions (in the process breaking already installed applications). Dynamic linking forces all apps to share a common DLL provided by the base OS. Before AppVirt came along, desktop build engineers installed and tested dynamically linked apps against various base OS configurations to certify and release OS image builds for their desktops. This is super expensive, complex and tedious (and since the OS and apps are changing all the time due to bug fixes, feature additions, etc., it is very challenging to certify and release all the apps in an enterprise). AppVirt is among the coolest hacks of all time, because what it really gives you is the ability to statically link all dependencies to a dynamically linked application (without needing the application developer’s involvement or permission). By building a virtualized app package with all the app’s external dependencies in one discrete container, a desktop build engineer can guarantee the application’s compatibility against a much broader set of OS variations (and even across versions of the OS, like between XP, Vista and Win7).
The trade-offs with this approach are well known.
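To illustrate that point (with made-up DLL names and version numbers, not any real package format), the whole trick boils down to a package-first lookup order: the app always resolves against its own bundled copies, so the base OS image can change without re-certifying the app.

```python
# Toy model of AppVirt as "static linking" for a dynamically linked app:
# the package carries private copies of the shared DLLs it was tested with.

def load_dll(dll, package, os_dlls):
    # Package-private copy wins; otherwise fall back to the OS-provided one.
    return package.get(dll, os_dlls.get(dll))

package = {"msvcr80.dll": "8.0.50727"}    # captured at sequencing time
app_needs = {"msvcr80.dll": "8.0.50727"}  # the version the app was tested with

for os_build in ({"msvcr80.dll": "8.0.50727"},   # base build it was tested on
                 {"msvcr80.dll": "9.0.21022"},   # later OS image
                 {}):                            # DLL removed entirely
    resolved = {d: load_dll(d, package, os_build) for d in app_needs}
    assert resolved == app_needs   # same result on every base image

print("app certified once, resolves identically on all three base images")
```

That is the regression-testing win Chetan describes: the dependency surface the app sees is frozen inside the package, regardless of what the underlying image does.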


In my view there are two kinds of layering approaches today: (1) instance-level layering and (2) image (disk)-level layering. Instance-level layering is about componentizing the monolithic Windows instance (the runtime instance), while image-level layering is about componentizing the monolithic Windows volume (the disk or image). Regardless of the type of approach, layering gives us a way to start to slice the big, fat, static, monolithic Windows OS into discrete components that may then be managed independently. Most VDI architectures use instance-level layering, repurposing AppVirt and profile redirection/virtualization to achieve separation of apps from each other, the underlying OS, and the overlying user session. Atlantis Computing’s ILIO, on the other hand, does image-level layering. ILIO provides the ability to componentize the OS, its configurations and variations, and then do the same with the overlying applications, app configurations, variations, etc., all the way up to the user’s own settings. Once componentized, each layer can be managed and patched independently of the others. We dynamically composite layers together at run-time to build a complete, semantically coherent image (disk). This image boots and has all the properties required for it to be a persistent, stateful desktop or a non-persistent, stateless desktop. We inherit the non-deterministic nature of Windows when it comes to patches. To overcome that, we provide tools that allow you to detect conflicts and choose strategies to resolve them before patches are rolled out to end users and blow up because of incompatibilities. Our approach is not mutually exclusive with instance layering, and in fact works very well along with AppVirt to provide better management and patching of the OS and apps than AppVirt alone. Those apps that are not a good fit for AppVirt (and there are many) can simply be installed into an ILIO layer and managed as an independent component, much like a virtualized app container.
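As a rough sketch of the image-level idea (my own toy model, not ILIO’s actual mechanism): composite the layers in order at boot, and flag any file that two independently patched layers both changed, so an admin can resolve it before rollout.

```python
# Toy model of image-level layer compositing with conflict detection.

def composite(layers):
    image, conflicts = {}, []
    for name, files in layers:
        for path, content in files.items():
            if path in image and image[path] != content:
                conflicts.append((path, name))   # needs an admin decision
            image[path] = content                # later layer wins for now
    return image, conflicts

layers = [
    ("os-sp3",     {"system32/gdi32.dll": "sp3", "boot.ini": "v1"}),
    ("app-itunes", {"Program Files/iTunes.exe": "8.0"}),
    ("app-patch",  {"system32/gdi32.dll": "itunes-private"}),  # collides!
    ("user",       {"Documents/notes.txt": "mine"}),
]

image, conflicts = composite(layers)
print(conflicts)   # [('system32/gdi32.dll', 'app-patch')]
```

The compositing itself is trivial; the hard part, per the whole article, is deciding what to do with each flagged conflict, which no amount of file/block intelligence can fully automate.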
ILIO does other interesting things around storage optimization and IOPS performance, but that’s not relevant to the discussion here.

Is there a silver bullet? From a purely experimental viewpoint today: yes! And it will become feasible and practical in the future, sometime around 2015. In this new approach, each application is installed and executed in its own VM with a new kind of VDI broker (one that does more than connection brokering). This new broker, which I call a mesh broker, connects a lattice of co-operating VMs. The mesh broker marshals user and application state calls between VMs in the lattice. The apps themselves would publish their windows (seamlessly) into a blended desktop. The desktop evolves from our current limited idea of an OS with applications and a GUI to a dynamic user session that aggregates enterprise and consumer services and applications.

For those interested, I spoke about this approach at BriForum 09 in Chicago during my session “Envisioning the desktop of 2015” (the presentation and video are available on the BrianMadden site).

Ron Kuper wrote on Wed, Nov 25 2009 5:13 AM

@Chetan Venkatesh - I love your philosophical comment.

It made me think about how only now we “open” people can really appreciate MF (mainframe) technology. (Which is still a kicking industry in the tens of billions of dollars.)

About the sentence in your last paragraph - "This new broker I call a mesh broker connects a lattice of co-operating  VMs. The mesh broker marshalls user and application state calls between VMs in the lattice. The Apps themselves would publish their Windows (seamlessly) into a blended desktop."

Haven't you just described a multi-silo XenApp farm?

(I mean, yeah, with some good profile/environment management tool)

Dan Shappir wrote on Wed, Nov 25 2009 7:10 AM


We already have a mesh of co-operating VMs that marshals user and application state calls between VMs and blends apps into a single desktop. This mesh is called Windows. The Windows operating system, like most modern operating systems, wraps apps (processes) inside a virtualization layer (virtual memory, virtual CPU, ...). The problem is that Windows’ built-in virtualization is based on technological concepts from the 70s, and hardware limitations from the 80s and 90s. The correct approach IMO is to bring Windows into the 00s rather than devise hacks (either inside Windows or underneath it) that try to transform it into something it isn't. Whether or not Microsoft eventually does the Right Thing remains to be seen. If they don't, they risk losing the killer apps to a platform that does provide such decoupling (though obviously at a price): the Web.

Martin Ingram wrote on Wed, Nov 25 2009 8:16 AM

I see two substantial threads in this conversation: why do users need UIA, and is/can layering be the solution?

Customer realization of the need for UIA has felt to me like a cartoon where a thread in a sweater gets pulled and quickly the sweater completely comes apart. Many of my early discussions with customers were disappointing, with people saying “why would we let users install things?”, and there is a significant proportion of users for which that is true today and will continue to be true in the future. What I found, though, was that when I next hooked up with those same customers they would frequently say that they realized they did need UIA and wanted to know more. I think there are a couple of reasons for this. Firstly, a lot of the people I talk to have a background in TS and have been serving users without any ability for them to install applications into the TS (not that we would/could let them anyhow). Secondly, all platforms other than TS have typically allowed some form of installation: sometimes users explicitly installing applications, in other cases automatic installation of ActiveX and other plugins. In componentized implementations all of these would be thrown away when the user logs off. The thought process I have seen customers go through is that they first find a significant proportion of users who need to add additional applications, and then they realize that plugins increase that proportion drastically.

As an example, think about webinar services. Most of these require the installation of at least an ActiveX component, and each service is different, making it difficult to include all the plugins as standard. Some organizations will be able to mandate that only a particular webinar service is used, but most will find pressure from their people who sell, and from anybody that buys goods or services, to be able to use a wide range of webinar services. This is where the sweater starts to really come apart, and before long comes the acceptance that actually most users will need UIA.

The second major thread is whether layering can be a solution. There are definite challenges in terms of application compatibility in abstracting user-installed applications away from the underlying OS, but we do have a long-standing model through supporting backward compatibility which, combined with dynamic linking, allows applications to survive most minor updates. It is not a perfect model, but it does work in most cases. In any model where we layer or isolate, this becomes harder to manage, because different versions of common components may have been installed for different applications and these need to be resolved against the updated components; but it can be addressed.

The bigger question is whether thinking about layering up individual applications makes the most sense. As I look at application usage in larger organizations I do not see a layered, hierarchical world. What I see is a number of very common applications and then, beyond that, almost all structure disappears. This looks to me to be not a layered or hierarchical problem but a relational one, where we need to be able to establish and represent links between users and any of the applications they need. I think we risk disappearing down a rat hole if we think we can model application usage as simple layers.
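To make the relational view concrete, here is a minimal sketch (purely hypothetical, not any vendor's implementation) of user-to-application entitlements as a many-to-many relation rather than a stack of layers; the class and names are invented for illustration:

```python
# Hypothetical sketch: entitlements as a many-to-many relation
# between users and applications, instead of a layer hierarchy.
from collections import defaultdict

class EntitlementStore:
    """Relational mapping: any user can be linked to any application."""

    def __init__(self):
        self._apps_by_user = defaultdict(set)
        self._users_by_app = defaultdict(set)

    def grant(self, user, app):
        # Record the link in both directions so lookups are cheap.
        self._apps_by_user[user].add(app)
        self._users_by_app[app].add(user)

    def apps_for(self, user):
        return self._apps_by_user[user]

    def users_of(self, app):
        return self._users_by_app[app]

store = EntitlementStore()
store.grant("alice", "Office")        # very common app
store.grant("bob", "Office")
store.grant("alice", "WebexPlugin")   # long-tail, user-specific plugin
print(sorted(store.apps_for("alice")))  # ['Office', 'WebexPlugin']
```

The point of the sketch is that the common apps and the long tail live in the same flat relation; no ordering or layering between applications ever has to be decided.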

Martin Ingram (AppSense)

Ron Kuper wrote re: A technical explanation of why the whole “layering” / shared image thing is so difficult
on Wed, Nov 25 2009 10:10 AM

Hi Martin,

In the banking industry in Israel giving people Internet access from the corporate network is forbidden by government regulations. Instead we use an isolated Citrix farm and/or personal laptops.

I guess that when you do allow Internet access from your company's PCs and now want to move down the VDI path, then web plugins could be an issue. But a lot of these ActiveX controls live in user space and do not even require admin rights on the machine. So I guess that with some effort these can be saved across non-persistent sessions using environment management suites such as yours, no?

Personally I will not rush into giving corporate users with admin rights on their machines full Internet access. It's a virus/hacker honey pot.

So if we separate and keep two environments, one corporate and one personal, it still saves us a lot of Opex on the corporate env.

The "personal env" could be one supplied by the company, a BYOC-style device, or your home PC/Mac for home workers.

Access to the corporate virtual desktop could be from the personal desktop (or from anywhere) and I still don't see a reason to have UIA on the corporate desktop.

Adam wrote re: A technical explanation of why the whole “layering” / shared image thing is so difficult
on Wed, Nov 25 2009 10:48 AM

Hey All,

      Just throwing in my 2 cents, as I echo the sentiments of many of you. Do we actually want users to be able to install applications as they see fit? I tend to think not. Maybe in a much smaller company. But if you're a small company you're not running VDI anyway.

Persistent cache disks don't really work because of how they grow. A locked image looks to be the way to go, but we do need some way of retaining the personality of the end user. This would come by way of advanced profile management tools. Citrix has one, VMware is working on one; both companies see the need for this component. These profile management tools go way beyond what simple roaming profiles could do. Brian, have you actually implemented Citrix Profile Management yet? The truth here is that when we turn all this stuff on, it gives us as administrators the power to finally decide what gets saved for the end users and what doesn't. You have to keep in mind that VDI should be making our lives easier, NOT more complicated, when it comes to desktop management.

A corporate desktop is a tool for the employee to get their job done. They have no business installing whatever they want on that system so they can waste time goofing off. If you want to burn hours every day playing some silly game, do it when the company isn't paying you.

Also, the idea of early VDI didn't work for another reason that you missed. Yes, there were storage implications for a 1-to-1 relationship or "persistent disk," but you missed a huge one here. If you give the user the right to make changes to the base image, ALL YOU DID was move the problem from the desktop to the expensive data center! Users will corrupt their image, just like they normally do. You think it's difficult and a time-waster to fix someone's regular desktop, try troubleshooting a virtual desktop that blue-screened from spyware.

Also, let's keep in mind that the big VDI companies have seen and recognized "user-installed apps" as an issue. I am going to speak a bit of futures here, as no company has actually shown us this yet, but the answer is the Type 1 client hypervisor.

Maybe how this will turn out is actually pretty simple. Everyone gets a corporate, locked-down, completely IT-controlled virtual desktop, and (if the person is important enough) a personal image that they can do whatever they want with, that IT does not manage and, by corporate mandate, will not fix or repair. This image will have no access to corporate data and can only be used by the end user in their "goof-off time."

I think, in general, we are extremely close, and for many companies VDI is already there. Going from standard desktops to a controlled VDI image will have some growing pains for the end users, but when the day is done and we as administrators can finally stop putting massive daily effort into fixing desktop problems, we have done our jobs. We have simplified the environment for both IT and the end user, driven company initiatives, and saved the company money through reduced administrative overhead.

Ron Kuper wrote re: A technical explanation of why the whole “layering” / shared image thing is so difficult
on Wed, Nov 25 2009 12:23 PM

Hi Adam,

I agree with most of what you said, but I think troubleshooting and fixing a virtual desktop is MUCH MUCH easier than a traditional one.

That's one of the main reasons why we are all here having this discussion :)

Kimmo Jernstrom wrote re: A technical explanation of why the whole “layering” / shared image thing is so difficult
on Wed, Nov 25 2009 2:47 PM

I really like this discussion and all the comments.

My take on the matter is quite simple: bring along the client hypervisor with smart management.

Corporate apps in their own slice of a layer, plain other workloads in another slice: intermingled, co-acting, tied together, separated, managed, non-managed, and so forth, as seen fit for the unique requirements. Choice, for better or worse. Each organization can decide; what's most important is that the tools of choice will be there.

Shawn Bass wrote re: A technical explanation of why the whole “layering” / shared image thing is so difficult
on Wed, Nov 25 2009 3:37 PM

How many of you out there have an IT policy that prohibits connecting personal laptops, etc. to the corporate network?  If you do, then quit waiting around for Type-1 client hypervisors, because they won't do you any good.  Even if the corporation supplies the hardware, they still aren't going to want a "personal" VM running on the same network as the rest of the corporate network.  Certainly not before those companies have invested heavily in NAC/NAP or some future, better quarantining solution.


Dan Shappir wrote re: A technical explanation of why the whole “layering” / shared image thing is so difficult
on Wed, Nov 25 2009 4:21 PM

Another form of layering, in addition to the ones discussed here, is merging applications running at different locations into a single desktop (yes, it's similar to Chetan's mesh proposal, which I "attacked," but if you can't contradict yourself a bit then where's the fun? :) What we're doing with our connection broker (I work at Ericom) is utilizing Seamless, Reverse Seamless, and Unified Desktop (sort of like Fusion) to combine applications from different sources into a single, coherent desktop:

1. Local applications, pre-installed or streamed

2. Applications on a VM, pre-installed or streamed

3. Published applications from TS or VM, pre-installed or streamed

And the fun part is that the user doesn't know which application is coming from where.

Harry Labana wrote re: A technical explanation of why the whole “layering” / shared image thing is so difficult
on Wed, Nov 25 2009 8:48 PM

I find one of the biggest problems is definition, which is the source of much of the debate and confusion. I've been very upfront with many people about my disdain for the term "User Installed Apps," and I share many of the security and horrid-IT-process concerns expressed by others. My opinion here is that it's the permissions given to the user for their role that make all the difference.

If you have a locked-down environment and users can still install ActiveX controls etc., then I consider that to be more of a personalization thing than user-installed apps. Lots of good things can be done here to make personalization richer and more granular within the confines of a well-run organization.

If you have a need for the user to have admin rights, then it's a whole different ball game. There are security risks galore if you allow those machines to connect to your network, malware being a really good example. However, there are use cases that are valid: developers, testing widgets, easily extended to the power-user concept. Layers may be a way to address this over time. Isolated environments within an existing network may be another in today's world. I've also seen some people simply manage the time that people can be admin for: in effect, you become superuser for the install transaction and then it goes away after the transaction is complete. Often people think they need to be admin all the time, when in reality I am sure it's only at the times when it's truly needed; there is a Power User role in Windows, after all :-)

Type 1 hypervisors certainly are another way. Shawn Bass's point about trusting hostile VMs on your network without NAC etc. is a valid one. However, as more and more companies start to think about concepts like WiFi they will have to start thinking about this anyway. Perhaps treat every endpoint as hostile and run an endpoint scan; that's an idea I've thrown around in the past to figure out ways to enable the real-world need to allow some people to bring their Macs (not officially supported operating systems) to work.

All of the above is different, IMO, from enabling users to select which application to use. That's for another time. Happy Thanksgiving to all US readers.

Kimmo Jernstrom wrote re: A technical explanation of why the whole “layering” / shared image thing is so difficult
on Wed, Nov 25 2009 10:02 PM

@Shawn @harry @all

Take 2.

All of us commenting here are environmentally damaged. We have always, more or less, been in full control of our own computing environment as well as forming the computing environment for others, as best we could, as best we can.

This is quite a contrast to the reality of most of the users that we are to serve. Enough of that, I guess my point is obvious and plain.

What I do not like is when people approach the type 1 client hypervisor with the dimmed glasses of the past, or the colored glasses of the new blue, instead of clearly seeing the plethora of opportunities for tomorrow.

This is my critique.

What I see is an opening to overcome some of the problems that have plagued IT, specifically with regard to the limitations of remoting; now we might finally collate the local and the remote to solve some of these issues of the past.

To be clear, I most certainly see how all of this can be managed and purposed for the best of my advice, yet I also see the other possibilities through the glasses of the new blue.

Certainly the marketing goes in shades of blue (series XC). I regret that there is no easy option to properly evaluate the alternatives without strings attached, and this time I'm not talking about VMware ;-)

Martin Ingram wrote re: A technical explanation of why the whole “layering” / shared image thing is so difficult
on Thu, Nov 26 2009 8:51 AM

@Ron, @Harry, @Adam, @All

You raise some interesting points. Firstly, if users have no need to install applications, or regulations do not allow it, there is no reason for you to let users install applications. I talk to lots of people in finance and government who lock down users in this way, and it has been pretty much mandatory for terminal services implementations. However, there are a large number of users in the broader user base who need more.

That could be just persisting ActiveX plugins between sessions, and it was interesting to hear people's perception that this is personalization. There are multiple ways you can look at user environment management. One way is to think about each of the types of data managed: personal settings, configuration, applications, data, etc. Another way is to consider who is making the change to the user environment: is it the user themselves, or is it something IT is setting up for them? In the first view an ActiveX plugin definitely fits the 'application' data type, whereas in the second, because it is installed following a user action (usually visiting a web page), it could be classed as 'personalization'.

Beyond the plugin requirement we then get into widely varying needs: in many cases there will be a need for users to be able to install internally developed applications, in others particular commercial applications, and in a smaller number of cases any application at all. The key here is that application installation entitlement is not one-size-fits-all but a policy that varies across the user base. The alternative of allowing a large proportion of users to install whatever they like is seriously unattractive; however, being able to allow users to do what they need with the lowest impact on IT makes a lot more sense.

Martin Ingram (AppSense).

Chetan Venkatesh wrote re: A technical explanation of why the whole “layering” / shared image thing is so difficult
on Sun, Nov 29 2009 11:31 AM

@Dan Shappir - No disrespect intended, but it sounds like you're describing the simple mechanisms Windows has to map multiple segments of non-contiguous physical memory to a contiguous logical addressing scheme as "memory virtualization". Also, a separation of kernel-space and user-space processes doesn't mean processes become wrapped in a "virtual CPU" and "virtual memory" just because they can't write/modify pages directly or can't context-switch by themselves.

By your own definition, then, Windows 3.1, OS/2, even MP/M-86 from 1979 all qualify as modern operating systems that "virtualize". Hilarious!

As to the "right thing" that Microsoft must do to bring operating systems into the 00s? IMO, they are doing the right thing with Azure: a data-center fabric OS designed from the ground up for multi-tenant workloads.

But neither Azure nor mighty Microsoft can resolve the fundamental math behind determinism and non-determinism. In this universe, non-deterministic problems like patching don't lend themselves to being solved by Microsoft doing the "right thing". There might be other universes out there where they do, and conceivably one can always move there when Microsoft doesn't do the right thing anytime soon (because it can't). The inter-verse spaceship could be powered by the vapor emanating from blog posts about non-existent yet soon-to-be-released layering products, and could achieve faster-than-light speeds with the right mix of tweets and RTs by the exec team. But I'm digressing......

Pierre Marmignon wrote re: A technical explanation of why the whole “layering” / shared image thing is so difficult
on Sun, Nov 29 2009 4:55 PM

My 2 Cents on this subject ;)

As @JimMoyle already mentioned while commenting on the previous article, if we want mass adoption of the new "centralized desktops" (and I'm using this term to cover all the available cases), then we do need to give users something they'll adopt.

And, as I wrote in an earlier blog post, the Desktop reality is really complex and is not the same for each company.

For sure, typical task workers won't need any extra applications and could deal with a shared image that resets at logoff, with personalizations saved. But then what?

High-level executives? IT staff? Developers? CEOs?

Depending on the company, they may have or may not have the right to install applications.

I totally agree that it's giving headaches to the CTOs, but that's what's in place today, and a new solution, if chosen, will have to deal with these particular cases.

Why? Because if the C-level execs want an iPhone, they'll get an iPhone. And then? The IT staff may not want to package iTunes along with all its updates, and would rather let them manage it themselves!

To be able to get mass adoption, a solution must handle all cases, even if sometimes that's not conforming to best practices.

Of course user-installed apps should be controlled and allowed by IT policy.

Of course such a solution will be enabled for only a few specific users.

But in today's desktop reality, specific users sometimes have the right to do so, and a new solution should not forget that.

That may be a technical dream, but every time I've done a pre-sales meeting about virtualizing desktops, the question of user-installed apps was raised (because it was leading to 1:1 images), and when talking about it, customers told me that even if it's for a few people, they do need such a feature to be able to match all the existing desktop scenarios they have.


appdetective wrote re: A technical explanation of why the whole “layering” / shared image thing is so difficult
on Sun, Nov 29 2009 10:39 PM

I'll agree with the above posts that user-installed apps are a joke for the masses, although I get it as a niche use case. That said, @pierre, when people are faced with the responsibility to sign off on the risk of user-installed apps, making them personally liable for malware etc. introduced on the network without a compelling business reason, I usually find they back off. Of course there may be valid reasons, and above there are some good process suggestions to deal with that.

Also, @Kimmo mentions Type 1 to solve for this. I don't buy that this is going to happen soon enough to matter. There are just so many factors to consider; I'll post more soon.

Dan Shappir wrote re: A technical explanation of why the whole “layering” / shared image thing is so difficult
on Mon, Nov 30 2009 2:22 AM


You may call it hilarious, but I do call what Windows (and Linux and Mac OS X) does for processes "virtualization". In Windows, processes are isolated from each other, each seeing its own independent address space and dedicated CPU (or CPUs). This is very much like what hypervisors do for VMs, and this similarity is the rationale behind KVM. (Obviously hypervisors and OSs also do a lot more.) The fact that this type of virtualization has existed since the '70s doesn't make it any less so. In fact, virtualization is older than that: as Gabe writes in his latest post, virtualization has been around since IBM's CP/CMS in the '60s.

(BTW, Windows 3.1 is a bad example: it ran all processes in a single address space and lacked preemptive multitasking. This made processes very much aware of the fact that they were sharing resources, hence not virtualized.)
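The isolation Dan describes can be demonstrated with a tiny sketch (in Python rather than at the Windows API level, purely as an illustration): a child process overwrites a module-level variable, but the parent's copy is untouched, because each process gets its own private address space. All names here are invented for the example:

```python
# Illustration of per-process address-space isolation: a write in the
# child process never reaches the parent's copy of the same variable.
import multiprocessing

counter = 0  # each process has its own copy in its own address space

def bump(q):
    global counter
    counter = 999      # visible only inside the child process
    q.put(counter)     # report the child's view back to the parent

def run_demo():
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=bump, args=(q,))
    p.start()
    child_value = q.get()
    p.join()
    # The parent's counter is still 0: the child's write was confined
    # to the child's address space, much like a VM's writes are
    # confined to its virtual disk and memory.
    return child_value, counter

if __name__ == "__main__":
    print(run_demo())  # (999, 0)
```

The same pointer value means different physical memory in each process, which is exactly the "each process sees its own machine" property that hypervisors later generalized to whole operating systems.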

I definitely see your mesh concept as an extension of services provided by the OS, if only because you describe it as operating at the application (process) level. Maybe we will need to move to a parallel universe to get it from Microsoft, but I highly doubt anybody else could do it seamlessly enough for all Windows applications. (Brian Madden wrote a very interesting article about what Microsoft can and should be doing for virtualization inside the OS more than a year ago. Check it out: www.brianmadden.com/.../can-microsoft-quot-change-the-game-quot-with-terminal-services-over-the-next-five-years.aspx ).

In the interim companies such as yours and mine will continue to provide value in this space by filling in the gaps in the virtualization puzzle.

Shawn Bass wrote re: A technical explanation of why the whole “layering” / shared image thing is so difficult
on Wed, Dec 2 2009 7:14 PM

@Harry - If people were smart about WiFi deployments, they'd be sticking the WiFi connections outside of the network and requiring both VPN'ing and endpoint analytics.  But maybe that's just because I spend a lot of time in Financial Services.


Harry Labana wrote re: A technical explanation of why the whole “layering” / shared image thing is so difficult
on Thu, Dec 3 2009 7:53 AM

@shawn. Agree, treating everything as hostile is a good way to deal with WiFi in highly regulated industries. I see no reason not to do the same elsewhere to enable more BYOC-type devices, irrespective of Type 1 hypervisors.
