An introduction to VMware View 3, Part 2 of 3 – Linked Clones

A look at the new features of VMware View 3, as well as best practices learned while doing a deployment for a customer. Part 2 (this article) looks at Linked Clones

In this three-part article series, Roland van der Kruk, a freelance consultant in The Netherlands, takes a look at the new features of VMware View 3, as well as best practices learned while doing a deployment for a customer. Part 1 provides information and insight on new features, Part 2 (this article) looks at Linked Clones, and Part 3 (released later this week) will look at special considerations and best practices for deployment.

Linked Clones

The big question for most people is probably: 'What are linked clones and how do they work?' Some of you may expect functionality similar to Citrix Provisioning Server, where significant disk-space optimization can be realized, and indeed VMware does something similar, but with very different technology. Let's see how VMware does it.

The essence of linked clones is thin provisioning: saving on expensive storage costs. Thin provisioning with View 3.0 is realized using a "master virtual machine", which is just a regular virtual machine that you create and then take a snapshot of. That virtual machine will be used as the basis for rapid and thin OS deployment. Please note that I mentioned a virtual machine "snapshot", not a virtual machine "template".
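Conceptually, a linked clone reads unchanged disk blocks from a shared base disk and writes its changes to a small private delta: a classic copy-on-write scheme. A toy model (my own sketch, not VMware's implementation) makes the idea concrete:

```python
# Toy model of a linked clone: unchanged blocks are read from the shared
# replica, changed blocks from a small per-clone delta -- copy-on-write.
class LinkedClone:
    def __init__(self, replica):
        self.replica = replica   # shared, read-only base disk
        self.delta = {}          # only this clone's changed blocks

    def read(self, block):
        return self.delta.get(block, self.replica[block])

    def write(self, block, data):
        self.delta[block] = data  # the replica is never modified

replica = {0: "os", 1: "apps"}
clone = LinkedClone(replica)
clone.write(1, "patched")
print(clone.read(0), clone.read(1))  # os patched
print(replica[1])                    # apps -- the base image is untouched
```

Many clones can share one replica this way, which is where the storage saving comes from.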

You prepare a virtual machine with the desktop OS of your choice (server operating systems are not supported) exactly the way that you like your master image to be. When all components and settings are properly set, you install the VMware View Agent (which contains the components mentioned in the previous article), shut down the virtual machine and take a (first) snapshot. I might add that the master virtual machine has to be domain-joined, though I could not find a documented reason for this requirement. After that, desktop deployment can start.

In the View Administrator console, choose the 'Desktops and Pools' tab, as this is where desktops and desktop pools can be added and/or edited. In the right pane of the 'Desktops and Pools' tab, five other tabs appear, the leftmost being the 'Desktops and Pools' view. Here you can choose 'Add' to start a wizard that guides you through the steps necessary for adding a desktop or a desktop pool. The following choices are presented:

  • Individual Desktop: starts a wizard to provide users with access to a single virtual or physical computer on which the View Agent is installed.
  • Automated Desktop Pool: starts a wizard to automatically create one or more desktops in a pool. The explanatory text for this option states that desktops are based on "virtual machine templates," which is wrong: you need a normal virtual machine from which you take a snapshot (as mentioned above).
  • Manual Desktop Pool: starts a wizard to provide access to an existing set of virtual or physical PCs that have the View Agent installed.
  • Microsoft Terminal Services Desktop Pool: starts a wizard to publish Terminal Server desktops to View Portal users.

I won't go into detail on every option mentioned, but will continue with the most eye-catching one: the Automated Desktop Pool. An automated desktop pool can consist of any number of persistent or non-persistent desktops.

After a persistent desktop pool is created and a user is assigned a certain desktop, the mapping between user and assigned desktop is written to the ADAM database (see Part 1 for more information on how ADAM is used). Every time the user logs on to the View Portal, the same desktop will be available, and the state of the virtual machine is exactly the way he or she left it at the previous logoff. This option is similar to the 'permanent disk' in Citrix Provisioning Server. A persistent desktop pool can contain any number of desktops, and once created, the pool can also be edited to increase the number of desktops in it. In the wizard, as depicted below, the initial number of desktops to be created is set to 5 and the total number of desktops in the pool to 100; as soon as the number of available desktops falls below 5, View creates more machines in the pool to meet the configured criteria, until the maximum number of desktops in the pool is reached.

Picture 6 – Advanced configuration of the number of desktops in a pool in the Deployment Wizard
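The replenishment policy described above can be sketched as a small function (my own illustration of the policy, not View's actual code; the function and parameter names are hypothetical):

```python
def desktops_to_create(total, available, headroom=5, max_pool=100):
    """How many new linked clones to spin up so that 'headroom' desktops
    stay available for logon, without exceeding the pool maximum."""
    if available >= headroom:
        return 0
    return min(headroom - available, max_pool - total)

print(desktops_to_create(total=5, available=2))    # 3
print(desktops_to_create(total=98, available=0))   # 2 -- capped at pool max
print(desktops_to_create(total=100, available=0))  # 0 -- the pool is full
```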

Both persistent and non-persistent desktops can be created using linked clone technology, which in fact means that deployed desktops can be altered by assigning them to a different snapshot or even to an entirely different virtual machine. The main difference between a persistent and a non-persistent desktop is that persistent desktops can contain a second virtual disk to which the 'Documents and Settings' folder is moved. User data is effectively put on another disk, so if an administrator decides to assign a different snapshot or image to a user, all user data in the 'Documents and Settings' folder will still be available. Of course, this can also be accomplished by modifying each user's User Shell Folders with an Active Directory GPO or a script that alters all default folders, but with the View 3.0 option, user data will be locally available, presumably resulting in better performance.

I wonder if this is really a useful option, as the user data can only be reached by going to the machine itself and opening the folder, whereas with folder redirection all user data can be redirected to a central network share, substantially simplifying central administration, in my opinion. If the central network share is located on fast NAS heads, performance might still decrease a little, but managing user data that is only locally available on virtual machines is not a very attractive option in larger environments.

Picture 7 – A separate disk for personal data, available in a linked clone.

What actually happens when the wizard finishes is that a copy of the master virtual machine is made, together with a copy of the snapshot. The copy, however, is not a complete copy of the master virtual machine. I deployed a master image with a system drive of 20 GB with a snapshot, which resulted in a copy of 6 GB for the system drive and a few KB for the snapshot.
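With these figures (the 20 GB master and 6 GB replica from my test; the roughly 100 MB initial delta per clone is reported in Table 2 below), the storage saving is easy to quantify. A back-of-the-envelope sketch, with function names of my own making:

```python
def full_clone_storage_gb(n, master_gb=20):
    """Each desktop deployed from a template is a full copy of the master."""
    return n * master_gb

def linked_clone_storage_gb(n, replica_gb=6, delta_gb=0.1):
    """One shared replica plus a small (initial) delta disk per desktop."""
    return replica_gb + n * delta_gb

# For the 100-desktop pool from the wizard example:
print(full_clone_storage_gb(100))    # 2000
print(linked_clone_storage_gb(100))  # 16.0 -- initially; deltas grow over time
```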

Picture 8 - User data drive of a persistent desktop for a specific user.

The folders and disks are automatically created, and the folders and files contain a GUID that associates them with the master desktop and the user.

To (hopefully) clarify the components, the following Virtual Center folder arrangement is depicted:


Picture 9 - Virtual Center containing all folders necessary for a View 3.0 deployment.

The above picture shows that:

  • VMware virtual machine templates can be used to deploy master images
  • Master images with at least one snapshot are best placed in a separate folder, to make sure you don't mix things up
  • Linked clones are best placed in a separate folder, where subfolders can be created for non-persistent and persistent linked clones
  • You can (and probably will) have other virtual PCs or virtual servers in your Virtual Center
  • At the bottom of Picture 9, the automatically generated folders are shown, all created by View 3.0 as a result of the desktop pool deployment wizard in the View Administrator console. A replica folder and a source folder are created for each desktop pool that uses linked clone technology. All automatically created folders are fully managed by View 3.0 and should only be administered through the View Administrator console.

Linked Clone disk characteristics

So, how does View 3.0 handle disks and disk space for linked clones?

In my tests I created a Windows XP SP2 image with a system drive of 20 GB. In the Automated Desktop Pool wizard, I chose to configure 5 linked clones, where initially 1 linked clone was created immediately after finishing the wizard, and where 1 desktop would always be available for new user logons until the maximum number of desktops in the pool was reached. I also chose to create a separate user data disk of 2 GB on which the 'Documents and Settings' folder would be placed.

Picture 10 - Step in the deployment wizard where the OS Data and User Data stores can be selected.

After finishing the wizard, a replica folder and a source folder are created, which are used as templates from which View 3.0 creates the clones.

Picture 11 - Replica folder of an automated, persistent desktop pool, derived from a 20 GB system disk

Picture 12 - Source folder of an automated, persistent desktop pool with a configured user data disk of 2 GB

Picture 13 – System disk of a linked clone, available to an end user using a system disk of 20 GB

Picture 14 – User data disks, mapped as D-drive in the users’ virtual desktop, for two users with a maximum of 2 GB per user

The table below lists all components needed to deploy at least one desktop pool based on one desktop operating system. The 'linked clone system disk' will initially be around 100 MB and can grow up to the original size of the master VM. A Desktop Refresh (discussed below) can be scheduled or executed manually to return the linked clone system disks to their original size.

  • System disk of Desktop OS template, used to create 'Master Image Virtual Machines': 20 GB
  • System disk of a 'Master Image Virtual Machine', containing a desktop OS including snapshot(s): 20 GB
  • Replica folder and source folder derived from the 'Master Image', created for a desktop pool with an unlimited number of linked clones: 6 GB
  • Linked clone system disk per OS: initially around 100 MB, growing over time (up to the configured master disk size)
  • Linked clone user data disk per user: 2,048 MB (configurable)

Table 2 – Linked Clone disk size example

Desktop recompose, refresh, rebalance

At all times, desktops deployed using linked clone technology can be altered.

A Desktop Recompose means that a deployed desktop's state is altered: it can be assigned a different snapshot or possibly even an entirely different master virtual machine.

A Desktop Refresh means that a linked clone desktop is brought back to the state of initial roll out. This actually means that the system disk is reverted to the moment it was deployed, including its size and contents. If a separate user disk was used in the deployment wizard, all user data on that disk remains intact.

A Desktop Rebalance means balancing virtual machine disks across the available data stores (LUNs). If a VMware ESX data store reaches its capacity, a rebalance can take care of automatic migration of deployed virtual machine disks to different ESX data stores.

Picture 15 - View on a persistent desktop in the ‘Persistent’ desktop pool, which can be removed, reset (OS reset), edited (recomposed or refreshed) or rebalanced
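The three operations can be summarized in a small sketch (my own simplified data model, not View's internals): each clone holds a reference to its replica, an OS delta and a user data disk.

```python
def recompose(clone, new_replica):
    """Point the clone at a different snapshot or master; the OS delta
    is rebuilt from scratch, the user data disk survives."""
    clone["replica"] = new_replica
    clone["os_delta"] = {}

def refresh(clone):
    """Revert the OS delta to its deploy-time (empty) state."""
    clone["os_delta"] = {}

def rebalance(clones, datastores):
    """Greedy sketch: place each clone's disks on the least-used datastore."""
    for c in clones:
        target = min(datastores, key=lambda ds: ds["used_gb"])
        c["datastore"] = target["name"]
        target["used_gb"] += c["size_gb"]

clone = {"replica": "XP-SP2-snap1", "os_delta": {"hklm": "tweak"},
         "user_disk": {"D:/docs": "mine"}, "size_gb": 2, "datastore": None}
refresh(clone)
print(clone["os_delta"], clone["user_disk"])  # {} {'D:/docs': 'mine'}

stores = [{"name": "LUN1", "used_gb": 90}, {"name": "LUN2", "used_gb": 10}]
rebalance([clone], stores)
print(clone["datastore"])  # LUN2
```

Note that in all three operations only the user data disk is left alone, which is the crux of the persistence discussion below.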

Linked clones, persistent desktops and OS maintenance; 1 + 1 + 1 = 1?

Another thought came to mind worth mentioning. In my test I created a desktop pool combining linked clones and persistent desktops. Of course I do have to perform maintenance on these desktops, as Microsoft hotfixes come out the second Tuesday of the month, and who knows what else needs to be updated. Initially I thought I could use linked clone technology for this: update my master virtual machine with hotfixes, take a new snapshot and link all deployed desktops to the new snapshot. If all is well this will work; however, what happens to my 'persistent desktops' if I do that? In fact, all users who have made changes to the OS (I chose to allow certain users to install their own applications) lose their OS customizations and their applications.

After linking desktops to a new snapshot, it appears that the only thing that is really persistent about the 'persistent desktop' is what is on the user data disk, which contains the 'Documents and Settings' folder and maybe some data, but not the applications the user installed. Ergo, if I want to maintain my OS with hotfixes using linked clone technology or 'Desktop Recompose', while at the same time keeping users' customizations to the OS, I will have to use a tool like SMS/SCCM, Radia or whatever your standard corporate application distribution method is. My question then is: what does 'Persistent Desktop' really mean?

I performed one more test to see how intelligent the linked clone snapshotting technology really is when it comes to managing disk space. I started off with a Persistent Desktop:

- System disk: 230 MB

After I logged on as an administrative user, I copied an installation of Eclipse, sized 354 MB, to the system disk of my virtual machine. After the file copy, my system disk looked like this:

- System disk: 607 MB

I decided to delete the Eclipse folder. After deletion, the system disk looked like this:

- System disk: 607 MB

Conclusion: The Eclipse folder doesn’t seem to be deleted and the data is still available in the snapshot.

I decided to copy the exact same Eclipse folder again to the same destination on the system disk, which then looked like this (I also tested another destination; c:\temp, which had the same result):

- System disk: 623 MB

Apparently, some check was done, as the linked clone disk reused the data that was marked as 'deleted'.

After I removed Eclipse again, the system disk looked like this:

- System disk: 640 MB

Now that Eclipse has been deleted from the disk and the system disk still has a size of 640 MB, the data is apparently still there; maybe the snapshot technology is intelligent enough to mark that space as deleted so it can be filled with other data. I copied some other data to the system disk, smaller than the amount of data that could be 'marked for deletion'. After copying a 219 MB folder, the disk looked like this:

- System disk: 852 MB
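A toy model (my own, not the actual sparse VMDK format) reproduces this behaviour: a guest block, once materialized in the delta file, stays there forever. Deleting a file in the guest only rewrites a little filesystem metadata, while re-copying the same file reuses blocks that are already materialized:

```python
class DeltaDisk:
    """Grow-only delta file: blocks are added on first write, never removed."""
    def __init__(self):
        self.blocks = set()     # guest blocks materialized in the delta

    def write(self, block_range):
        self.blocks |= set(block_range)  # reused if present, appended if new

    def size_mb(self):
        return len(self.blocks)          # assume 1 MB blocks for simplicity

disk = DeltaDisk()
disk.write(range(0, 230))     # OS churn after first logon
disk.write(range(230, 584))   # copy the 354 MB Eclipse folder
print(disk.size_mb())         # 584
disk.write(range(584, 586))   # 'delete' Eclipse: only metadata blocks change
print(disk.size_mb())         # 586 -- the data blocks are never reclaimed
disk.write(range(230, 584))   # copy Eclipse again: existing blocks are reused
print(disk.size_mb())         # 586 -- barely grows, matching the test above
```

The block counts are illustrative, but the pattern matches the measurements: deletes never shrink the disk, and re-copies grow it only slightly.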


From these tests I draw the following conclusions:

  1. Providing linked clones to users that have full control over the system, resulting in user-initiated changes to the OS like copying and removing data, will end up with a system disk that eventually takes more space than if the OS had been provided without linked clone technology.
  2. If a View Administrator decides to refresh the OS because he added hotfixes or extra software, all user modifications to the OS are deleted. In fact, the system disk is simply deleted and a new linked clone is generated off the new state of the 'master image'.
  3. What 'persistent desktop' actually means is that the state of a disk provided by a View Administrator is 'persistent'. A desktop can be kept in a known state by recomposing or (scheduled) refreshing the deployed linked clones, resulting in exactly the state that a View Administrator expects. From the perspective of end users of linked clone desktops, no persistence of the system disk can be guaranteed, because all user actions will be undone by a 'Desktop Refresh' or 'Desktop Recompose'.
  4. As soon as user modifications to the system disk need to be persistent, linked clone technology should not be used. Instead, 1-on-1 desktops need to be provided, with deployment tools like SCCM or Altiris available to maintain the system.

Roland van der Kruk is a freelance consultant in The Netherlands. He currently works with server-based computing and desktop delivery solutions. Roland can be contacted by email or through his website.

Join the conversation



Well Roland... I read and read. And my face went...

Unless I'm missing something, this technology is a big letdown... Prove me wrong, people :-)

I guess more value could be achieved via single instance SAN software...



Think I am sticking with Xen Desktop with Provisioning Server.


@Jeff - I don't understand, how is this inferior to Provisioning Server?

Also note that Roland's point #1 is incorrect - the thinly-provisioned disk will never grow to a size larger than what was allocated - a thinly provisioned 20GB disk will only ever take up 20GB. I posted a response to this article on my blog.



Your statements are correct and more specific than mine. All user data can be kept on a separate disk; what is not persistent for users are OS/system disk changes. %userprofile% can be placed on the user disk, although I wonder what the advantage would be compared to storing it on the network, where it is centrally manageable; maybe performance? User data on a local virtual disk is not preferred, in my opinion.

About the size of the snapshot/linked clone: prompted by your input, I did test the maximum size of a linked clone system disk. It makes sense that its size is limited to the configured size of the virtual system/OS disk, and testing showed me that a system disk of 20 GB will in fact be full at an actual VMDK disk size of around 15 GB.

Thanks for clarifying that.


What about creating automated desktops, but non-persistent? You would then use GPO, folder redirection, and profile management to keep all data outside the VM, and all user settings would either be managed by a 3rd party (AppSense, RTO, etc.) or by generic, mandatory profiles. This would be a good solution for task workers, and the linked clones would keep disk space down. Previously with VMware's product, even this 'generic' desktop still ate up lots of space because every desktop was deployed fully from a template. You can also keep apps off the desktop images and deliver them as ThinApp packages located on a file server. The only recomposing you would need to do is for OS patches.



Indeed, for OS patching and other security updates (anti-virus) you will need a solution to deploy them, using standard (MSI/SCCM/Altiris) tools. Non-persistent automated desktops are entirely possible and would work fine with virtual applications. You can configure View to delete a VM at user logoff and create a new instance, so every user gets a fresh OS. That certainly is a major step that people might not have expected to be included with View 3.0. Only in certain situations where users need modify permissions to the OS are linked clones not really an option, but who knows where this whole thing goes with VDI. I still haven't seen the cost side of it, but maybe it will be interesting enough to make companies decide to go for central desktops instead of fat clients.

Of course, the user management is not at the level that Citrix XenApp offers, so there is a gap too, but hopefully that will be filled as well, by VMware or a 3rd-party product.