The Devil is dead! Slaying the primary barrier to large scale VDI.

Nick Rintalan and I are just back from another successful BriForum conference where we delivered several sessions.  One of the recurring themes we have seen at the past several BriForum events, as well as at other industry events like Synergy, is the rise of multiple vendors trying to solve the two major challenges associated with VDI:

  • Eliminating the need for Persistent Desktops (solving the application challenge)
  • Reducing the IOPS and storage resources required for VDI

One of our sessions at BriForum was entitled “VDI High Availability and Why Persistent Desktops are the Devil”.  If you attended BriForum 2014 but did not catch our session, I encourage you to watch the video.  If you did not attend BriForum, then you will have to wait until the videos get posted to YouTube (probably early next year).  However, I will try to recap some key points of the session as they relate to solving these two challenges.

I am not going to dive too deeply into why these challenges make deploying VDI difficult at large scale, as this has been discussed many times in great detail.  There have been quite a few passionate discussions about whether one should do persistent VDI, non-persistent VDI, or even just stick with XenApp/RDS desktops.  Here are some links to some of those discussions.

The Devil in the Datacenter

As it relates to VDI, the persistent desktop has become the devil in the data center.  Here is a quick summary as to why.

Persistent vs Non-Persistent VDI  

In an ideal world we should be able to give users a clean gold image of Windows 7 and dynamically layer or deliver their applications and personalization settings on demand.  This non-persistent approach significantly reduces cost and complexity and facilitates high availability. If designed properly, a user could connect to Win7 VM #1 today and then connect to Win7 VM #99,999 tomorrow and it should not matter.  Users get the same personalized desktop experience with their applications, their profile settings, and their data no matter which desktop they connect to.  

The challenge is that applications have proved to be difficult.  Coming up with one (or maybe a few) gold images with the correct base applications, as well as using App-V, XenApp/RDS, ThinApp, etc. to deliver the non-base applications, has proved harder than expected.  As a result, many companies have not successfully made the transition to non-persistent VDI and instead decide to implement assigned persistent VDI.  

Unfortunately, this approach has major drawbacks, turning it into the devil in the data center.

  • Storage Meltdown! Persistent VDI VMs are storage hogs in terms of space as well as IOPS consumed. Windows desktop workloads in their traditional form were not meant to be run in the data center.  This means we need expensive SSD storage or third-party software solutions to deal with the problem. We have SSD, thin provisioning, data deduplication, write coalescing, etc. to try to reduce the storage impact of persistent VDI.  Unfortunately, all of these new vendors and their products and technologies are simply treating symptoms caused by persistent VDI and not the real problem, which is persistent VDI itself!
  • CPU Hog!  In addition to being a storage hog, persistent VDI is also a CPU hog. When Windows updates are pushed out, they are pushed out individually to each VM.  That means when you need to install a Windows or Office service pack and you have 10,000 VMs, you will run that install 10,000 times.  That is not only a massive storage hit, but also a CPU hit.  Worse yet, can you imagine what happens when you need to roll a hotfix back?  You get hit again.  Additionally, most companies insist on anti-virus conducting full scheduled scans of all disk drives on a persistent Windows VM.  This means extra CPU overhead for this task as well.  With non-persistent VDI, you update the gold image once and everyone gets the new version.  Also, if you have a read-only gold image, you can run a scheduled AV scan once per gold image instead of once per VM, and simply configure your non-persistent VMs to scan only on actual read/write activity.
  • High Availability. Persistent VDI cripples your ability to provide a highly available solution.  With persistent VDI, you are locking a user to a single VM, on a single LUN, on a single SAN, attached to a single VLAN, hosted on a single hypervisor cluster, on a single hypervisor platform, in a single data center, hosted in a single VDI brokering system.  If anything goes wrong with any component in the stack, the user cannot access their VM.  With non-persistent VDI, you can have multiple independent VDI stacks across multiple hypervisor platforms, and potentially across multiple data centers, and you can dynamically connect the user to any VM and they will still have their personalized desktop.  This is the only way to achieve true high availability and scalability when dealing with tens of thousands of VDI VMs.
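To put the patch-storm point above in perspective, here is a back-of-the-envelope calculation. The VM count comes from the 10,000-VM example above; the per-install write size is an illustrative assumption, not a measured figure:

```python
# Rough cost of installing one service pack, persistent vs. non-persistent VDI.
# patch_write_gb is an illustrative assumption, not a measured figure.

vm_count = 10_000        # persistent VMs in the estate (from the example above)
patch_write_gb = 2       # disk writes generated by one service pack install

persistent_writes_gb = vm_count * patch_write_gb   # every VM runs the install
non_persistent_writes_gb = 1 * patch_write_gb      # only the gold image changes

print(f"Persistent:     {persistent_writes_gb:,} GB written across the storage")
print(f"Non-persistent: {non_persistent_writes_gb:,} GB written, once")
```

The same multiplier applies to the CPU cycles burned by each install, and then again whenever a hotfix has to be rolled back.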

I could go on and on with myriad other challenges associated with large-scale persistent VDI, but I think you already get the point.

Exorcising the Demons

I am happy to say that the technologies exist today to finally get the devil out of the data center! Here’s how you can do it!

The Death of IOPS

There have been numerous improvements over the last year or two in getting control over the IOPS generated by VDI.  Some of these solutions focus on using fast SSD storage so that the storage can actually handle the IOPS.  There are many hardware solutions on the market today that address this, such as Whiptail, Pure Storage, XtremIO, Tintri, etc.  You can take your pick, as they all do very similar things and offer great performance.  Traditionally, these have been expensive solutions; however, the price has begun to come down and these hardware solutions will only continue to get better and cheaper.

There are also software-based solutions that can solve the IOPS challenge today.  With the ever-decreasing cost of RAM and the increasing RAM density in servers, it makes sense to use existing RAM within each hypervisor host to solve the IOPS challenge. Atlantis ILIO has a fantastic solution that has been on the market for several years now, addressing the IOPS challenge with a pure software solution using resources within each host.  Additionally, with the release of Citrix Provisioning Server 7.1 (released with XenDesktop 7.1 and later), we have a new feature called RAM Cache with Disk Overflow.  This feature basically solves all of the IOPS performance issues associated with non-persistent VDI with just a small amount of RAM and a little bit of in-guest write coalescing. I encourage you to check out the following links for details on both the ILIO and PVS software-based solutions:
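The RAM-cache-with-disk-overflow idea can be sketched in a few lines: hold dirty blocks in RAM and spill to an overflow file only when the cache fills, so repeated writes to hot blocks coalesce in memory and never hit the SAN. This is a toy model of the general technique under my own simplifying assumptions, not the actual PVS implementation:

```python
class RamCacheWithOverflow:
    """Toy write cache: dirty blocks live in RAM; the oldest blocks
    spill to an overflow store when the RAM budget is exhausted."""

    def __init__(self, ram_blocks):
        self.ram_blocks = ram_blocks   # RAM cache capacity, in blocks
        self.ram = {}                  # block_id -> data (dirty, in RAM)
        self.overflow = {}             # block_id -> data (spilled to disk)

    def write(self, block_id, data):
        # Overwrites of an already-cached block coalesce in RAM: zero disk I/O.
        if block_id not in self.ram and len(self.ram) >= self.ram_blocks:
            # Evict the oldest RAM-resident block to the overflow disk.
            oldest = next(iter(self.ram))
            self.overflow[oldest] = self.ram.pop(oldest)
        self.ram[block_id] = data

    def read(self, block_id):
        # RAM first, then overflow; None means "fall through to the gold image".
        if block_id in self.ram:
            return self.ram[block_id]
        return self.overflow.get(block_id)
```

Because a desktop rewrites the same hot blocks (pagefile, temp files, logs) over and over, most writes terminate in RAM, which is where the bulk of the IOPS reduction comes from.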

So with the purchase of flash/SSD hardware, or with a software solution that leverages existing hypervisor resources, we can completely solve the IOPS challenge today in a cost-effective manner.  However, in order to make the most of some of these IOPS-killing features, and to get maximum scalability and availability when we start talking about tens of thousands of VDI VMs, we need to commit to using non-persistent VDI.  So let me show you how you can make that a reality!

Replacing Persistent VDI with Non-Persistent VDI

After 16+ years of helping hundreds of customers across every industry around the world deploy Citrix End User Computing (EUC) systems, I can honestly say that the requirements of 80% of all users could easily be met with either a non-persistent XenApp/RDS session or a non-persistent Windows desktop.  All it takes to make this a reality is proper use of the following three strategies.

1. Know your users and their applications.  

This is the most important and most overlooked item. It seems every customer gets stuck in “analysis paralysis” and simply does not know how to remediate their applications.  They end up doing a basic audit of what is installed and come up with some crazy number of 1000+ unique applications.  They see this massive list and simply have no idea how to start tackling it.  I have helped many customers through this process, and invariably what we find is that 80% of their users really only use about 20% (or less) of the total number of applications.   Making VDI and thin client computing a reality for 80% of your users does not require remediation of all of the applications. You need to spend the time up front actually meeting, speaking and physically working with your users to figure out what is truly needed.  You will often find that simply remediating an additional 10 – 30 applications will get you to the point where 80% of your users can be fully serviced by a non-persistent desktop.
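The 80/20 analysis described above is straightforward to run once you have per-user application usage data from your audit. A minimal sketch, with made-up sample data standing in for real inventory results:

```python
from collections import Counter

# user -> set of applications they actually use (made-up sample data)
usage = {
    "alice": {"Office", "Chrome"},
    "bob":   {"Office", "Chrome", "Visio"},
    "carol": {"Office", "Chrome"},
    "dave":  {"Office", "AutoCAD", "MATLAB"},  # the "hard" power user
}

# Rank applications by how many users depend on them.
app_counts = Counter(app for apps in usage.values() for app in apps)

# If we remediate only the top N apps, which users are FULLY covered?
top_apps = {app for app, _ in app_counts.most_common(2)}
covered = [user for user, apps in usage.items() if apps <= top_apps]

print(f"Top apps: {sorted(top_apps)}")
print(f"Fully served users: {len(covered)}/{len(usage)}")
```

Run against a real inventory, this is how you discover that a surprisingly small remediation list fully services the bulk of the user base, while a handful of power users account for most of the long tail.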

You should start with the easy users like secretaries, warehouse workers, office management, support staff, etc.  However, every customer wants to start VDI by picking their most difficult set of users.  Instead of starting with basic office/knowledge workers or task workers, they pick their engineers, IT developers, financial traders, traveling salespeople, executives, etc.  These high-end, high-profile users often account for less than 20% of the actual user base, but they are the ones that are most demanding and have the broadest application set.  Don’t worry about the 20%.  They can stay on persistent VDI or stay on fat laptops/desktops.  If you can get 80% of your general workforce on non-persistent VDI or an RDS desktop, that is a major win and worth moving forward!  

2. Use existing tools and methods to deal with applications.

We have many tools and methods for delivering applications dynamically into non-persistent desktops.  Some of these tools include App-V, ThinApp, XenApp/RDS, etc.  These are proven tools that work great in a non-persistent environment.  These tools are not perfect and do not address every requirement; however, remember that you do not need to remediate all applications.  You are trying to remediate just enough applications to get the requirements of 80% of your users met.  You will often find that with the right combination of placing applications into the base image, virtualizing apps with App-V/ThinApp and hosting certain applications on XenApp/RDS, you can meet the requirements for 80% of your users.

Additionally, it is perfectly acceptable to have more than one gold image.  Everyone wants to manage one gold image, but is it really that difficult to manage an extra two, three or four images?  If you can’t manage three or four gold images, then IT might not be the right career for you.  That said, I would caution against going crazy with creating multiple images. While it is OK to manage a few additional images, you don’t want to end up with 100 different gold images.

3. Implement a User/Admin Installed Application Technology.

If all else fails, there are other technologies on the market that let you dynamically attach a virtual disk to a non-persistent VM so that applications can be installed either by the user or by an administrator.  Citrix Personal vDisk (PVD) is an example of such a technology; however, it has a fatal flaw: it mounts the user’s vDisk at system boot instead of at user logon.  This means that with Citrix PVD, you must still assign the user to a specific VM because their personal vDisk is physically attached to only one VM.  For this reason, I cannot recommend using PVD.  

However, there are other solutions that will dynamically attach a user’s personal disk at logon instead of at system boot.  This type of solution is much more flexible because it allows the user to log on to any non-persistent VM and still get their personal applications.  One example of such a technology that I would recommend is CloudVolumes 2.0.  With CloudVolumes 2.0, you give the user a VHD file that lives on a CIFS share.  When the user logs on, it dynamically mounts the VHD file, instantly merging/blending the C: drive of the non-persistent VM with that of the VHD file on the CIFS share.  The user thus gets everything available within the base gold image as well as any one-off or other applications that were installed into the user’s personal VHD file.  VHD mounting from CIFS (especially with SMB 2.1+) is a fast, efficient method that fully leverages all the native guest and file services caching mechanisms.  It works beautifully and scales well!  Citrix had a similar VHD mounting technology with our Citrix Application Streaming feature of XenApp 6.5, and it would have been easy for us to implement user-installed apps with our Application Streaming technology. Unfortunately, we killed Application Streaming, so now you need a third-party solution like CloudVolumes.  I encourage you to check them out.  Here is a link to a great webinar about CloudVolumes.
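Conceptually, the merge/blend behavior described above is a layered file lookup: the user's layer wins on conflict, and everything else falls through to the gold image. Here is a toy dict-based sketch of that idea; it is purely illustrative of the layering concept, not CloudVolumes' actual filter-driver mechanics, and the file paths are made up:

```python
# Shared, read-only gold image: identical for every clone.
gold_image = {
    r"C:\Windows\notepad.exe": "base OS file",
    r"C:\Program Files\Office\winword.exe": "base application",
}

# Contents of the user's personal VHD on the CIFS share.
user_layer = {
    r"C:\Program Files\Notepad++\notepad++.exe": "user-installed app",
}

def resolve(path):
    """Layered lookup: the user layer wins; otherwise fall through
    to the gold image. None means the file does not exist anywhere."""
    if path in user_layer:
        return user_layer[path]
    return gold_image.get(path)
```

The key property is that `user_layer` is independent of any particular clone of `gold_image`, which is exactly what frees the user from being pinned to one VM: mount the same layer onto any clone and `resolve` produces the same merged view.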

In addition to CloudVolumes, there are other third-party approaches to handling the application integration challenge.  Another new company that comes to mind is FSLogix.  Instead of giving each user their own VHD, they have a unique solution that allows all applications to be placed simultaneously into a single gold image.  The applications that are visible from within the gold image are controlled based upon user/group permissions.  I encourage you to check them out as well.

If the above strategies are leveraged, most organizations will be able to address 99% of the barriers that forced them to implement persistent VDI!  By combining the IOPS-killing features of new SSD hardware or of software such as PVS 7.1 and Atlantis ILIO with proven application remediation strategies and technology such as that of CloudVolumes or FSLogix, you can deliver fully functional, highly available, cost-effective non-persistent VDI for the majority of your users today!

The Future

I called out a few specific products that can solve the major barrier to VDI adoption today, but the reality is that this style of VDI delivery will become the standard solution for delivering VDI and will eventually be available across all vendor platforms. I fully expect (and as consumers you should demand) that the storage capabilities of PVS and ILIO become standard features of every major hypervisor. We can already see the hypervisor vendors moving in this direction; VMware with VSAN and Content Based Read Cache (CBRC), as well as Microsoft with CSV Caching and SMB 3.0 shares, are getting close, but still not quite there.  It will not be hard for them to solve this challenge and I think they will. Here is what they can (and hopefully will) do.


VMware

VMware simply needs to add two new features:

  • Write Caching.  As part of the VMware Tools or as a virtual hardware device managed by the hypervisor, VMware could easily cache and coalesce write operations for non-persistent VMs.  Can you picture a virtual write cache enabled RAID card with configurable memory amount that could be assigned to each VM?  That would be great!
  • Shared NFS Datastores.  What we need is a read-only NFS datastore that can be used to store gold images. This datastore should be able to be globally shared across multiple vCenter and cluster instances. This should not be that hard to do.  It would allow you to drop a VMDK file, or update a gold VMDK file, in an NFS volume that is shared by many separate vCenters.  Since it is read-only, all of the VMs cloned off this gold image in the shared NFS datastore would put their differential disks on local or shared storage that is unique per VMware cluster.   This would simplify the process of rolling out new images across large VDI environments that have multiple vCenters and clusters.
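The shared read-only gold datastore described above amounts to classic differencing-disk (copy-on-write) semantics: each VM's writes land in a private delta, while reads fall through to the one shared gold image. A toy block-level sketch under those assumptions:

```python
class DiffDiskVM:
    """Toy copy-on-write virtual disk over a shared, read-only gold image."""

    def __init__(self, gold_blocks):
        self.gold = gold_blocks   # shared and read-only: many VMs reference it
        self.delta = {}           # this VM's private writes (per-cluster storage)

    def write(self, block_id, data):
        self.delta[block_id] = data      # never touches the gold image

    def read(self, block_id):
        if block_id in self.delta:       # VM-local change wins
            return self.delta[block_id]
        return self.gold.get(block_id)   # shared gold image serves the rest

gold = {0: "bootloader", 1: "kernel"}            # one gold image...
vm_a, vm_b = DiffDiskVM(gold), DiffDiskVM(gold)  # ...many cheap clones
vm_a.write(1, "patched kernel")
# vm_a now sees its private change; vm_b still reads the pristine shared block.
```

Because the gold image is never written, it can safely be served from one shared volume to any number of clusters and vCenters, and rolling out a new image is just dropping in a new gold file.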

Combine these theoretical new features with CBRC and VSAN, and you have a killer non-persistent VDI system.

Microsoft Hyper-V

Microsoft has introduced two new features that are helping them move in the right direction as well.  They have Hyper-V caching of disk operations from Cluster Shared Volumes (CSV) and they have the ability to host VHDs on SMB 3.0 shares.  The CSV read caching is similar to VMware’s CBRC technology.  However, this is not currently enough.  What Microsoft needs is basically the same capabilities that VMware is currently lacking.

  • Write Caching.  Microsoft needs to implement a write caching and coalescing driver for their non-persistent VMs just like I described for VMware above.
  • Hyper-V SMB 3.0 Caching. If Microsoft would enable Hyper-V to cache reads from SMB 3.0 shares and would allow multiple SCVMM instances to read a gold image from the same SMB 3.0 share, they would have something similar to the shared NFS datastores I described above for VMware.


Citrix XenServer

XenServer tried to fix this issue years ago with a feature called IntelliCache.  Unfortunately, IntelliCache was a major dud because it required the placement of SSD drives in each server.  This hardware requirement, as well as the need to use Citrix Machine Creation Services (MCS), is why I never recommended or endorsed this feature.  The reason IntelliCache must rely on local SSDs is the limitation of a 32-bit Dom0; this major Achilles heel in XenServer is why memory cannot be used to solve the problem.  I believe there is light on the horizon for XenServer, as there are plans to finally revamp Dom0 and make it 64-bit.  Once that happens, XenServer will also be able to add the same native enhancements that I listed for both VMware and Hyper-V.

User/Admin Installed Apps

While new technologies from companies such as CloudVolumes and FSLogix do the best job today of providing the ability to dynamically deliver applications into a non-persistent VM, I fully expect other vendors to mature and offer this type of technology as well.  It will not be long before such capabilities ship as a core feature of Microsoft, Citrix, VMware or some other vendor technology.

In summary, I think the future of VDI is bright, and 2014 is the year that the technologies are finally mature enough to deliver large-scale non-persistent VDI to tens of thousands of users in a fast, efficient and cost-effective manner that addresses the majority of all user requirements.  

I would love to hear your thoughts and I wish you success as you look to leverage VDI as a tool for your business!


Join the conversation



Persistent desktops require less administrative overhead compared to non-persistent desktops.

Your point is right that persistent desktops hog a lot of storage.

But new developments in storage deduplication have reduced that effect. The best example is EMC XtremIO inline storage deduplication. In the case of VDI, the inline deduplication ratio is very high. In fact, EMC is running a 1 million USD prize for the same.

With flash storage and inline deduplication, I think persistent desktops make more sense.


I don't understand.  Have you ever heard of Unidesk?  They help with just about all of the issues you have raised in this article.  If you leverage Unidesk with a dedupe array, you can have thousands of "persistent" desktops running off a single SSD array such as the Cisco/WhipTail Acela Dedupe Appliance.


I'm with mattkr.  Have you heard of Unidesk?  We provide over 400 persistent desktops with one single gold image, yet do not have all of the storage overhead you say persistent desktops require.  I also don't believe you could have Tier 1 level helpdesk staff successfully using App-V, ThinApp, XenApp/RDS to layer applications like my staff does using the Unidesk tool.  I agree with you that the future of VDI is bright, but I think you need to acknowledge that there is a mature tool currently available to handle the challenges you outlined in your column.


I echo what camertens states regarding Unidesk.   Unidesk allows you to provide a full, persistent desktop to each of your users, without adding additional storage overhead for each one.   Each application layer you configure and the gold image, are shared from a single instance, so there's no duplication of it for each machine.  

It allows users to add their own applications, if you give them permission to.   Someone wants to use Notepad++ instead of Notepad?   No problem.

They can store their files on "C:\MyFiles", if they so choose.  I still use folder redirection for all I can, but there's nothing stopping them from storing on the C: drive.  

They can change their local settings without requiring you to deal with roaming profiles.   No more, "Oh, crap, the profile is corrupt, let me recreate it and then you can reconfigure everything" moments.  

It's a full desktop experience, but it's through VDI.  You should check it out.


I really love Unidesk's solution. To me, for the last few years it was the only viable option to get a truly 100% non-persistent VDI environment off the ground. I'm just in the midst of checking out CloudVolumes too. Competition in that space would of course be great for everyone.


Unidesk:  Persistent VDI Experience - Single Gold Image Management - 100% App Compatibility - Incredibly Small Storage Footprint

See how here:


I agree that if there is already a good desktop management strategy and toolset in place (that is a big if in many cases!) and you are rolling out only a few hundred or a few thousand desktops, then you can go the SSD route and it can work OK. However, whenever you try to scale that to tens of thousands of concurrent desktops, it really starts to fall down and costs start to climb. Also, you are still stuck with tying one user to one VM, and this makes O&M significantly harder. I have worked with customers who went the persistent desktop route, and going over 5K VMs becomes a nightmare for many of the reasons I listed in the article.  

I truly believe that layering technology is the way to go here. I think Unidesk has a cool layering technology; however (correct me if I am wrong), it still anchors the user to a specific VM. That is a major Achilles heel, and it sinks both Unidesk and Citrix PVD when you start trying to roll out tens of thousands of desktops. That is why I like the CloudVolumes layering technology. They give you the ability to attach a personalization layer to any non-persistent VM. When you deliver large-scale VDI to 50K or more users, you do not want to care if any particular cluster, hypervisor, LUN, VLAN, etc. is up or down. As long as the CIFS share hosting the personalization layer is available and any VM on any host is up, the user gets a desktop and gets their personalization.  




Wow. Unidesk love in the comments here.

Anyway, yes, TODAY we assign the personalization layer to a machine, thus a specific user is connecting to the same machine each time. And while dynamic assignment of the user personalization will happen, I think what is missed in this discussion is WHY we assign that layer (or any of our layers) to the machine, pre-boot, in the first place. There is a downside for sure (which is why we are working on the dynamic side for app and personalization layers), but the benefits outweigh the drawbacks at this stage of the VDI game (in my opinion).

ALL of our layers are assigned to the machine pre-boot today. The key here is that this allows us to handle services (even boot-time services), kernel devices/drivers, and even Windows changes that are pretty deep in the OS. Layering from boot was always about getting as close to 100% app compatibility as possible.

Of course the personalization layer is just one of the layers in this stack, and the pitfall of not doing it pre-boot, and of not having a deep knowledge of what is in the layers, is that you can never differentiate between apps (and even user layers) that require pre-boot changes vs. those that don't... That knowledge is what allows you to do both live/hot attachment of a layer AND to know when it must be attached pre-boot (and do that) to prevent app and OS failures.

Simply assuming that you can attach a personalization layer to a desktop while the desktop is running and all will work well (or, more specifically, an application layer with the same type of live post-boot attachment) could be a recipe for winding up with some of the same problems that existing app virt technology has had with drivers and services.  

Then of course, when we talk about scale, the idea of 50,000+ users (to use your number) attaching to CIFS shares to do actual IO from a VM (reads and writes... remember, the personalization layer in any model is where the heaviest write IOs go) raises some questions. I personally believe VMDKs/VHDs, regardless of what they contain (OS, app or user stuff), need to be on the type of filesystem the hypervisor natively runs VMs from. This, if for nothing else, is because of known performance characteristics and understood scaling of that model.  Going into a VMW environment with 50k+ users running VMFS or NFS and then introducing high write IO VMDKs on CIFS is not so simple to me.

Using a VSAN, Nexenta, Nutanix, like technology across local storage with SSD/disk mix gives the VMFS/NFS stores resiliency across nodes for failure. I like it.

Of course, while super-scale is the end-all be-all for some, the actual number of 50K+ seat environments as compared to, say, the number of 500-5000 seat VDI environments is important. There are a lot more of the latter than the former.

Also important is how all these 10K+ seat deployments are typically PODs of 2500-5000 VM environments. It isn't like there is some magic storage environment out there that can serve up 50,000+ connections from VMs to active VMDKs all on one system (assuming your vCenter environment didn't explode before you got there :-) ). Thus VSAN, Nexenta, Nutanix, et al fit just fine into this POD/block architecture.

Anyway, I ramble on sometimes and there were lots of topics in this thread. Just got off of a plane, so it's time for a Friday evening beverage...

Unidesk does apps, the OS and personalization. And I'd put our layering against ANYONE that does that.


Full disclosure: I work for CloudVolumes, but I did want to clarify a few things. We are focused on application lifecycle management across desktops, servers and multiple operating systems (Windows physical/virtual/VDI/RDSH, server apps, desktop apps, even Linux). We're not trying to be in the desktop management business, which has plenty of solutions like SCCM, Symantec etc. We work with those solutions and your existing infrastructure to help you solve what we believe to be the biggest pain point, the APPS! Apps are where I'm willing to bet much of your hard-earned budget is wasted due to inefficiency. You already have sunk cost in desktop management and ingrained processes that will take years to unwind.

When infrastructure improves due to industry innovation and customers are ready to adopt it, we will benefit from it. We offer that flexibility to our customers. From personal and customer experience, trying to replace everything you do and not leveraging what you already have adds complexity.  Most don't have the ability to absorb that much change all at once given so many conflicting priorities. Trying to replace the kitchen sink with one new way of doing something also locks customers into a single approach taking away their flexibility to take advantage of multiple approaches to fit their diverse use cases.

So we're totally focused on the apps and how to make life easier for people there. We don't care if you are persistent, non-persistent, physical, virtual etc. we work across many today. Ultimately it's about designing infrastructures that you can rapidly provision and maintain at low complexity to enable faster service delivery. We don't pretend to offer a solution to every element of that future, but what we do know is that large complex customers are using us today to deliver apps dynamically to improve how they deliver service side by side with what they do today using existing infrastructure in an instant. That's nothing to do with Desktop Management and replacing it with a kitchen sink approach.


Ron - Does Unidesk have plans to introduce layering for physical machines as well? Organisations typically don't want to use two management systems/products for physical and virtual.

You may argue that the tide is towards virtual now, or at least that is what we have been saying for the last few years (as the year of VDI). But the reality is, I hope you agree, that you are going to find some percentage of physical machines in any organisation.


Ron, I totally understand and agree that application compatibility is enhanced when you mount the personalization layer at boot instead of at logon. This is exactly why our Citrix PVD team decided on the mount-at-boot option as well. However, as I have been helping many customers attempt to implement these mount-on-boot solutions, what we have found is that there really are not very many apps that truly need this. It is much better to have desktops be truly cloud-like as opposed to locking the user into a single VM.

I have helped many customers implement VDI for 50K+ users, with several customers running well over 30K+ VDI instances concurrently from a single data center. In all of those deployments, the persistent or PVD desktop proved to be the Devil. Invariably, all of those really large VDI accounts kept coming back and asking for us to get them on non-persistent desktops. What these large enterprise customers really want is for the system providing the desktop to be a commodity cloud-like service. They do not want to worry about backing up VM datastores or making vCenter and particular clusters highly available. These are simply desktops. If they need 50K of them, then they want to simply spin up 50K VMs and call it a day. If a particular cluster or vCenter has issues, no big deal: you can simply bring it offline and fix it without worrying about restoring anything, because it serves only desktops and contains no user data or user apps. Customers do not have to care whether any particular piece of infrastructure is up or down, because as long as a user can get to any desktop, they are happy. That is how you make the build-out of VDI scalable and cost-effective from an O&M standpoint when you start talking tens of thousands of users.  

The apps have proved to be the challenge. Of the many customers that I have helped roll out these large 50K+ user VDI systems, the only ones that have been happy and successful are the ones that have fully embraced the non-persistent desktop with a disposable pod architecture. In order to do this, these customers fully committed to our Citrix Application Streaming, and now to App-V/ThinApp, in order to handle applications that could not be put into the base OS. Additionally, it is OK to have an extra gold base image or two for those one-off applications that required lower-level drivers that could not be delivered via App-V.

All of my customers that are currently unhappy are the ones trying to deliver large-scale VDI to persistent desktops because they did not commit to remediating and virtualizing their apps. People say it is too hard to do; however, that is simply not true. It is also important to remember that VDI is not for everyone. All of my large-scale VDI customers still have populations of users that remained on physical desktops for various reasons. These users leverage VDI when traveling or remote, but still use a physical desktop for the majority of their work.

What we find over and over is that 80% of the users can easily be migrated to VDI or RDS with just a little effort, and there is business value in doing that. It is the last 20% of users, who have those difficult apps, crazy graphics, etc., that make VDI difficult. My argument is to stop worrying about those 20% of users and leave them where they are right now, or give them a persistent desktop only if necessary. Everyone wants that utopian vision of one solution and one image to rule them all; however, that is a pipe dream when dealing with 100k+ users across a global enterprise. We have better odds of finding unicorns!   We need to find simpler solutions that cover 80% of our users instead of complex solutions rolled out to everyone just so that the final 20% can be handled.

As for CIFS performance for the personalization layer, it actually works really well as long as you have properly designed your CIFS infrastructure and are using SMB 2.1 or later. In fact, if you can use SMB 3.0 (unfortunately, you need Server 2012 or Windows 8), you can actually get better performance than using NFS as a datastore with VMware. Also, the reality is that 85%+ of all the I/O is still going to come from the base OS disk, not the user personalization layer. The personalization disk should hold only those one-off apps that could not be placed into the gold disk or easily delivered as a virtual app. CIFS is the glue that makes this scalable pod architecture for VDI work. CIFS works great and is quite easy to make highly available and redundant within the data center. If all of my user data, profile data, and one-off apps are provided by CIFS, then I can truly attach to any desktop VM and get the same experience. Considering that the desktops are VMs connected at 10Gb and sitting next to the CIFS repositories, also connected at 10Gb+, performance is of zero concern!
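That 85/15 split can be sanity-checked with a quick back-of-the-envelope sizing sketch. This is only an illustration of the arithmetic; the per-user IOPS figure is a hypothetical planning number, not one from this discussion:

```python
# Illustrative only: estimate how much steady-state I/O lands on the
# CIFS-backed personalization layer versus the base OS disk, assuming
# ~85% of I/O stays on the base disk as described above.
# iops_per_user=10 is a hypothetical planning number.

def split_vdi_iops(users, iops_per_user=10, base_disk_share=0.85):
    """Return (total, base-disk, CIFS-layer) IOPS for a VDI pod."""
    total = users * iops_per_user
    base = total * base_disk_share
    cifs = total - base
    return total, base, cifs

total, base, cifs = split_vdi_iops(50_000)
print(f"total={total}, base disk={base:.0f}, CIFS layer={cifs:.0f}")
# For 50K users at 10 IOPS each: 500K total, only ~75K hitting CIFS.
```

Even at 50K concurrent users, the CIFS tier sees a small fraction of the aggregate load, which is why a well-designed SMB infrastructure is not the bottleneck.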

Ron, I am interested in how you mentioned that TODAY, the Unidesk personalization layer is attached to a machine. Does that mean that you are considering a future version that could attach the user personalization layer at logon? That would be great if you did! I see no reason why both technologies couldn’t be used together. Perhaps departmental or other app layers mounted to the VM as a VMDK at boot with a user app layer mounted at logon via CIFS?

+1 for non-persistent desktops. There are tools like AppSense and Liquidware to help with the layering of apps.


Interesting article and an even more interesting discussion :). I love it when the vendors turn out. As someone who designed and owns a 26,000-seat 1:1 VDI environment, the subject of moving to a provisioned desktop model is near and dear to my heart. We've spent years trying and failing to move. It's all well and good to call it paralysis by analysis, but the truth is that we calculated well north of 100 gold disks would be required to support our user base once you bake in the apps. And at 3,000 apps with over 70 application updates performed weekly, sticking them all in one disk and showing them based on group membership isn't going to cut it. FYI, as much as I like FSLogix, it's not unique; that has been RES's approach for years.

Realistically, the personalization approach is the best one as long as you mount at user logon. The approach that Unidesk, Mirage, and Citrix PVD all use currently is far too limiting in large-scale pooled infra. I don't want to have to do static pooled machines... I'd be right back to a lot of the same management headaches. The approach used by Liquidware's FlexApp and CloudVolumes 2.0 lets me span a pool across multiple vCenters and Hyper-V environments, roam with users in a DR scenario, etc. Citrix promised this functionality with PVDs years ago, and we are still waiting. Unidesk is extremely solid and has the longest track record, but it has some limitations in a large enterprise environment. I'm not saying FlexApp and CV are perfect, but they are taking what is to me the correct approach: render the hardware stack irrelevant.


Layering tools are still immature, with Unidesk and Moka 5 the most mature. CloudVolumes and FSLogix are still very immature from what little I have seen. Layer-type approaches may be a good direction for the future but have not succeeded so far. I think the real debate is not whether it's layers; it's whether they actually work at scale and what problems they are best suited for.


I just don't think this layers stuff is mature yet, despite years in the making. Unidesk and Moka 5 are the most mature in this space. FSLogix and CloudVolumes are still immature from what little I have seen. The real question is: what are layer-type solutions good for, and how far should they go? Citrix and VMware really have no credible solutions in this space, so I'd be surprised to see investment from them anytime soon, as they are all about EMM BS these days.