By most accounts I've read, the desktop virtualization market is not growing at expected rates. As a result, the big players in the space have made a strong shift towards mobile. However, this new revenue—even if growing fast—is a fraction of the more mature desktop virtualization market.
I believe this lack of growth explains Citrix's decisions to bring back the XenApp brand (to stimulate mid-market growth) and to acquire Framehawk (to expand use cases by enabling remoting over mobile networks). It also explains VMware's bet on DaaS with its Desktone acquisition (an attempt to carve out an adjacent DaaS market), since it's too early to count Amazon and Microsoft as contenders there.
Despite all this hullabaloo, these recent moves won't accelerate the desktop virtualization market in a material way in the short-to-medium term. However, I do believe they are all valid strategies for sustaining current growth rates.
Why isn't desktop virtualization growing faster?
I’ve written in the past that the desktop virtualization market is stuck because desktop virtualization doesn’t actually solve the big pain points that customers face with their PC infrastructures. Instead the industry has been focusing on fixing the barriers to entry that are symptomatic of desktop virtualization. Incumbents and the ecosystem have made reasonable progress, but for customers this typically means lots of small point products which are too complex for the value they add.
But when it comes to solving customers' key pain points with PCs, the incumbents have made almost no progress, especially the kind of progress that could open the desktop virtualization market to many more customers by focusing on practical solutions that attack the heart of the PC matter.
When you ask this question in the industry, the conversation quickly digresses into a persistent VDI versus non-persistent VDI versus RDSH debate, with smart people making strong arguments on all sides and moving on as best they can. The net result is the same—a market that's not growing as fast as it could. Meanwhile, heterogeneous environments that combine physical, datacenter, and cloud continue to become more prevalent, increasing complexity.
It’s about app management, stupid!
If you take a step back and ignore specific solution architectures for a moment, it's clear that the vast majority of the cost is in application lifecycle management. Just think about how much time and resources you sink into managing Windows desktop applications: packaging MSIs, app virtualization, patching, updating, inventorying, managing licensing, testing for conflicts, and managing change.
No matter which desktop architecture you choose—be it physical PC, persistent or non-persistent VDI, RDSH, DaaS, cloud-hosted, or other—the application management overhead remains. I would suggest it’s the single largest component of cost in your PC environment, and the one that slows you down the most, killing agility along the way. The problem is magnified and becomes more complicated as you introduce solution diversity into your infrastructure, because each solution requires something different for its applications.
What’s needed is a universal application solution
To solve this, what we really need is a seamless way to manage applications that covers a diverse set of solution architectures, across datacenters and between clouds. It should let you adopt the architecture over time, so changing the way you manage isn’t an overnight religious battle.
I’ve been thinking about this problem for a while, and I believe there is a specific set of problems that must be solved to get there. The persistent vs. non-persistent vs. RDSH debate doesn’t matter; that’s an architectural choice that often reflects management maturity and the use case at hand.
I discussed these problems recently with Matt Conover, CTO at CloudVolumes, whom I wrote about last year. I view CloudVolumes' technical architecture as a hybrid between layering and application virtualization, one that gives them high application compatibility while working with your existing infrastructure. A quick recap of how they do this from my previous post:
"[CloudVolumes] achieves this by installing applications natively into storage and then capturing them as VMDK/VHD stacks outside of the OS, which can then be distributed. You may think this is just like application packaging with App-V or ThinApp but it’s not quite that. They natively store the bits as they are written during the install, in a different location, and then take note of things like services which are started and roles which are enabled into the OS. These are then 'put' onto the AppStack volume, and when complete (which can span reboots, and several apps or dependencies being installed one after the other) you tell the agent through a dialog in the provisioning VM you are done, and that VMDK/VHD is then locked as a read-only volume which can now be assigned to others.
When this read-only volume is attached to a server or desktop VM running their agent, its contents are immediately virtualized into the running OS, registry, files etc. Unlike ThinApp or App-V, it’s immediately available and seen by other applications on the system as if it was natively resident (no need to stream)—without having to do any special registry changes to see the contents of the opaque object/package within ThinApp/App-V."
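To make the capture-and-attach flow described in the quote more concrete, here is a toy sketch in Python: snapshot the OS state before and after an install, keep only the delta as a read-only "AppStack" layer, and merge that layer into a running machine's view. All names here are illustrative assumptions for the sake of the model, not CloudVolumes' actual API, and real state obviously spans the file system, registry, and services rather than a flat dict.

```python
# Toy model of the capture step: diff the OS state around an install
# into a layer, then merge that layer into a running OS view.
# Hypothetical names; not CloudVolumes' real tooling.

def capture_appstack(base_fs, install):
    """Capture everything an installer changes into a standalone layer."""
    before = dict(base_fs)
    install(base_fs)                       # run the installer against the OS
    layer = {path: data for path, data in base_fs.items()
             if before.get(path) != data}  # keep only what the install wrote
    base_fs.clear()                        # roll the provisioning VM back
    base_fs.update(before)                 # to its pre-install state
    return layer                           # "locked" as a read-only volume

def attach(running_fs, layer):
    """Merge a layer into a running OS view, as if natively installed."""
    merged = dict(running_fs)
    merged.update(layer)
    return merged

base = {r"C:\Windows\notepad.exe": "v1"}
office_layer = capture_appstack(
    base, lambda fs: fs.update({r"C:\Program Files\Office\excel.exe": "15.0"}))
session = attach(base, office_layer)       # Excel now visible in the session
```

The point of the sketch is that the layer is produced once on a provisioning VM and can then be assigned to any number of machines, which is why no streaming or per-machine packaging is needed.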
Matt and I had a great conversation and I challenged him to illustrate how the problems I see could be solved using his architecture.
I see five key buckets of problems that prevent a universal application solution from being built:
- Problem 1: Delivering frequently updated apps into a base image, including plug-ins and patches. Also delivering service packs to those applications.
- Problem 2: Managing complex applications across diverse architectures (physical, VDI, RDSH/XenApp, ThinApp, App-V). This helps to avoid architecture lock-in, but requires the solution to have very high application compatibility.
- Problem 3: Managing applications across multiple datacenters and multiple clouds (including DaaS). Again this avoids lock-in.
- Problem 4: The solution must work with existing infrastructure.
- Problem 5: The solution must be simple to manage and reduce console clutter.
I find it easier to unpack these problems if I can apply them to pain points I've experienced so far or that I visualize for the future. To do that, Matt was kind enough to produce five short videos to demonstrate the use cases I suggested.
1a. Deliver an Excel 2013 plug-in to Office 2013 in the base build
Most people I know install Office in the base image as a best practice. However, they constantly have to deal with installing various plug-ins, which can cause lots of testing and packaging churn. They don’t want the overhead of doing this with application virtualization technologies, since they would have to handle app interoperability and compatibility issues. In the video below, a Power Query Excel plug-in is delivered dynamically into a running session. The plug-in could just as easily be removed.
1b. Apply Service Pack 2 to Office 2010 in the base build in real time
This use case is self-explanatory. A service pack update usually means a painful and risky upgrade that requires lots of testing and managed change, which is usually sloooooow.
2. Patch a running operating system with a PatchStack.
This one is pretty cool. A Patch Tuesday-type payload is applied dynamically to a running OS. A lot of people phase in risky changes, which means they are not agile. I like this use case a lot for non-kernel patches and the previous ones as a way to quickly test, UAT, and deploy changes rapidly with a safe rollback mechanism. Certainly these seem to address Problems 1 and 4, although I’d love to see console integration work with incumbent solutions in the future to address Problem 5 in a better way.
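The "safe rollback" property mentioned above falls out of the layering model: if a patch lives in its own read-only layer, rolling back is just detaching that layer rather than running an uninstaller. A minimal sketch of the idea, with hypothetical names and the OS again modeled as a flat dict:

```python
# Sketch of patch-as-a-layer with instant rollback.
# Conceptual illustration only, not CloudVolumes' real mechanism.

class LayeredOS:
    def __init__(self, base):
        self.base = base
        self.layers = []                    # attached AppStacks/PatchStacks

    def attach(self, name, layer):
        self.layers.append((name, layer))

    def detach(self, name):                 # rollback: just drop the layer
        self.layers = [(n, l) for n, l in self.layers if n != name]

    def view(self):
        merged = dict(self.base)
        for _, layer in self.layers:        # later layers win, like an overlay
            merged.update(layer)
        return merged

os_vm = LayeredOS({"kernel32.dll": "6.1.7600"})
os_vm.attach("patch-tuesday", {"kernel32.dll": "6.1.7601"})
patched = os_vm.view()["kernel32.dll"]      # patched version is live
os_vm.detach("patch-tuesday")
rolled_back = os_vm.view()["kernel32.dll"]  # base version restored
```

Because the base image is never modified, the same PatchStack can be tested in UAT and then attached in production, and a bad patch is undone by detaching rather than re-imaging.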
3. Deliver applications to RDSH
This got me pretty excited, as I never believed this type of approach could reliably deliver complex apps to a thin-provisioned multi-user environment. This is something I remember discussing with the Microsoft RDS team in Redmond years ago. I love that you can now dynamically deliver apps into RDSH and take advantage of multi-user kernel goodness with high application compatibility. I call it multi-user layers. In fact, if you extended this to XenApp, all of a sudden you could start to thin provision your farms and sites and consolidate silos of application servers.
4. Delivering multiple apps to multiple users on RDSH
Multi-user layers enable a single app to be shared by multiple users. But what about delivering different apps to different users? If you can do that too, it’s a killer capability that could be leveraged by customers and service providers alike. This solves a very important area within the Problem 2 bucket.
5. Run applications across multiple datacenters, including Amazon
DaaS may be great, but it’s the apps that matter. Microsoft with Mohoro and Amazon with WorkSpaces both use RDS. Delivering applications to these environments, as well as to VDI-style DaaS, is going to be key. The previous multi-user layer demo certainly shows this is feasible in a new way. But that’s not the entire picture. What about moving apps from your local desktop OS to a datacenter or cloud running Windows Server or RDSH? Can applications built on this style of architecture be moved from a desktop OS to a server OS dynamically? The following video shows exactly that, and with it a path forward: apps can be managed across datacenters. I see lots of DaaS enablement potential here. In fact, this could be a cunning way for DaaS providers to reduce the cost of delivering app diversity to their customers. In the enterprise, I see no reason why you couldn’t leverage DFS to enable app availability across multiple datacenters. This goes a long way toward addressing Problem 3, and addresses Problem 2 more holistically.
It’s important to understand the secret sauce
When I step back and think about why this can be achieved, I realize it’s easy to lump a group of architectures into one bucket and miss some fundamental differences. When I asked Matt if he considers his technology layering, he promptly replied that that’s one for the marketing department, but insisted the approach is virtualization above the OS. This confused me, and I asked whether he meant something like application virtualization. After a little back-and-forth, here’s what became clear to me.
CloudVolumes doesn't need full VM control to do what they do, which means they can dynamically attach apps without recomposing or reboots. Since they work above the OS, they can work across multiple operating systems. Because they take advantage of VMDKs, or VHDs for physical environments, they can thin provision images and don’t have to use techniques like "application cloaking"—I refer to application cloaking as installing apps in an image and then masking who sees what via policy. By virtue of CloudVolumes' approach, a lot more file system and registry compatibility is possible. Additionally, deep isolation is not attempted, unlike traditional application virtualization containers (App-V, ThinApp, etc.), so compatibility goes up drastically. In fact, application virtualization is a complementary technology to CloudVolumes, as evidenced by a recent white paper with VMware ThinApp. I see no reason why this couldn’t also be extended to App-V.
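The isolation difference above is the crux, and a toy contrast makes it visible. In the isolated container model, a package's registry entries stay opaque to natively installed software; in the merged "virtualization above the OS" model, the attached volume's entries land in the shared view. The keys below are hypothetical examples, not real product behavior:

```python
# Toy contrast: isolated app-virt container vs. merged layer view.
# Hypothetical registry keys for illustration only.

native_registry = {r"HKLM\Software\Excel": "installed"}

# App-V/ThinApp-style container: the plug-in's key stays inside an
# opaque package, so natively installed Excel cannot see it.
container_package = {r"HKLM\Software\Excel\Addins\PowerQuery": "1.0"}
visible_to_excel = native_registry          # package contents not merged

# Merged-layer style: the attached volume's keys join the running OS
# view, so Excel sees the plug-in as if it were natively installed.
merged_view = {**native_registry, **container_package}
```

This is why compatibility rises: anything that breaks when apps can't see each other (plug-ins, shared DLLs, inter-app COM registration) simply works in the merged model, at the cost of giving up the container's conflict protection.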
So it’s important to gain a deeper appreciation of how this stuff works. I don’t really care what it’s called—suggestions anybody? What’s most important is appreciating which approach is going to enable you to solve for the broadest set of problems for your use cases.
Customers want an aggregate reduction in the complexity of managing apps
Let's face it: Citrix and VMware are not competing with each other in this space, and they're not competing with Azure or AWS running Windows Server. Their biggest competitor is the status quo in the enterprise market. The seat of pain for these customers is the applications. If new approaches help address core customer pain points, then the world has an incentive to shift its approach sooner.
Little has been done to address this. Why?
Not only can the market be grown, it can be expanded to the server and cloud side of the house. In fact, I’ve seen some Linux app examples with this approach, and I’ve seen core Windows infrastructure examples such as SQL Server running on Windows Server. The salient point in all of this is that it’s about app management, stupid!