Persistent vs. non-persistent vs. RDSH doesn't matter. CloudVolumes makes it all about the apps!

By most accounts I've read, the desktop virtualization market is not growing at expected rates. As a result, the big players in the space have made a strong shift towards mobile. However, this new revenue—even if growing fast—is a fraction of the more mature desktop virtualization market.

I believe this lack of growth explains Citrix's decision to bring back the XenApp brand (to stimulate mid-market growth) and to acquire Framehawk (to expand use cases by enabling remoting over mobile networks). It also explains VMware's bet on DaaS by picking up Desktone (an attempt to carve out an adjacent DaaS market), since it's still too early to count Amazon and Microsoft in that space.

Despite all this hullaballoo, these recent moves won't accelerate the desktop virtualization market in a material way in the short-to-medium term. However, I do believe they are all valid strategies to sustain current growth rates.

Why isn't desktop virtualization growing faster?

I’ve written in the past that the desktop virtualization market is stuck because desktop virtualization doesn’t actually solve the big pain points that customers face with their PC infrastructures. Instead the industry has been focusing on fixing the barriers to entry that are symptomatic of desktop virtualization. Incumbents and the ecosystem have made reasonable progress, but for customers this typically means lots of small point products which are too complex for the value they add.

But when it comes to solving customers' key pain points with PCs, the incumbents have made almost no progress, especially the kind of progress that could open the desktop virtualization market to many more customers by focusing on practical solutions that attack the heart of the PC matter.

When you ask this question in the industry, the conversation quickly digresses into a persistent VDI versus non-persistent VDI versus RDSH debate, with smart people making strong arguments on all sides and moving on as best they can. The net result is the same: a market that's not growing as fast as it could. Meanwhile, heterogeneous environments that combine physical, datacenter, and cloud continue to become more prevalent, thus increasing complexity.

It’s about app management, stupid!

If you take a step back and ignore specific solution architectures for a moment, it's clear that the vast majority of the cost is in application lifecycle management. Just think about how much time and resources you sink into managing Windows desktop applications: packaging MSIs, app virtualization, patching, updating, inventorying, managing licensing, testing for conflicts, and managing change.

No matter which desktop architecture you choose—be it physical PC, persistent or non-persistent VDI, RDSH, DaaS, cloud-hosted, or other—the application management overhead remains. I would suggest it's the single largest component of cost in your PC environment, and it slows you down the most—killing agility along the way. The scope of the problem is magnified and becomes more complicated as you introduce solution diversity into your infrastructure, because each solution requires something different for its applications.

What’s needed is a universal application solution

To solve this, what we really need is a seamless way to manage applications across a diverse set of solution architectures, across datacenters, and between clouds. It should let you adopt the architecture over time, so it's not a religious battle to change the way you manage overnight.

I've been thinking about this problem for a while, and I believe there is a specific set of problems that need to be solved to get there. The persistent vs. non-persistent vs. RDSH debate doesn't matter. That's an architectural choice that often reflects management maturity and the use case at hand.

I discussed these problems recently with Matt Conover, CTO at CloudVolumes, whom I wrote about last year. I view CloudVolumes' technical architecture as a hybrid between layering and application virtualization that enables them to have high application compatibility while working with your existing infrastructure. A quick recap of how they do this, from my previous post:

"[CloudVolumes] achieves this by installing applications natively into storage and then capturing them as VMDK/VHD stacks outside of the OS, which can then be distributed. You may think this is just like application packaging with App-V or ThinApp but it’s not quite that. They natively store the bits as they are written during the install, in a different location, and then take note of things like services which are started and roles which are enabled into the OS. These are then 'put' onto the AppStack volume, and when complete (which can span reboots, and several apps or dependencies being installed one after the other) you tell the agent through a dialog in the provisioning VM you are done, and that VMDK/VHD is then locked as a read-only volume which can now be assigned to others.

When this read-only volume is attached to a server or desktop VM running their agent, its contents are immediately virtualized into the running OS, registry, files etc. Unlike ThinApp or App-V, it’s immediately available and seen by other applications on the system as if it was natively resident (no need to stream)—without having to do any special registry changes to see the contents of the opaque object/package within ThinApp/App-V."
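To make the attach mechanics a bit more concrete, here is a rough vSphere PowerCLI sketch of the underlying idea: hot-adding an existing, read-only VMDK to a running VM. To be clear, this is not CloudVolumes' actual API (their manager and in-guest agent orchestrate the attach plus the virtualization into the running OS); the vCenter, VM, and datastore names below are made up.

    # Illustrative only: hot-attach an existing VMDK to a running VM with PowerCLI.
    # CloudVolumes' manager/agent does this (plus the in-guest merge) automatically.
    Connect-VIServer -Server "vcenter.example.local"          # hypothetical vCenter
    $vm = Get-VM -Name "WIN7-POOL-042"                        # hypothetical target VM

    # Attach the captured AppStack volume without powering the VM off,
    # keeping the disk independent so the read-only master is never modified.
    New-HardDisk -VM $vm -DiskPath "[datastore1] appstacks/PowerQuery-Excel.vmdk" `
                 -Persistence IndependentNonPersistent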

Matt and I had a great conversation and I challenged him to illustrate how the problems I see could be solved using his architecture.

I see five key buckets of problems that prevent a universal application solution from being built:

  • Problem 1: Delivering frequently updated apps into a base image, including plug-ins and patches. Also delivering service packs to those applications.
  • Problem 2: Managing complex applications across diverse architectures (physical, VDI, RDSH/XenApp, ThinApp, App-V). This helps to avoid architecture lock-in, but requires the solution to have very high application compatibility.
  • Problem 3: Managing applications across multiple datacenters and multiple clouds (including DaaS). Again this avoids lock-in.
  • Problem 4: The solution must work with existing infrastructure.
  • Problem 5: The solution must be simple to manage and reduce console clutter.

I find it easier to unpack these problems if I can apply them to pain points I've experienced so far or that I visualize for the future. To do that, Matt was kind enough to produce five short videos to demonstrate the use cases I suggested.

1a. Deliver an Excel 2013 plug-in to Office 2013 in the base build

Most people I know install Office in the base image as a best practice. However, they have to constantly deal with installing various plug-ins, which can cause lots of testing and packaging churn. They don't want to bother with the overhead of doing this with application virtualization technologies, since they would have to handle app interoperability and app compatibility issues. In the video below, a Power Query Excel plug-in is delivered dynamically into a running session. The plug-in could just as easily be removed.

 

1b. Apply Service Pack 2 to Office 2010 in the base build in real time

This use case is self-explanatory. A service pack update usually means a painful and risky upgrade that requires lots of testing and managed change, which is usually sloooooow.

 

2. Patch a running operating system with a PatchStack.

This one is pretty cool. A Patch Tuesday-type payload is applied dynamically to a running OS. A lot of people phase in risky changes, which means they are not agile. I like this use case a lot for non-kernel patches, and the previous ones too, as a way to test, UAT, and deploy changes rapidly with a safe rollback mechanism. Certainly these seem to address Problems 1 and 4, although I'd love to see console integration work with incumbent solutions in the future to address Problem 5 in a better way.

 

3. Deliver applications to RDSH

This got me pretty excited, as I never believed this type of approach could reliably deliver complex apps to a thin-provisioned, multi-user environment. This is something I remember speaking to the Microsoft RDS team about in Redmond years ago. I love that you can now dynamically deliver apps into RDSH and take advantage of multi-user kernel goodness with high application compatibility. I call it multi-user layers. In fact, if you extend this to XenApp, all of a sudden you can start to thin provision your farms and sites and consolidate silos of application servers.

 

4. Delivering multiple apps to multiple users on RDSH

Multi-user layers enable a single app to be shared by multiple users. What about delivering different apps to different users? If you can do that too, it's a killer capability that could be leveraged by customers and service providers alike. This solves a very important area within the Problem 2 bucket.

 

5. Run applications across multiple datacenters, including Amazon

DaaS may be great, but it's the apps that matter. Microsoft with Mohoro and Amazon with WorkSpaces both use RDS. Delivering applications to these environments, as well as to VDI-style DaaS, is going to be key. The previous multi-user layer demo certainly shows this is feasible in a new way. But that's not the entire picture. What about moving apps from your local desktop OS to a datacenter/cloud elsewhere on Windows Server or RDSH? Can applications using this style of architecture be moved from a desktop OS to a server OS dynamically? The following video demonstrates this and shows a path forward: apps can be managed across datacenters. I see lots of DaaS enablement potential here. In fact, this could be a cunning way for DaaS providers to reduce their costs to deliver app diversity to their customers. In the enterprise, I see no reason why you couldn't leverage DFS to enable app availability across multiple datacenters. This goes a long way to address Problem 3 and addresses Problem 2 more holistically.
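On the DFS point specifically, what I am picturing is roughly a domain-based namespace with a root target in each datacenter, so the same AppStack share resolves locally wherever the volumes are attached. A minimal sketch with the Windows Server DFSN cmdlets; the domain, server, and share names are hypothetical:

    # Hypothetical sketch: one domain-based DFS namespace for AppStack storage,
    # with a root target in each datacenter so clients resolve the nearest copy.
    New-DfsnRoot -Path "\\corp.example.com\AppStacks" `
                 -TargetPath "\\dc1-fs01\AppStacks" -Type DomainV2
    New-DfsnRootTarget -Path "\\corp.example.com\AppStacks" `
                       -TargetPath "\\dc2-fs01\AppStacks"
    # Replication of the underlying shares (e.g. DFS-R) would keep the
    # VMDK/VHD AppStacks consistent between the two datacenters.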

 

It’s important to understand the secret sauce

When I step back and think about why this can be achieved, it's very easy to bucket a group of architectures together and not appreciate some fundamental differences. When I asked Matt if he considers his technology layering, he promptly replied that that's one for the marketing department, but insisted that the approach is virtualization above the OS. This confused me, and I asked whether he meant something like application virtualization. After a little back-and-forth, here's what became clear to me.

CloudVolumes doesn't need full VM control to do what they do. This means they can dynamically attach apps without recomposing or reboots. Since they work above the OS, they can work across multiple operating systems. Because they take advantage of VMDKs (or VHDs for physical environments), they can thin provision images and don't have to use techniques like "application cloaking," by which I mean installing apps in an image and then masking who sees what via policy. By virtue of CloudVolumes' approach, a lot more file system and registry compatibility is possible. Additionally, they don't attempt deep isolation the way traditional application virtualization containers (App-V, ThinApp, etc.) do, so compatibility goes up drastically. In fact, for CloudVolumes, application virtualization is a complementary technology, as evidenced by a recent white paper with VMware ThinApp. I see no reason why this couldn't also be extended to App-V.

So it’s important to gain a deeper appreciation of how this stuff works. I don’t really care what it’s called—suggestions anybody? What’s most important is appreciating which approach is going to enable you to solve for the broadest set of problems for your use cases.

Customers want an aggregate reduction in the complexity of managing apps

Let's face it: Citrix and VMware are not competing with each other in this space, and they're not competing with Azure or AWS running Windows Server. The biggest competitor is the status quo in the enterprise market. The seat of pain for these customers is the applications. If new approaches help address core customer pain points, then the world has an incentive to shift its approach sooner.

Little has been done to address this. Why?

Not only can the market be grown, it can be expanded to the server and cloud side of the house. In fact, I've seen some Linux app examples with this approach, and core Windows infrastructure examples such as SQL Server running on Windows Server. The salient point in all of this is that it's about app management, stupid!

Join the conversation

12 comments


I'm a massive fan of CloudVolumes and would love to have some hands on fun with it at some point.


I completely agree with the points of this post and we need ways to aggregate delivery regardless of the underlying architecture/form factor.


This is why we use Numecent's Application Jukebox for our applications. We're using it to deliver to:


- Physical


- VDI


- RDSh


- Non-managed/BYOD/COPE (software license permitting of course :P )


Key point is that it's perfect for a hybrid world where we have a mixture of virtual/physical and non-managed devices.


VMware should acquire Numecent and be done with ThinApp IMO.


My two pennies.



Great to see Harry back in his comfort zone.


However, IMHO, it's about the managing of everything, not just the app. This is where the architecture and implementation fall apart.


Each company should attach a quantifiable expense to each item:


1) moving pieces and parts (SAN, Datacenter)


2) HA / DR


3) OS and App licensing


4) Patch Management


5) BODIES and resources


6) Impact on other business priorities not getting done


Is the overall experience really solving the original goals?


I am seeing a "< 50% success rate" in large companies with around 30% utilization. This number aligns with the overall adoption numbers that Harry has referenced.


In a recent CEO meeting, I heard one executive state "IT is sucking the dollars out of the company and we can't do anything about it". This was in context to their attempt at VDI and providing a robust remotely provisioned HA environment.


In my small mind, I understood this to mean: “Who has their eye on the ball?"


However, this leadership group also understood the value of enabling information access across mobile devices (phones and tablets) while still being able to safeguard corporate data.


Perhaps MDM is a great transitional step while the other kinks in VDI and application management are being worked out?



Just to be sure: in version 1, CloudVolumes isn't the be-all and end-all; you need to test for your environment.


I tested with only about 5 applications, and at least one of them wouldn't work if you wanted to share these applications between different systems; they had to be placed on writable (unique) volumes.


I dream that version 2 may resolve these edge cases (though how much of an edge case is it when 20% of my tests didn't work?). I seem to recall there were a few applications that had to be installed in a writable volume.



Hi @Mark,


I believe the application you had an issue with was CutePDF and that the version of CloudVolumes you were testing was many releases ago, likely our 1.0 version (our current shipping version is 1.7, with 2.0 coming out shortly).


We have since resolved the issue you had with having to put CutePDF into a user writable volume. It can now be shared using an AppStack with thousands of users.


The fix was actually an installation procedural change for the specific application and not any code change on our side, so actually your 1.0 installation would work today with that new procedure.  


Please note that we have found very few if any applications which need to be put in the user writable volume to function. CutePDF as a work-around was put in the writable volume in our 1.0 release, but since then I am not aware of any other app needing this kind of treatment.


Cheers,


shaun


co-founder vp products cloudvolumes, inc.  



@SillyRabbit I agree with you that there is a bigger picture and this is a part of it. However, I also think it's all too easy to get overwhelmed, which I call the forest/tree conundrum, i.e. the old adage that some people can only see the trees, while others only see the forest. The reality is that you have to be somewhere in the middle. The forest people need to be able to look down, identify the big trees, and prioritize a plan.


So in that respect I see two big trees.


- Apps: I see apps as a big bucket of inertia and cost broadly across all desktop architectures. So attacking that head on, vs. trying to boil the ocean by enabling a fully non-persistent desktop, has I believe the highest probability of success for the investment. I truly believe apps are the key to unlocking growth in the desktop virtualization market. Up until now, the industry has been focused on removing barriers to adoption. That enablement focus now has to shift to an expansion focus; else there are big problems down the road.


- Provisioning and enabling a stateless datacenter: This is another hairball, not strictly related only to applications. Provisioning VMs with Windows/Linux workloads usually means an involved and complex process that takes time and leads to VM sprawl, as most people have a VM/image for every use case. So if you instead consolidate base images and then change the datacenter model to dynamically deliver workloads (Windows Server or Linux), you get a lot of automation in your provisioning processes. All this cloud stuff, at an IT tree level, begins with process optimization such as this.


So, for example, say you wanted to deliver and configure N unique Linux servers in the datacenter. Today that would be involved and time-consuming for most. If the model were instead to spin up a generic managed VM on your standard infrastructure, dynamically provision the workload in a few seconds, and then use something like Puppet to configure it, you would have a much more repeatable process. You can rinse and repeat that same method for Windows servers, minus the configuration parts, which Windows apps themselves tend to do a much better job of. This is what I was thinking when I wrote the conclusion of my post and mentioned the expansion to server and cloud use cases. However, due to the length of the post, and because I doubt most people who read this blog care much about what I just said, I didn't bother fleshing it out in more detail. By virtue of picking off these types of big-tree problems, I think that's how IT gets better at the "sucking the dollars out of the company" problem. In reality, though, I think that is the wrong mindset from the business. The better question the CEO should be asking is: what is the ratio of IT operational expense to innovation expense as I increase my strategic investment in the technology stack to enable competitive advantage?
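Coming back to the provisioning example above, here is a rough PowerCLI sketch of that "generic VM first, layer the workload afterwards" pattern. It is only a sketch under assumed names; the template, host, and datastore are hypothetical, and the workload and configuration (CloudVolumes-style volumes, then something like Puppet) would be applied after boot rather than baked into the image.

    # Hypothetical sketch: stamp out N identical VMs from one generic template.
    # Workloads and configuration are delivered after boot, not baked in.
    $template = Get-Template -Name "Generic-Base"
    1..5 | ForEach-Object {
        New-VM -Name ("worker{0:D2}" -f $_) -Template $template `
               -VMHost (Get-VMHost | Get-Random) -Datastore "shared-ds01" |
            Start-VM
    }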



@Harry – I have gone back and re-read the posting numerous times in order to drive myself Silly. Rather than turning this into a high-level conversation about bigger-picture politics, paradoxes, and execution, I want to stay on topic and commend the folks at CloudVolumes.


I have also tried to take in as much as possible around their Server, Cloud, and User solutions.


OK, there’s some simplification going on ;-).


How would the folks here compare CV to CTXS' PVS solution?


Silly Regards!



@SillyRabbit,


We actually work quite well with Citrix PVS (Provisioning Services) and have many large customers who use PVS today to deploy the XenApp server, which has no apps in it, and then use us to dynamically deploy/publish the applications onto those running servers. Remember, CloudVolumes AppStacks can contain many hundreds of apps and can execute scripts when they land on the target system (or when they are taken away), so instant publishing to specific users or groups on the farm can occur with simple PowerShell commands.
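To give a rough idea, publishing an app right after its AppStack lands could look something like this with the XenApp 6.5 PowerShell SDK (the app, path, server, and group names here are made up):

    # Rough example (hypothetical names): publish an app an AppStack just
    # delivered to a running XenApp server, scoped to an AD group.
    Add-PSSnapin Citrix.XenApp.Commands
    New-XAApplication -BrowserName "Power Query" -DisplayName "Power Query" `
        -ApplicationType ServerInstalled `
        -CommandLineExecutable "C:\Program Files\Microsoft Office\Office15\EXCEL.EXE" `
        -ServerNames "XA-SRV-01" -Accounts "CORP\Finance-Users"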


This saves the customer from having to put any apps in the XenApp PVS image, reducing the complexity of updating apps (it's a similar problem to XenDesktop/VDI base image updates: putting apps in the base image is bad news; even if large numbers of users use them, they are still difficult to update, requiring a recompose operation every time, etc.). Using CloudVolumes with PVS also has the added benefit of significantly reducing the size of the PVS image that is streamed over the network.


Or you could simply avoid streaming altogether and just use CloudVolumes to deploy the whole shebang, including the XenApp server itself... think non-persistent server pool, generic Win2k8/2k12 servers waiting for their workloads... and voila! One becomes a XenApp server, and the apps land a second later: http://vimeo.com/68661710 (sorry, no audio). And here come those apps:


www.youtube.com/watch


shaun



@Shaun, sorry about that; I figured you may have resolved it. Unfortunately I was never able to test it again, because I was told I'd have to pay to eval, and why would I when it had previously failed and I'd never been told that it could be fixed or had been?


I seem to recall a list of certain apps (but as you said, this was a long time ago and just on release)


Hope it works well; maybe I'd revisit under more favourable terms once version 2.0 has been released.


Cheers



How do they handle apps that require reboots to complete installs?


What about ordering of apps? All layering-type solutions suffer from this. I know Unidesk has got better at this over the years. I guess confidence grows with time and more apps being supported this way, just like the early days of app virt.


How much does it cost?


What about conflicts with other agents, antivirus, PVS, etc.?


How about management at scale in many data centers?


Would be great to see it on XenApp. Overall cool stuff, but would need to get more hands on to gain confidence. I worry about the overall experience with these types of things.



Pretty cool stuff. You project yourself to be different from other solutions on the market...


How is this different from VMware Mirage? If I am a VMware shop, why would I want to invest in another product (which leads to separate billing, support, management, etc.)?



Hi Daniel


I would be interested to know your experience using Numecent's Application Jukebox. I have been following them for the last 2 years but am unsure about their success.


If you can please share your experience.


Matheen



@appdetective,


Response to your questions:


1) Apps which require reboots during install: This is a non-issue. During the provisioning process you install the app(s) (you can install hundreds at a time if you want); these installs can span reboots, multiple reboots, loading of dependencies (.NET?), etc. Until you say you're done, the AppStack is intelligently capturing your installation.


2) How much does it cost: This depends on your use case and volume (number of seats). You can give us a call and we can discuss your specific environment and use.


3) What about conflicts with other agents, AV, PVS: It depends on which agents you are referring to, but for example the CloudVolumes agent can peacefully co-exist with Citrix PVD installed in the VM, where PVD is used for UIA and we are used for dept.-installed apps. AV should be in the base image (general best practice). PVS: we work with PVS today, no issues. We have several customers using us with PVS-deployed XenApp servers, where we are used to dynamically deploy the apps (think non-persistent XenApp server with no apps installed in the base).


4) How does it scale across many datacenters: It depends on what you mean. If the private datacenters can see each other (e.g. direct line of sight to one of our CloudVolumes Managers and to the storage, whether shared storage or DFS) and we can interact with the hypervisors, this would work out of the box, whereby one of our managers could deploy and manage apps/workloads across your private datacenters. Public/private cloud example, deploying an app to a local datacenter and Amazon: http://vimeo.com/88141748


Note that a single one of our managers can easily handle many hundreds of thousands of requests for AppStack/UIA attachment. It's a web service, so you can stick a load balancer in front of it or do DNS round-robin and you're set for a very, very large environment; we aren't inline in the data path either. Think VMDK/VHD broker (it was actually designed in a similar way to how VDI brokers work).


5) XenApp: Here's an RDSH demo. In a XenApp environment we can also auto-publish the app using PowerShell scripts when the AppStack is attached, making it completely dynamic. http://vimeo.com/88139685


shaun


