VDI disk images: Shared versus 1-to-1. Which requires more resources in today's world?

Personal disk images versus shared images--which is better for VDI? This is one of the biggest decisions that companies need to think about when designing VDI solutions.

Each has pros and cons. Conventional wisdom is that:

Shared / non-persistent / 1-to-many images are easier to manage (since you only have to update a single master): you can handle applications with app virtualization (App-V, ThinApp, InstallFree, Endeavors, Symantec) and user settings with user virtualization (AppSense, RES, Scense, triCerat, Liquidware, etc.). Shared images are also where we think about "layering" (Unidesk, Wanova, MokaFive, RingCube, Atlantis).

Personal / persistent / 1-to-1 images, on the other hand, are "easier" because you just keep doing your desktops the old way. But the downside is that they're harder to manage (you need something like Microsoft SCCM, Symantec Altiris Client Management Suite, or BigFix to handle the patching and updating). Personal images also take up more disk space, which translates to more IOPS and a higher overall storage bill (because you literally have a separate VMDK or VHD for every user).

So that's conventional wisdom when it comes to VDI storage: Shared is lighter on the resources (but maybe not as personal), while 1-to-1 is great for personalization and easier to implement, but it comes at the cost of higher resource requirements.

Is this still true in 2011?

Consider the following:

(1) The "expense" in storage today is IOPS, not raw capacity. So yeah, maybe 1,000 users would need 20TB of storage with personal images where you could get away with only 1TB using shared images, but your 1,000 users would need 50k IOPS regardless of whether they were sharing or not. And conventional sharing mechanisms (linked clones, thin provisioning, etc.) are actually WORSE for IOPS because every user accesses the same physical base image. This means that you can't "automagically" just share a single master without getting some advanced storage involved (caching, acceleration, tiered storage, SSD, or a combination of all of them). And if you have to do that anyway, then why are you messing around with all this layering in the first place?
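To put rough numbers on that, here's a back-of-the-envelope sketch using the figures from the paragraph above (the per-user values are illustrative assumptions, not benchmarks):

```python
# Back-of-the-envelope VDI storage math. The per-user figures below are
# illustrative assumptions taken from the paragraph above, not measurements.

USERS = 1000
GB_PER_PERSONAL_IMAGE = 20   # a full VMDK/VHD per user
GB_SHARED_TOTAL = 1000       # ~1 TB total for a shared master plus deltas
IOPS_PER_USER = 50           # steady-state desktop workload estimate

personal_capacity_tb = USERS * GB_PER_PERSONAL_IMAGE / 1000
shared_capacity_tb = GB_SHARED_TOTAL / 1000
total_iops = USERS * IOPS_PER_USER  # identical either way

print(f"Personal images: {personal_capacity_tb:.0f} TB, {total_iops} IOPS")
print(f"Shared images:   {shared_capacity_tb:.0f} TB, {total_iops} IOPS")
# Capacity differs 20x, but the IOPS bill -- the expensive part -- is the same.
```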

(2) This brings up the second issue which is that shared images and layering are complex. I've written previously about something I'm calling "Madden's Paradox" which states that if your desktop environment is simple enough that today's layering technologies work, then you can probably just use Remote Desktop Session Host (Terminal Server). The consequence of this is that most of today's VDI environments actually do NOT use shared images (which is kind of backwards to what we all thought it would be). Now to be fair, layering is evolving, but so are storage technologies.

(3) Desktop-specific storage technologies are getting really awesome now. It's taken a few years for storage vendors to get their products tuned for desktop workloads. (Remember most of them came from the server world where we really cared about capacity and read optimizations.) But now we're seeing stuff from DataCore, Xiotech, NetApp, Fusion-io, Virsto, Atlantis, and about a hundred other companies who are doing everything they can to provide high-speed, high-IOPS, cost-effective storage (things like block-level single-instance storage, block-sharing across LUNs, block-level tiered storage and caching, etc.).

(4) And the final irony of this all is that even with personal disk images, most companies still store the most personalized stuff (user data, home folders, profile) outside of the user's desktop VM. So that means that all these desktop disk images -- even fully personal 1-to-1 images -- are almost completely identical anyway! (I mean they have the same OS, same patches, same basic apps...) So even when different, they're the same, which means our advanced storage technologies (which we have to have regardless) can kick in and let them scream.
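The "block-level single-instance storage" idea behind point (4) can be sketched in a few lines: hash fixed-size blocks and store each distinct block only once. This toy example (made-up byte payloads, not a real disk format) shows why two "fully personal" images that share the same OS and app blocks dedupe down to little more than one:

```python
import hashlib

def unique_blocks(images, block_size=4096):
    """Count distinct blocks across a set of disk images.
    Block-level single-instancing stores each distinct block once."""
    seen = set()
    for image in images:
        for off in range(0, len(image), block_size):
            seen.add(hashlib.sha256(image[off:off + block_size]).digest())
    return len(seen)

# Two "personal" images: identical OS/app payload, tiny per-user tail.
base = b"".join(bytes([i]) * 4096 for i in range(16))  # 16 shared blocks
img_a = base + b"user-a-settings"
img_b = base + b"user-b-settings"

shared = unique_blocks([img_a, img_b])                    # stored once
separate = unique_blocks([img_a]) + unique_blocks([img_b])  # stored per image
print(shared, separate)  # far fewer blocks when single-instanced
```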

To be clear, I am not trying to suggest that personal images are better than shared images. Of course this is something you need to evaluate on a case-by-case basis. What I am saying is that while the conventional wisdom has been that shared images consume fewer resources than personal images, I don't think that's true in 2011 anymore.

Join the conversation

13 comments


What about Citrix PVS?  This wasn't mentioned and does resolve much of the IOPS issue for shared images.


@Brian,

Any specific reason these were left out of the lists above?

For app virtualization: Citrix streaming
For user virtualization: Citrix UPM
For shared disks: Citrix PVS


I didn't mention PVS because I didn't get into any products that enable just sharing. I talked about layering products and about products that accelerate personal disks, but PVS is neither of those. Maybe it needs its own section? :) Though really I don't think it's relevant here.


Then for user virt, I didn't mention UPM since it's a feature, not a product, and I didn't mention XenApp streaming because Citrix is now "leading with Microsoft" and pushing App-V.


#1 and #3 seem to deal with disk related issues, #2 deals with the usefulness of shared images over shared desktops, but I don't see anything dealing with the issue of mgmt complexity with 1-to-1 images.  You allude to it in your top portion but don't address it at the bottom.


Isn't one of the benefits of VDI single-image mgmt? If you go 1-to-1, take away single-image mgmt, move all user data and OSes to expensive SAN storage, and pretty much do everything like you do with physical desktops, then what's the point? Is the benefit of VDI then just reduced to being able to access your PC from anywhere?


If this is the direction we're going, maybe XenDesktop and other VDI products will be reduced to an agent that runs on physical PCs that you could then view from anywhere. You can already use the Citrix VDA agent this way with XenDesktop...


Definitely got me thinking a bit here, Brian, but I think truly getting to the bottom of this requires a deeper dive into the operational costs of each approach. I'd wager that if you added all of the costs up, the operational management costs of desktop management will outweigh the hardware side by a decent margin.


In my view, the drive to get to standard images with application- or user-specific layers is about much more than hardware savings. The real promise it holds is to arrive at some type of hybrid between a locked down desktop (cheap but rare) and one where the user has full admin rights (expensive and common).


While there is some new complexity in adding user personalization layers to a standard image, I would argue that it is less complex than managing a population with a high percentage of admin users. Also, the complexity in layering approaches tends to be concentrated at the initial implementation, while the complexity and pain of widespread admin rights only grows over time.


Also, once you move to a "componentized" desktop, it will make it much easier to shift course as underlying storage capabilities and economics change. If you have decoupled the user from the delivery method, you can use terminal services wherever it makes sense and flip users (individually or en masse) to VDI as business needs demand and/or storage optimization puts the economics within reach for you.


Doug Lane


AppSense


Having 1-to-1 images is really the best way to deliver VDI today. But the problem is what happens when you move from Windows XP to Windows 7, and later to Windows 8 and beyond: you have to keep upgrading your workstations, users have to install their applications all over again, and it becomes a big migration project. So to avoid that, what do you do?


1. Use application virtualization (but let the disks stay persistent).


2. Use something like AppSense/RES (but let the disks stay persistent).


3. If you've gotten the first two right, this is where the whole shared disk concept fits in for IT managers, where the pooled approach starts, and where shared disks start to make more sense. If the first two aren't quite right in your enterprise, shared disks can become more of an issue than a help.


Our customers use persistent disks for XP-based VDI and use NetApp dedup to draw storage cost savings, but the migration to Windows 7 was still a big project, because users had admin access and their need for more storage never ends: a user can install any software from the inventory and keep it even if he needs it just once a year.


So I think that without app virtualization and user virtualization, shared images are difficult; with those two, it's much easier and more practical.

I usually try not to jump in on too many conversations that hit close to home, but this one… I just had to.


DISCLAIMER: I work for Unidesk, one of the layering vendors listed. So if I were a financial analyst, I would have to disclose that I am long layering.


I think there is an in-between; we at Unidesk do it, and Brian has mentioned some of the other layering-type technologies that are trying.


I think you need both. Yes, there are use cases for kiosks/labs, but centralization of desktops, or desktop replacement for people who don't want a Terminal Services experience, needs a 1:1 desktop.


With that said, you also have to shrink the disk footprint enough to take advantage of some of today's disk technologies (like SSD). Buying all super-fast disk (SSD/Fusion-io) is cost prohibitive, especially when only part of the data in VDI is HOT and read a lot from disk. Then you also have to be able to handle the writes with some of that disk technology, and that depends on your hardware vendor supplying a way to handle large numbers of random writes.


With that said, let's hit the topical points one at a time, just to give you my take:


1) Yes, the expense today is I/O, not raw capacity, if you assume rotating disk only. But pretty much every major vendor and all the startups are doing SOMETHING with SSD or something like it. In that space the problem is reversed: with SSD, I/O is plentiful but capacity is limited. Like anything else (memory, I dare say), the price will continue to come down while capacity and quality go up.


But at the same time, we have to do SOMETHING today. That means leveraging SSD or something like it to get the I/O, while still using rotating disk for transient info and cold data (page file, boot files, things not hit hard by I/O or hit less often).


On the more affordable side there are products like the EQL PS 6000 XVS I have been testing recently, which mixes SSD and SAS to let "hot" data pages move to SSD while cold data sits on rotating disk. On the other end of the spectrum is something like FAST setups from EMC, doing something similar on a larger scale.


Of course, the 6000 XVS has about 2.4 TB writable, and with 1:1 thick provisioning that's about 90 or so VMs. My testing was that shared layers for desktops (Unidesk desktops) would give us about 250 1:1 desktops. So you need both technologies to really put a dent in the storage cost.
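Taking the figures in this comment as given, the capacity math works out roughly like this (the per-desktop sizes are back-solved from the counts, not measurements):

```python
# Rough capacity arithmetic behind the PS 6000 XVS numbers above.
# The VM counts come from the comment; per-desktop sizes are derived.

WRITABLE_GB = 2400   # ~2.4 TB writable on the array
thick_vms = 90       # thick-provisioned 1:1 desktops that fit
layered_vms = 250    # 1:1 desktops that fit with shared layers

print(WRITABLE_GB / thick_vms)    # ~26.7 GB consumed per thick 1:1 desktop
print(WRITABLE_GB / layered_vms)  # ~9.6 GB consumed per layered desktop
```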


My take for the last two years? I/O is a hardware limitation and it will be solved by hardware. BUT we can do intelligent things in software to help users out some. So on the Unidesk side (and others), we use shared layers (shared app and OS layers) to reduce the footprint and leverage SSD more economically. Sharing layers is also a good thing for SSD, since the shared data is stored in a very small space rather than duplicated over and over in unique VMDKs.


2) Can layering be complex? Sure. And often when you hear "layering," the term is used in different ways (I had a VMW guy tell me they had profile stuff now to cover the persona layer; he then pointed at the profile/persistent disk in View). In my case, I will refer to layers as file system and registry layers that can be combined to present a single C: drive to a desktop.


Is real file system and registry layering tough? Well, it's tough to write code for, that's for sure! But for the most part it works. Will it work with every app in your environment? Often, yes. Are there issues with specific applications? Sure, just like any technology. But I will say a layered file system approach has fewer NATIVE issues with applications than app virt does, since the app's files and registry entries are present at boot and file system layering doesn't "bubble" the app at run-time.


3) There are some great storage technologies out there. I have had fun with a lot of them recently. The fastest I have seen yet is the Fusion-io stuff. I just wish I could figure out a better way to leverage it than I currently am.


4) Here is where it gets interesting. You are right that most people will still do folder redirection for stuff like My Documents. But the cool thing is that a 1:1 image JUST WORKS the way a desktop is supposed to. No worrying about the profile loading or unloading, whether the admin set it up to save this part of my configs, or whether the web plugins I am allowed to install get wiped out when the admin refreshes the centralized image.


But of course I think the 1:1 desktop in the Unidesk world is the blending of the two technologies (shared and 1:1): the ability to have a 1:1 desktop but still have shared images/layers (back to layering again). Folder redirection will get some data off the system, but the FEEL and experience of the desktop is huge to the end user. I will say most of our customers have come to us because they piloted, had problems with profiles or stuff not looking right, apps not working, web plug-ins disappearing, and didn't want people to have a Terminal Server experience, BUT they need the storage reduction of shared layers.


As I said, I’m jaded.


Wow. Sorry for the long post. You get typing in this LITTLE BITTY BRIAN WINDOW and you don't realize you have brain-puked all over the site.


Hey Brian, how about a preview so I can realize when I am pontificating endlessly!?


I was told there would be no math ....


@ron - as always, a well-articulated point and counterpoint as to how this movie might play itself out. Technology aside, I am of the opinion that the specific answer for any customer and any use case is a line my old college economics teacher used to repeat over and over...


Do the math.


The aggregate real-world activity of what users in a population of physical desktops are going to do on ANY of the new or alternative technologies we are talking about these days is not going to change. We have a morning login event. People log in to check their Facebook and Twitter accounts and read some news. They then click Outlook, delete some messages, open others, save attachments to the desktop. Application activity increases as they "work." We see a mid-day lunch pause, a repeat of the first steps, and then they save off their work before they go home. Not to sound too simplistic, but what they NEED is a function of what they have and use today (resources and apps).


An example: in Stratusphere 4.8 (end of month) we are releasing per-application I/O measurements. Per user, per application, and of course the ability to group this by OU, location, etc. The ability to peer into 500 or 50,000 users and know, irrefutably, what they did over the course of a day, week, or month should help us all in answering this question: shared or 1:1? And then, after deploying a pilot or small production environment, the ability to validate that the user/app/machine/fabric is behaving is KEY.
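A per-user, per-application I/O roll-up of that kind can be sketched generically. To be clear, the sample records and field layout here are hypothetical illustrations, not Stratusphere's actual schema:

```python
from collections import defaultdict

# Hypothetical per-process I/O samples: (user, application, OU, avg IOPS).
# Real agents would collect thousands of these per day per desktop.
samples = [
    ("alice", "outlook.exe",  "Sales", 42),
    ("alice", "iexplore.exe", "Sales", 18),
    ("bob",   "outlook.exe",  "Eng",   35),
]

by_app = defaultdict(int)  # aggregate IOPS per application
by_ou = defaultdict(int)   # aggregate IOPS per OU / location
for user, app, ou, iops in samples:
    by_app[app] += iops
    by_ou[ou] += iops

print(dict(by_app))
print(dict(by_ou))
```

Grouping the same samples by user or by hour of day would give the "heat map" view mentioned below.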


Having a heat map of user activity, and doing the math, will allow you to avoid creating hot spots or cold spots (under- or over-provisioned resources), which, as the posters have mentioned, leads to either good or bad user experience and project perception.


Inexpensive user virtualization is also important in either case. I would argue de-dupe can also be integral. The block-level similarity of a population of users in a virtual environment, or any office setting, would surprise most of us, I bet.


User-installed apps and toolbars? Not sure. I would say the majority of the large customers we work with are still scratching their heads on that one. Looks good in PowerPoint, but I can't point to any 5,000-seat environments where users get to tinker.


DONE: Storage reduction
DONE: I/O optimization
DONE: Getting scale to work FOR you (image/app management)
Separation of the user space
DONE: DR - ability of your build to be replicated
DONE: Scalability


These seem to be the math equations being worked on and solved today. From an econometric viewpoint, stateless desktops' TCO should always be less than 1:1.


Provisioning servers are very interesting, I think, and do solve IOPS in shared environments. App-V and ThinApp streamed libraries get interesting too.


We can all build stateless desktops with the Legos available in the market now: scalable, resilient, agile, and cheaper than 1:1.


However, that being said, we know that there are other drivers in any project that may favor 1:1


T.Rex


Liquidware Labs


Good one, T-Rex. Just have to hit this one:


"User installed apps and toolbars? Not sure. I would say the majority of the large customers we work with are still scratching their heads on that one. Looks good on powerpoint - but I cant point to any 5000 seat environments where users get to tinker."


Ask VMware about their internal View deployment and the 1:1 desktops... ask about the biggest complaints with the ones that are linked clones, and the biggest IT headaches with the ones that are thick provisioned and managed with traditional tools.


As one guy said to me, "It all worked fine till I logged in one day and 'my stuff' was gone."


In financial sectors, because of regulations? No, they will not get to play. But check out some of the others; they are there. I think some of our hangover is due to the fact that most VDI deployments are eating into environments that WOULD HAVE BEEN Terminal Services, not full desktops.


Use cases that would have been full desktops are a different story and often have different requirements.


One-to-Many or One-to-One, that is the question.


There will always be technology out there advancing to a degree that will enable tremendous cost savings for both methods, but in the end, is it really worth it? What are you actually trying to do? Is there actually a need to have duplicated instances?


I will cut straight to the point: one-to-one is a wasteful desktop delivery method that is actually a smoke screen covering the real issue, which is user management. It is a poorly designed way to manage your user/app personalization.


One of my current roles is development on a system that many people log into and use. They select which role they want to log in as, and once logged in they receive a "desktop" and "applications" that they can use throughout their day to get their work done.


The workflow is generalized to a degree, where all users share common features and workflows. Personalization comes in at another layer (role) to keep the baseline as small and simple as it can be.


Think of this like a tree. Tree Trunk = OS, Branches = Application Sets, Leaves = People.


The leaves may stem from different branches, but they all grow from the same tree trunk. Now, if every leaf can be grown on that tree, then we don't need another tree trunk.


Generic workflows enable single instance management, where the majority of the management is focused away from the OS and Apps and onto the people.


The only downside is that centralized approaches come with a single point of failure, which we all know too well.


The key to enable IT service management to be as efficient and streamlined as possible is single-instance management.


Further to my last post, it's not about which desktop delivery method requires fewer resources, because technology will always change, and whether one is better than the other is irrelevant.


What is relevant is the purpose of each, and One-to-One makes little sense to me.


Focus on the real problem: User Management.


Enable single-instance management on both client and server based computing infrastructures while retaining user/app personalization and this is the ultimate solution.


User Virtualization is the king of the three.


I'll throw in another wrench here.


I'm in a deployment right now using PVS servers and XenServer. The problem is that the gigabit network is having trouble keeping up with the SAN, so that's our weakest point.


So to address this, specifically for the cache, EdgeSight data, and antivirus definitions, we are caching to the local 'HD', which is attached via fibre to the XenServer pools.


But my problem with your theory, Brian, is that it's hard to imagine storing 5,000+ individual images, much less assuring hot-site disaster recovery for all of them.


I suppose the question becomes are you wanting a traditional desktop replacement or to move forward with a more truly mobile workforce.


With Citrix PVS servers, you can have several going without needing a SAN. SSDs are way faster than what the current communication frameworks can even touch, so the cost would be reduced significantly by having a few standard images for your users. Storage cost is barely even a factor, so as long as you're using good profile management, you've got a lot of flexibility.


Just a thought.

