Is today's VDI really only about non-shared, personal images?

Today's topic is something that BrianMadden.com user "AppDetective" has been talking about in almost every one of the 243 comments he's posted. Essentially he's saying that VDI today is limited to persistent, or "1-to-1," disk images. (This is where each user has his or her own disk image, and changes are saved from session to session.) This contrasts with what most of the big vendors are pushing, which is the "shared" or "non-persistent" model (where changes are not retained when a user logs off and each new logon boots a clean image from a master template). Several readers have asked for a full-fledged analysis and discussion of this topic, so here it is!

Some Background: How did we get here?

In the early days of VDI, everything was done with the persistent 1-to-1 images. This is mainly because it's easy. You can practically P2V your existing desktop computers to create the disk images for the VMs in your datacenter that will drive your VDI environment, giving you the essential benefits of server-based computing (Management, Access, Performance, and Security) for not a lot of work. Nice!

The problem, of course, is that datacenter storage is orders of magnitude more expensive than desktop hard drives, which means this solution is orders of magnitude more expensive than old-fashioned local desktops. (This is not to suggest that VDI with 1-to-1 images is never useful; it's just that it costs a lot, so its use is limited to the folks who truly need it for one of the other advantages.)

Of course, over time, various techniques for reducing the overall storage footprint of VDI have been created, the two most prominent being thin provisioning and data deduplication.

Thin provisioning versus data deduplication

Thin provisioning is the concept of a single "master" disk image being used as the starting point for additional derivative images. So with thin provisioning, many VMs can essentially "share" the single master image by mounting it in a read-only way. Each VM's "writes" are written to its own additional disk image file, often called the "delta" image because it contains only the changes that that particular VM made from the master image. When a VM boots up, the disk image that it sees is actually a combination of two physical files--the read-only master and the individual delta file.
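To make the mechanics concrete, here's a minimal Python sketch of that arrangement, treating a disk as a toy block map. (This is just an illustration of the copy-on-write idea, not any particular hypervisor's on-disk format.)

```python
class ThinProvisionedDisk:
    """Toy copy-on-write disk: reads fall through to a shared read-only
    master unless the block has been written, in which case the block
    lives in this VM's private delta."""

    def __init__(self, master):
        self.master = master   # shared, read-only master image (block -> data)
        self.delta = {}        # this VM's private writes (the "delta" file)

    def read(self, block):
        # The delta wins; otherwise the VM still sees the master's copy.
        return self.delta.get(block, self.master.get(block, b"\x00"))

    def write(self, block, data):
        self.delta[block] = data   # never touches the shared master

master = {0: b"boot", 1: b"os"}
vm_a, vm_b = ThinProvisionedDisk(master), ThinProvisionedDisk(master)
vm_a.write(1, b"patched")
print(vm_a.read(1), vm_b.read(1))  # b'patched' b'os' -- deltas are isolated
```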

There are a few advantages to thin provisioning. First and foremost is that the actual provisioning process happens very fast, since creating an additional instance of the image is essentially nothing more than telling a VM to mount an existing master image and to save its changes to a new image. Thin provisioning is also a great drive space saver, since literally thousands of VMs can share the same master image with their own small delta files.

The challenge, though, is that these "small delta files," if left unchecked, can grow into "large delta files." I mean, think about it. Imagine what your laptop looked like on Day Zero--maybe 20GB consumed. And now you probably have 200GB consumed, meaning if your image had been thin provisioned, your delta file would be somewhere north of 180GB! Now multiply that by as many VDI users as you have!
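To put rough numbers on that (using the assumed figures from the paragraph above), the shared master quickly becomes a rounding error next to the accumulated deltas:

```python
# Back-of-the-envelope storage math with assumed, illustrative figures.
master_gb = 20     # the shared master image
delta_gb = 180     # per-user delta after a few years of unchecked writes
users = 1000

total_gb = master_gb + users * delta_gb
print(f"{total_gb / 1000:.1f} TB")   # ~180.0 TB -- nearly all of it deltas
```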

So after some period of time (maybe even only a month or two), the delta files are taking up so much space in your SAN that it probably almost doesn't even matter that they were thin provisioned in the first place! This is where the concept of "data deduplication" comes in. Data deduplication (or just "dedupe") is exactly what it sounds like: it's the concept of removing duplicate sections of data from a physical disk system. I think every SAN vendor offers some kind of dedupe capability, as do several software vendors whose solutions work no matter what kind of hardware you have.

Most dedupe solutions are out-of-band processes, which means that the data is actually written to the physical disk, and then some process runs (maybe each night) that scans everything and looks for duplicate chunks of data. When dupes are found, the file table is updated and the dupes are removed, thus freeing up space for more data.
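Here's a toy sketch of such an out-of-band pass, hashing fixed-size chunks and collapsing duplicates to pointers. (Real dedupe engines are far more sophisticated, and many use variable-size chunking.)

```python
import hashlib

CHUNK = 4096  # fixed-size chunks for simplicity

def dedupe(volume: bytes):
    """Toy out-of-band dedupe: keep one physical copy per unique chunk."""
    store = {}   # chunk hash -> the single stored copy of that chunk
    index = []   # per-chunk pointers that replace the raw data
    for off in range(0, len(volume), CHUNK):
        chunk = volume[off:off + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk    # first occurrence: keep the data
        index.append(digest)         # duplicates become cheap pointers
    return store, index

data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # 4 chunks, only 2 unique
store, index = dedupe(data)
print(len(index), "logical chunks,", len(store), "physical chunks")  # 4, 2
```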

So as you can see, even though thin provisioning and data deduplication can both help shrink the overall footprint of your data, the two concepts are actually quite different. But what's all this have to do with VDI?

Non-persistent shared disk images

As I wrote in the opener to this article, all the big VDI vendors are talking about the concept of many users sharing a single disk image. But in their case, they're talking about sharing non-persistent disk images, i.e. each time the user logs on, they get a full brand-new instance of the master read-only image, and their delta image files are not saved when they log off. (If this were a physical PC, it would be like re-imaging the PC every single time it was booted.)

What's interesting here is that thin provisioning doesn't make an image non-persistent per se, because as we've seen, it's possible to let the thinly provisioned delta image files survive from session to session. But thin provisioning can also be used for this non-persistent mode, which is what the big vendors are talking about.
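As a sketch, the non-persistent lifecycle boils down to "fresh delta at logon, discard it at logoff." (A hypothetical toy pool, not any vendor's connection broker.)

```python
import uuid

class NonPersistentPool:
    """Toy non-persistent VDI pool: every logon starts from the pristine
    master, and the session's delta is thrown away at logoff."""

    def __init__(self, master):
        self.master = master
        self.sessions = {}   # session id -> that session's delta

    def logon(self, user):
        sid = f"{user}-{uuid.uuid4().hex[:8]}"
        self.sessions[sid] = {}      # brand-new, empty delta every time
        return sid

    def logoff(self, sid):
        del self.sessions[sid]       # delta discarded: next logon is clean

pool = NonPersistentPool(master={"os": "gold image v1"})
sid = pool.logon("alice")
pool.sessions[sid]["scratch"] = "session-only changes"  # lives in the delta...
pool.logoff(sid)                                        # ...and dies here
```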

Of course using non-persistent disk images for VDI is simple for task workers, since all the users probably have the same apps and any customizations would be simple things that could be captured in each user's roaming profile. (In this sense the VDI shared image ends up being a lot like a Terminal Server image that is of course shared by all the users of that Terminal Server.)

But when it comes to "real" users (or "knowledge workers" or whatever they're called now), the whole shared image thing is harder. A lot harder. The specifics why are beyond the scope of this article, but some quick thoughts are:

  • How are you delivering your applications into the image? If you use an app virtualization tool, how do you handle the non-compatible apps?
  • How do you handle the user settings and personality configurations that are outside the normal roaming profile locations?
  • How do you handle user-installed apps?

Where are we today?

The point that AppDetective (and others of course) make is that this whole single-image / shared-image / layering concept, while great, is just not real today. There are just too many complexities and unknowns to do this at any large scale. So until it's real, the only way to do VDI for hard-core users is to give each one his or her own personal disk image. (Again, simpler task worker scenarios can still use the disk sharing method.)

So let's talk about this. Will we ever get to the whole sharing and layering thing? (Actually that's a great topic for a future article.)

For your own VDI deployments, are you using shared or personal images? Is anyone doing "hard core" workers with shared images today?

And would anyone mind if I quickly pointed out, once again: if VDI image sharing is only useful today for task workers, why don't people just use Terminal Server instead? ;)

Join the conversation

27 comments


Brian (and AppDetective)


We can talk about the technical complexities all day long, but really we have to get to the higher-level conversation here. The only way we get to this Utopian vision of sharing and layering is through a tectonic shift in the way IT in general thinks. Concepts like layering, in particular, will take a completely different way of thinking and will be driven from the top down, not from the bottom up. Today's "monolithic" (thanks Rick D) IT organizations can't deal with this concept yet.


Which gets to my point that, if end point virtualization/desktop virtualization/etc etc etc is to succeed in any form, CIOs and other C-level executives have to "get it".  Right now there are far too many "functional heads" and not enough "business strategists" to make all this happen anytime soon.


My two cents.



Application virtualisation is really crucial in a VDI scenario, whether that be streamed and isolated or hosted. Streamed and isolated makes our base image of something like Windows 7 more modular. Applications hosted and displayed into the build help in terms of OS compatibility.


I would highly recommend "non-persistent" images. The 1GB "delta" is used as just scratch space. Citrix Profile Manager stores only regular profile settings by default; however, it is simple to include other settings. Using the GPO you just add, for instance, HKLM\software\reuters\whatever into the included settings, and voila, it is now part of the user settings.


User-installed applications are also manageable, but do require planning. If your applications are prepackaged, i.e. virtualised, you can place the objects in the base image in the Citrix radecache. The radecache can be used to "pre-deploy" the applications. This way a user install means I am just creating an icon to files that already exist in the build. Using user permissions, the base build can serve a large and variable user base.



There is an important piece missing in this article: if you use one single image for many computers, and you employ delta files per system, that is fine - initially. As Brian pointed out, the delta will eventually grow large, obliterating any space savings.


But that is not really the point. The single image approach is really useful when it comes to management. Roll out a new patch? Easy. Just update the master image.


The only problem with this is that when you update the master image, you have to "throw away" all the delta files.
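A tiny illustration of why (assumed block layout, but the point generalizes): a delta written against the old master leaves the VM with an inconsistent mix of old and new blocks once the master changes, so the safe move is to discard it.

```python
# Why patching the master invalidates existing deltas (toy block maps).
master = {0: b"kernel-v1", 1: b"app-v1"}
delta = {1: b"user-tweaked-app-v1"}   # written while the master was v1

master[0] = b"kernel-v2"              # admin patches the master...
master[1] = b"app-v2"                 # ...in place

view = {blk: delta.get(blk, master[blk]) for blk in master}
print(view)  # {0: b'kernel-v2', 1: b'user-tweaked-app-v1'} -- half old, half new
# The combined image is now inconsistent, so the delta has to be thrown away.
```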


As a consequence of this "update problem", only two use cases remain:


1) One image per machine (at high cost as pointed out by App Detective / Brian)


2) Single image that gets reset at logoff


I am not astonished that vendors are pushing model 2. The private image model is simply too expensive, so they have no option but to go for the non-persistent single image.



@Helge,


That is the exact reason why vendors like Atlantis (and Unidesk?) are in existence. They, for instance, solve this "update base image, throw deltas out" problem by combining block-level updates with file-level updates so that updating the base does not invalidate deltas. From the hypervisor's POV, those images are just monolithic disk images on iSCSI/NAS storage, but in reality they are dynamically composed.


Now, this is of course (part of) the whole big "layering" concept that there's been a lot of talk about lately, also on this site.



Kalle,


I am aware of vendors Atlantis and probably Unidesk (and others?) promising to solve the single instance update problem, but I did not mention them because both are at a stage too early for "hard facts". We (i.e. the community) do not know yet if they can live up to their promises. But it will be very interesting to see what they can deliver!



And until Atlantis and Unidesk (or others?) come out with a good, scalable, working solution, using persistence (even with a master image) will be pretty much like having a regular PC in the expensive datacenter, which will need a traditional management stack.


Who said SMS? :P


It's kinda like going to eat the cake just to find that it has already been eaten :)



The answer depends on what you are trying to solve and who you are working with. Most customers I've worked with are doing desktop virtualization for 1 of 2 reasons:


1. Security


2. Easier management


Is there one type of virtual desktop that will work for these two requirements (or all of the others people might have)? NO


Like Michael said, if you want to do desktop virtualization the right way, you have to get executive-level buy-in to force IT to change their ways. This is not an easy process, but the value of changing is too great to pass over. After the buy-in is achieved, you must realize that each group of users is unique. How many groups you have and what requirements they have will dictate the type of desktop virtualization solution you will employ (trust me, it is not a one-size-fits-all model).


Will you have 1:1 hosted virtual desktop images? yes.


Will you have 1:many hosted virtual desktop images? yes


Will you have 1:many streamed local desktop images? yes


Will you have offline desktops? yes


I'm sorry, but if you confine yourselves to a single type of solution, you will find yourself in a place that is not much better than where you are now with a distributed desktop environment.


As for the personalization layer, that must be a business decision. We can debate all year about whether we should support user-installed apps (Brian and I already have), but this is a business decision. As for personalization, roaming profiles will get you pretty far. You won't know which apps need more help until you try.


Of course this brings up the complexity argument.  Have any of you actually managed a desktop environment?  Have you seen how complex it is? How many images are required? How are updates managed? How do you fix a desktop issue because the user did something they shouldn't have? How long does it take to push out a security fix? How easy is it to manage the applications on the desktop? Patch the desktop apps? The list goes on and on.  


If you want to do desktop virtualization correctly, you need to rethink your strategy and realize you will end up with many different flavors.



Perhaps I'm missing the point, but a LOT of large enterprises already run an environment where the desktop is locked down. Sure, profile info can be changed (favourites, backgrounds etc) but these companies don't let users install their own apps, and data files (docs, music etc) are stored centrally on storage platforms as opposed to locally.


In these cases, VDI makes sense as you can have a set of master 'gold' images (e.g. per dept), then use app streaming (ThinApp, Med-V, Citrix etc) to layer the apps on top at boot time. The only persistent 'delta' here really is the profile data. To that end you will get a lot of dedupe benefits when using things like Linked Clones/FlexClones.


Where this makes it to a business case (which is where a lot of people I know are moving now) is when you hit your 5-year desktop refresh cost. Rather than spend a lot upgrading the hardware, OSes etc, you can keep the existing hardware and just invest in more storage, scale up the backend ESX/Xen/Hyper-V architecture and bolt on the VDI components.


I'm not saying this fits every use case, but when you can serve these desktops either locally (LAN) or remotely (WAN or HTTPS) it makes a lot of sense.


If you suggested to my desktop guys that a user could do whatever they wanted on their PCs and have deltas of 200GB, they'd laugh - it's unmanaged, insecure, un-upgradeable, a patch and licensing nightmare, etc etc etc. Forget what the users 'want'; they get what they're given to do their jobs properly at the lowest TCO to the business.



To get users to accept moving from a normal desktop to VDI, VDI must offer some benefits over what they already have. We can talk about management and lower cost, but what really drives widespread acceptance is if the users see benefits.


At the moment using thin provisioning and throwing away the deltas just doesn't give the users a better experience than a fat desktop.


You cannot in today's world still be thinking like @JoeShmoe and tell your users, "Tough! You get what you are given."


If we have the desktops under our control we should be able to employ tools to give users a better experience than they currently have.  At the moment I have to agree with AppDetective, in that the only way to do this (without Atlantis etc.) is to give users their own persistent OS.



@Jim


With any large project, IT needs to sell the solution to the business (users). One of the major selling points I've heard customers talk about to their users is the following:


1. Your desktop experience will be the same as what you have now


2. You will receive the latest application updates faster


3. Most desktop issues can now be resolved with a logoff/logon (seconds versus days)



@JoeShmoe - For the locked-down, app-free user environment you've described, TS (XenApp) will make even more sense. (And already has for years.)


Really, people who missed the "TS Train" years ago for lack of <something> are now thinking about buying their own "VDI Prius" just because it's hip!


The "Train" is still there and might be a better choice for you :)



Maybe I'm also way off base here, but I think this continues to show that the VDI space is still not mature enough for large-scale implementations, especially when you are talking about non-persistent images. I agree with Daniel 100% that anyone that is locking themselves into one type of solution is heading straight for a train wreck. While VDI can be used to solve tactical problems today, as debated in many threads, organizations need operational efficiencies, better user experiences, and most importantly CapEx and OpEx savings, which VDI can't deliver on today. This is because you have to use persistent images, and that model doesn't work for everyone.

My opinion is organizations need to wait until the space continues to mature, especially given that management suites are coming and that client hypervisors could (notice I said could) change the thinking around all of this. I think the best strategy right now is to focus on the low-hanging fruit and solve that with technology that is proven and sound (terminal services/XenApp). Seriously, like Brian mentioned in his article, if any organization has a good percentage of task workers, why wouldn't you spend time focusing on virtualizing those desktops on top of TS? Also spend time converting your applications to virtual packages through whatever technology fits your business requirements. It's not like that is going to be wasted time; once layering actually works or some of these solutions come to fruition, you'll just be further down the road to that utopia we all hope we get to some day.

This is what our strategy is. If this first step takes us 6, 9, 12 months, whatever, hopefully by then a lot of these challenges will be addressed. From there we can smoothly move into the next chapter of the story, whether it's traditional VDI, client hypervisors, local streaming, a combination of them all, etc.



@Ron Kuper


Ron, we did look at TS a while back, but our strategic policy is all servers must be virtualised (Windows, Solaris and Linux) unless specifically exempted. We're also a VMware shop (again, no Citrix unless by exemption). A few years back we did look at Citrix Presentation Servers on VMs, and ESX and Citrix didn't play nice (one Presentation Server VM for about 10-15 users max). And even that was killing the CPU. So for a desktop estate we'd end up with 1000s of Citrix VMs, plus the licenses. Also, not all apps could be hosted via TS, as we found during tests.


Clearly you can make the same argument for View/vSphere, although the claims about 16 desktop VMs per core on, say, Nehalem seem to make the density a little better.


So in our case VDI is possibly a better option now that it's maturing and standards such as PCoIP and HDX are coming along. Seems with vSphere and View, for example, it's 16 users per core, which is probably subjective, and I look forward to someone load testing it (Brian!).


As has been said, it's all about the use case anyway.



It doesn't matter if the number of users per core for VDI increases (due in most part to better processors), as RDS/TS will benefit as well. The reality is that one can get 4-5X the number of users on a piece of hardware running RDS than they can with VDI, so I always challenge customers to define why they perceive that they need VDI: what is it that RDS doesn't do, and have they even considered RDS? Sometimes there are very valid reasons for "some users" to go with VDI, but I'd guess that 75%+ of use cases for VDI could easily be addressed with RDS, without SAN storage, without a hypervisor... SAN and hypervisor make RDS "more manageable", and I usually recommend virtualizing RDS, but that's another topic.



@Joe,


So you are saying that just because you have a policy, an economically suboptimal solution (VDI) has to be selected instead of a better one (TS)?! That's just what's fundamentally wrong in today's IT world: doing things for the sake of technology (going all virtual in this case) instead of what's sensible to, gee, I don't know, business! No wonder IT guys generally have a hard time talking to the people sitting on top of the moneybags..


Anyhow, as has been discussed before, I have yet to see a compelling reason to do VDI for task workers/locked-down environments compared to TS. And mind you, I have not been a big fan of TS in general in the past, but most of the present VDI-related talk seems even more ridiculous. There are use cases for it, but it's not the silver bullet some would like us to think, and storage is only the beginning of the problem domain.



I totally agree with Daniel's point that one size most definitely does not fit all and that multiple solutions will ultimately fulfil the needs of an enterprise. However, one thing does become very clear, and that is that personalization across those delivery models cannot be ignored. It becomes a more essential aspect of the deliverable.


If we come back to the original topic of whether the shared image is technically workable today, we need to ascertain what data we believe we actually need to persist between sessions (assuming that we do not wish to have the delta disk - for the reasons that Brian and Helge point out). As I see it there are three main aspects that may require persisting:


1) User Personalization information (desktop config stuff, icons, application configuration information etc)


2) User Data (documents etc)


3) User Installed Applications


Now we know that we can go some way (as Daniel suggests) to dealing with User Personalization with a roaming profile, and where/when the enterprise reaches the limits of such an approach, there are multiple vendors (ourselves at AppSense included) who can take the personalization options to the next level quickly, easily and painlessly.


User Data is typically manageable via ordinary network shares or simple folder redirection of My Documents and the like. Given that the virtual desktop is going to be online, we need not worry about the user data beyond this at this time.


We agree that the need for User Installed Applications is under debate - but for now let's just assume that the enterprise in question does not need this level of functionality, hence we can safely "ignore" it for now :-)


So, what is actually preventing us from being able to make the shared image concept a reality?


I don't see anything else of significance preventing this enterprise from being able to realize the technology today...


Simon Rust


(AppSense)



The questions that many fail to ask are, IMHO:


1) Why use a desktop operating system? There are lots of good reasons for this, such as consistent service delivery.


2) What is the service level you are trying to provide? This impacts your decision to use a shared operating system architecture vs. single session OS.


3) How mature are my IT processes? The answer will help you figure out which model to choose and when.


I talk to customers all the time who ultimately understand that the distributed computing model does not meet their ever growing needs. They understand the value of session mobility i.e. connecting to a running session from anywhere. After that it's a systems management model discussion, that also helps drive the decision on which model/OS to choose to implement on.


I agree with Michael Keen that it's often a much more strategic discussion that too often does not happen. It's about wanting to transform how things are done, and thinking about and investing in what's possible as opposed to getting lost in the constraints of today.



Hah! Citrix Blog topic today "Designing a LARGE Desktop Virtualization Solution," for a 60-100k user desktop environment. Desktop assignment type? 1:1 :)



A good article and pragmatic discussion.


There is obviously no silver bullet but as many have pointed out, TS should be an option that's explored when finding a solution to a specific problem.


A solution looking for a problem does nothing more than consume cycles of engineering time for limited benefit.


The use-case strategy (not the solution design) is the most valuable document you can write at the moment.


In reference to C-level support... how can you possibly get support for a solution that is more expensive than the current one when balanced against a lot of intangible benefits? You won't.



I'm kinda uncertain about this VDI thing. If anything, I'd go with 1-on-1 when/if the storage issue is better handled. Today's dedupe most certainly is not the solution. Not in my head at least...


As of now, I'm with the people saying that the shared model is better served by traditional TS/RDS. So, I'm waiting... Meanwhile I'm also waiting for this client hypervisor thing to materialize. Boy, time sure moves slowly... bah ;-)



@clayton I've written about use cases here community.citrix.com/.../Mending+broken+hearts+with+XenDesktop+4


Lots of C-level folks get the impact that Desktop Virtualization can bring to their business, irrespective of model. I don't really care how anybody implements. I care if people understand why, which is often lost in narrow cost conversations looking at initial capital costs as opposed to organizational transformation. Models like Desktop Virtualization lend themselves to driving greater standardization in an organization and help shift towards automation. The % of IT budget spent on operational tasks, in effect diluting the value of IT to the business, is shrinking. I hear that all over the place from C-level folks. They are actively seeking out ways to drive efficiency by investing in enabling architectures that increase efficiency and enable new business capability.



Hmm, a lot of vendor FUD going on here, as you'd expect...


Pushing "one solution doesn't fit all" because that's the differentiator that your company fulfills. Sure it can help, but @daniel, patching all variants of your recommendation is a complete pain in the *** and requires highly skilled operators. Why not suggest something to lower the opex costs for once?


IT needs to deliver flexibility, yet keep the operational tasks simple. None of what I am hearing here is exactly that.


CIOs don't need to buy in on the technical solution; they need to buy in on service delivery and trying to lower opex and capex costs where they can.


VDI has a place and TS has a place, but combining the two is twice the hassle. While it's nice technically as a solution, it's twice the mgmt, for two distinctly different platforms.


App virt is definitely a help with either solution. User personalisation can add a benefit, but app virt solves a lot of the app personalisation issues; along with redirected folders, I don't need user personalisation.


Get your heads outta your asses, and make simpler solutions for IT staff to manage. The CIOs will then listen, as they don't wanna continue paying us highly paid technical guys to manage desktops and apps.


I bet if you could deliver a solution that could be managed by the average PC support guy, the CIOs would be interested, as the complexity is one of the downfalls for all hybrid solutions.


KISS still applies to this day. That's my $0.02.



@ Simon Rust


Total agreement here Simon.  


I see only two real problems: persistence or pseudo-persistence of user-installed applications, and our old friend FUD. What I find surprising is that in this instance much of the FUD is internally generated.


Regards


Simon Bramfitt



Sorry to chime in late here, but better late than never...


There are a couple of things that bother me about this topic.


1) The claim that "In the early days of VDI, everything was done with the persistent 1-to-1 images." is just plain wrong. If we could ever get Brian to keep one of the appointments he's asked for with Virtual Bridges, he would learn that our VDI solution predates VMware's by several years and that our architecture always employed stateless, gold master sessions coupled dynamically with a persistent user data image.


2) The second thing is the conclusion that "this whole single-image / shared-image / layering concept, while great, is just not real today." Hello. See above. Right here. When will people start expanding the conversation beyond VMware and Citrix? Thankfully, IBM gets it. And, so does Austin Ventures who are now the backers of Virtual Bridges.


To elaborate on some technical points:


1. deduplication is a hack for needing duplication in the first place, which is an element of really bad design; despite the slickness of deduplication technology it is complicated and expensive, and is only a patch for the problem - you still need to manage separate stateful images which is no different than managing individual PC's, and therefore a non-starter for any serious VDI implementation plan


2. delta files are also a bad approach, because they need to be thrown away when the master image is updated, and again are a patch for a bad design (see #3)


3. VERDE separates user and system data in virtual desktops of any type (Windows XP, Windows 7, Linux, etc.), therefore the "deltas" are persistent across gold master updates (this has been proven over 5 years of field deployments).  Also, the system administrator assigns a cap to the size of the user "image" for each gold master deployment ahead of time, so there is no such thing as a "runaway" delta file - you plan for the peak storage you will need (see #4), and this method is better than roaming profiles for various reasons (see #5)


4. requiring expensive SAN for VDI is both a myth and an artifact of poor early (and some current, such as View) VDI implementations. VERDE uses NAS for shared storage and connections are limited to just the servers themselves - there is no 1:1 VM->storage requirement.


5. roaming profiles vs. VERDE user persistence technology - VERDE wins here for various reasons:


a. roaming profiles are a practice many organizations refuse to consider (this is based on talking to customers for years, not conjecture or surveys)


b. roaming profiles typically carry a longer log-in time than local profiles - sometimes much longer (measured in the 1 minute+ range in some cases)


c. roaming profiles only work for Windows virtual desktops, which of course is fine and addresses most of the market, but encourages lock-in for the future (like when Linux desktops become more and more useful to Enterprise in the years to come); limiting vendor lock-in at every level possible should be an important consideration for forward-looking organizations deploying VDI today


d. roaming profiles are vulnerable to VM crashes and user aborts because you have to log off completely in order to have your data preserved - VERDE overcomes this by writing user data to persistent disk much more frequently


e. even if roaming profiles are used, the VERDE mechanism still helps to greatly speed up logins because some per-user local persistence remains to keep the login process reasonably quick; most of the penalty of domain logons (which are typically required for roaming profile use) is creating the initial user profile bits that are assumed to be persistent - if these bits are not there, they are created each time and a huge time premium is paid; VERDE preserves these bits between logins to eliminate most of the overhead


f. with or without roaming profile use, the VERDE mechanism provides additional per-user persistence that can be used by scripts and/or custom applications to improve the user experience or automate IT processes - this is the per-user space you get "outside roaming profiles"


In summary, VERDE marries the benefits of thin provisioning with a far superior management model that enables centralized updates without sacrificing per-user persistence, and without runaway storage requirements and costs.



@Ruste it is admins that need to get their heads out of their behinds in many cases. It's very easy to simply dismiss this as a vendor problem alone to solve. The reality is that admins do nothing but think about constraints all too often. I'll commend the vendors above who have put in their 2c's. Certainly Quest offers both VDI and TS and an easy-to-use console, Citrix has many offerings and articulates the business value well, and AppSense offers some solutions as well; I'll reserve judgement on VERDE until I've had more time to look into it. What seems to be missing from your argument is the realization that current desktop environments are a mess and a nightmare to manage. The average PC guy has no clue and adds little value. The idea of moving to a different model to alleviate admin FUD is a good thing. Sure there are challenges, but people miss the big picture of what this means in the long haul.


What's not mature is people's understanding of why and how. It's too easy to dismiss the importance of this space when people don't get it themselves. There's a reason people are implementing; perhaps ask what you are not getting if you are not.



@appdetective I have implemented a number of times, TS/SBC in the tens of thousands, and VDI to the thousands. I appreciate where you are coming from. A lot of admins are slackers, I agree. But unfortunately this is the norm at a lot of sites I attend. And the CIOs/mgmt don't have the expertise in house to adapt to a mixed TS/SBC/VDI/profile mgmt/app virtualisation environment, and they probably can't afford to hire contractors/vendors/integrators for a long period of time post-rollout on top of the capex costs. These are the hidden costs that make a project a success or failure.


For us on the list who know how to manage all this technology, it's what we do every day and we are comfortable with it, but it might be a bit much to expect every customer to have the skills required. But I would love to be proven wrong here. It would make more projects successful.


My point is some customers need to bite off small chunks at a time, rather than a hybrid of technologies all at once (unless you're lucky enough to have a team full of gun architects & admins).


As I said, maybe I am wrong; it could just be the last 10-20 customer sites I have been to that have me doubting their ability to administer a hybrid solution.



I agree with Dan and the others that have stated you need to use a bunch of VDI solutions to make this work, and all that said, you probably still have some users that must use "fat" desktops. So why would you want to manage all of this?


I want to suggest a couple options:


Organizations need to get serious about delivering applications to the browser or RIA through AIR/Flex, Silverlight or other technology; this is similar to app virtualization in its impact on managing and updating the applications. Examples of this can already be seen in Oracle, Salesforce.com and Facebook.


Keep the existing model, while tempering your costs and desktop refresh with terminal services/XenApp for all knowledge/task workers.


Push the vendors to continue to develop cross-platform-capable application virtualization/streaming techniques and forget about virtualizing entire desktop operating systems; to quote another blog, it's "stupid".


I agree VDI has its place, but it's so small it barely hits my radar. School labs and dev shops, that's about it (maybe one or two other small use cases).


