Prediction: VDI will be ready for wholesale desktop replacement in 2010. Here's how we'll get there.

NOTE: This article was written over a year ago. Since posting the original, I've also posted a half-way point follow-up in June 2009 to see how my predictions are coming out so far.

Today, VDI is a niche solution. What's keeping VDI from taking over all enterprise desktops? There are four basic technical capabilities that are required for it to become mainstream:

  • Single disk image for many users
  • Remote display protocols that are indistinguishable from local
  • Local / offline VDI
  • Broader compatibility for app virtualization

All four of these are coming very soon. In fact, they'll all be in place and mature within the next 24 months. Let's take an in-depth look at each of these required technical capabilities:

Requirement #1. Single disk image for many users

VDI is fundamentally about regaining control of enterprise desktops. In traditional desktop environments, each desktop has its own hard drive and therefore its own unique disk image. With VDI, you pull all of these enterprise desktops into your datacenter. Do you really want to manage one disk image for each user? Of course not. From a financial standpoint (in terms of SAN storage and Patch Tuesday efforts), it doesn't even begin to make sense.

But what if you only had to manage one single "gold" master image that all users would share?

All users sharing a single master image? Sound impossible? Sure, it sounds impossible the first time you hear it. But remember that this is exactly how Terminal Server works today. Every single user loads the same generic desktop when they connect to a server. Then we use app streaming and roaming profiles and login scripts to customize that generic desktop for the user. The same will apply in the VDI space.

Why this will be solved by 2010

Citrix Provisioning Server / Ardence already solves this problem today (as it has for years). So we don't need to wait for 2010 for this technology to be real.

Beyond Citrix, VMware demoed some technology called "Scalable Virtual Images" (or "SVI") at VMworld Cannes this past January. SVI is VMware's version of this concept, where a single disk image is shared by multiple VMs at the same time. (This differs from VMware's current shipping approach, where a full "clone" must be made of the master, so each VM is one-to-one with its respective clone.)

In addition to these technologies handling the actual mounting / mapping of disk images, they also handle the mechanics and logistics needed for multiple running machines to share the same disk image. At a minimum, they must take care of things like the fact that each machine needs its own computer name and SID. Ardence handles this by intercepting disk calls and pulling requests for registry keys containing SID and computer name details from a database instead of the actual shared image. VMware's current solution is to sysprep the master, but this leads to a somewhat involved process for the creation of each clone (as sysprep was designed for deploying permanent physical PCs instead of VM disk images).

What we'll see moving forward is something akin to "fast sysprep," where the VDI vendors identify which disk blocks in the image file contain the key information (again, the SID and computer name, for example). Then, when a new VM needs to boot from the master image, the "fast sysprep" (or whatever you want to call it) will almost instantly pre-create a disk delta file, just a few kilobytes in size, from a central database holding the per-machine customizations of the master that the machine needs to boot up.

To make this 100% transparent to the guest VM, this would need to still be done at the disk block-level. This would mean that these delta files would be invalidated every time the master image was updated, but since they can be created instantly and on-demand, this is not a problem.
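To illustrate the idea, here's a minimal sketch of what that "fast sysprep" delta pre-creation could look like. Everything here is an assumption for illustration: the block offsets, the customization values, and the delta file format are all hypothetical, not any vendor's actual implementation.

```python
# Hypothetical "fast sysprep": instead of running full sysprep, pre-create
# a tiny delta file containing only the rewritten machine-specific blocks.
import struct

BLOCK_SIZE = 4096

# Offsets (within the master image) of the blocks known to hold
# machine-specific data, discovered once when the master is sealed.
# These offsets are made up for the sake of the example.
MACHINE_SPECIFIC_BLOCKS = {
    0x1A2B000: "computer_name",
    0x3C4D000: "machine_sid",
}

def create_delta(master_blocks, customizations, delta_path):
    """Write a tiny delta file containing only the customized blocks.

    master_blocks: offset -> original 4 KB block bytes from the master
    customizations: field name -> per-machine replacement bytes
    """
    with open(delta_path, "wb") as delta:
        for offset, field in MACHINE_SPECIFIC_BLOCKS.items():
            block = bytearray(master_blocks[offset])
            patch = customizations[field]
            block[: len(patch)] = patch  # splice the new value into the block
            # Delta record: 8-byte little-endian offset, then the new block.
            delta.write(struct.pack("<Q", offset))
            delta.write(bytes(block))

# Per-VM values pulled from a central database when the VM is requested.
master_blocks = {off: b"\x00" * BLOCK_SIZE for off in MACHINE_SPECIFIC_BLOCKS}
customizations = {
    "computer_name": "VDI-USER-042".encode("utf-16-le"),
    "machine_sid": b"S-1-5-21-1111111111-2222222222-3333333333",
}
create_delta(master_blocks, customizations, "vm042.delta")
```

Because the delta is only a few kilobytes and derived entirely from the database, throwing it away and regenerating it whenever the master changes is essentially free, which is exactly why the invalidation on master updates isn't a problem.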

Requirement #2. Remote display protocols that are indistinguishable from local

I've also written in-depth on this in the past. The short version is that right now we have two ways of delivering applications: (1) Terminal Server / Citrix XenApp, and (2) local / the "old school" way. Why aren't we delivering 100% of our apps via Terminal Server or SBC? One of the reasons is that today's mainstream display remoting protocols just aren't there yet. Some apps are so graphically intense that they just won't work via ICA or RDP.

By 2010, all of the VDI products will have remote display protocols that are indistinguishable from local computing.

Why this will be solved by 2010

Qumranet's Spice protocol is 100% real and available today (albeit only as part of their Solid ICE VDI product). Teradici's PC-over-IP chip-based hardware is real and 100% available today. Both of these protocols support all types of apps with performance characteristics that are indistinguishable from local computing (given enough bandwidth).

There are two more promising protocols on the horizon. One would think (hope?) that Microsoft's acquisition of Calista will produce a baseline RDP product with some phenomenal capabilities within the next 24 months. We also have VESA's Net2Display. While that's been delayed several times, hopefully it too will be real in some form in the next two years.

The bottom line with regard to protocols is that with what's real today and what's coming soon, this should be a general capability that's available to whoever needs it in June 2010.

Requirement #3. Local / offline VDI

Today's VDI solutions are server-based computing (SBC) solutions. Sure, they're connecting to Windows XP instead of Terminal Server, but fundamentally they're still SBC.

But what if we can run a hypervisor or VMM locally on a client device? What if we can run our Windows XP VM locally? This does two great things for us:

  1. We don't have to worry about the protocol problem as outlined in Requirement #2.
  2. We can potentially run the VM offline, removing the single biggest downside of SBC.

Remember, SBC has many advantages: central management, instant access from any client, great performance for three-tier apps, and "eyes-only" security. Running a Windows XP VM locally is not SBC and is not appropriate for all scenarios. But where SBC solutions don't work, being able to extend an existing SBC-based VDI solution into the local / offline world will be huge.

Why this will be solved by 2010

VMware has had their ACE product for years, which was a basic version of this. At VMworld Cannes earlier this year, VMware demonstrated what they're calling "OVDI," or "offline VDI." Think of OVDI as what happens when VDI and ACE have a baby. You can right-click and "take offline" a remote VDI instance. You can run it locally, offline, reboot it, etc. When you're back in the office, you can right-click and "take online," syncing your disk image deltas up to the server.
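As a thought experiment, here's a minimal sketch of what that check-out / check-in delta sync might look like: hash the image's blocks at check-out, then upload only the blocks that changed. The file names, block size, and upload call are all hypothetical assumptions, not VMware's actual mechanism.

```python
# Hypothetical block-level sync for "take offline" / "take online."
import hashlib

BLOCK_SIZE = 1024 * 1024  # sync in 1 MB chunks

def block_hashes(path):
    """Return one SHA-1 digest per block of the disk image."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes.append(hashlib.sha1(block).hexdigest())
    return hashes

def changed_blocks(path, checkout_hashes):
    """Yield (block_index, data) for blocks modified since check-out."""
    with open(path, "rb") as f:
        for i, old_hash in enumerate(checkout_hashes):
            block = f.read(BLOCK_SIZE)
            if hashlib.sha1(block).hexdigest() != old_hash:
                yield i, block

# At check-out ("take offline"): record the hashes next to the local copy.
#   checkout_hashes = block_hashes("winxp-user42.img")
# At check-in ("take online"): push only the dirty blocks to the datacenter.
#   for index, data in changed_blocks("winxp-user42.img", checkout_hashes):
#       upload_block("winxp-user42", index, data)  # hypothetical server call
```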

This OVDI concept is not pie-in-the-sky "someday" technology. This is actual prototype stuff that we saw running live at VMworld.

Another positive factor we have in this space is the fact that Microsoft bought Kidaro this past March. Kidaro was a management wrapper for Microsoft Virtual PC that gave it a lot of ACE-like abilities. At this point there's nothing to synchronize Kidaro with on the backend, but I'm sure Redmond is up to something.

Qumranet announced "Splice" last week at BriForum, which is technology meant to help move VDI instances closer to users in WAN environments.

Even though all of these are just basic sets of functionality or just prototypes, there's enough going on in this space now to know that this will be solved in a big way by June 2010.

Requirement #4. Broader compatibility for app virtualization

One of the real benefits of local PCs today is that power users can install whatever apps they want. This is not possible in a Terminal Server environment, since an app installed by one user would potentially be available to everyone and could really screw up the system. Sure, admins can use remote application delivery (seamless apps delivered via ICA from XenApp) or application streaming (SoftGrid / Symantec SVS / Citrix Streaming / VMware ThinApp / etc.), but there are two problems with these technologies today that are preventing widescale VDI replacement of physical PCs:

  • Not all applications are compatible with the app virtualization / streaming products of today.
  • Today, only admins can package apps for virtualization. There is no "user self-packaging" option.

Solving both of these app virtualization problems will enable Requirement #1 listed above because we'll be able to truly operate the desktop as a "layered stack," with the OS layer provided via VDI, and then the apps and user environment layered on top of that.

Why these will be solved by 2010

In terms of broader app compatibility with app virtualization technologies, that's just a slow march towards an ultimate goal, with more and more apps becoming compatible day-by-day, month-by-month.

With regards to "user self-packaging" of apps, one of the downsides of today's app virtualization products is that only admins can package, prepare, and/or approve the apps ahead of time. If we want to truly give power users the power to control their own environment, we need to let them install their own apps. Unfortunately the whole "sharing a single master disk image" thing is fundamentally not compatible with users being able to install their own apps.

But what if the user environment management products were smarter? What if the app virtualization products were smarter?

The way that many applications are packaged for virtualization or streaming or isolation environments today is that an admin goes to a "clean" machine, clicks "record" in the packaging software, installs the application, and clicks "done" in the packaging software. Then the packager bundles up all the registry changes and files that were added into the package that's to be distributed.

But what if the user environment product could put the entire user's session in "record" or "package" mode? Then the user could install some random application whose settings could be abstracted out into a "personal applications" layer of the stack.
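To make that concrete, here's a rough sketch of what session-wide "record" mode could look like: snapshot the file system before the install, snapshot again after, and bundle whatever changed into a personal-app layer. A real product would also have to capture registry changes and handle isolation; the paths and names below are illustrative assumptions only.

```python
# Hypothetical user self-packaging: diff two file system snapshots into
# a "personal applications" layer.
import hashlib
import os
import zipfile

def snapshot(root):
    """Map each file path under root to a hash of its contents."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    state[path] = hashlib.sha1(f.read()).hexdigest()
            except OSError:
                continue  # skip locked or unreadable files
    return state

def build_layer(before, after, layer_path):
    """Bundle files that are new or modified into a zip 'layer'."""
    with zipfile.ZipFile(layer_path, "w") as layer:
        for path, digest in after.items():
            if before.get(path) != digest:
                layer.write(path)

# before = snapshot("C:\\")        # user clicks "record"
# ... the user installs some random application ...
# after = snapshot("C:\\")         # user clicks "done"
# build_layer(before, after, "user42-personal-apps.zip")
```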

I don't know of any products that do this today, but many of these things are getting close. I don't think this would be too far of a jump.

(For what it's worth, this might not be a hard requirement if you believe in the employee-owned PC concept, as in those cases you could limit the corporate VM to centrally-managed apps.)

Why June 2010? Why not June 2009 or December 2008?

We have four key technical capabilities that must be in place before companies can start the wholesale replacement of "old school" desktops and laptops with VDI-based desktops and laptops. Many of these technical capabilities are available in one form or another today, and many others will be available a lot sooner than June 2010. So why am I predicting that this will take 24 months to shake out? Several reasons:

First, VDI is bleeding edge today. Sure, there are some interesting and specific use cases that make sense. But no one is really going to VDI for general desktop computing across-the-board. So let's say that Citrix or VMware or Microsoft enables one of these key technical capabilities in the next few months. Do you want to be the first to implement this and see what happens? Really, there's no hurry. Are your current desktops burning a hole in your pocket? Is there any real reason to replace everything you have now?

This space is going to change so much over the next two years. If VMware releases some cool feature, you know Citrix will one-up them, then VMware will respond, etc. Plus, the technical capabilities that come out over the next 6-9 months are all going to be v1 things. We've made it this far with dual-technology app delivery (old-school local plus TS-based Citrix). Why not wait a few more months or a year?

There is no pressure to be bleeding edge. Don't be tempted to jump on the VDI train right now (unless of course you have a specific tactical reason to use VDI today). Save your money. Take a year off.

Second, most people are waiting for Windows 7. Even in June 2008 (18 months after Vista), people just aren't deploying Vista in a big way. At this point, people are happy enough with Windows XP. I can't tell you how many conversations I've had with companies over the past year where they basically say, "We're skipping Vista and waiting for Windows 7. And when we do Windows 7, we're not going to do it in the same way that we've done things all these years."

In June 2010, Windows 7 will be out. The four major VDI problems will be solved. Everything will be in place to do VDI in a big way.

June 2010 - June 2013

Beyond June 2010? In the second half of 2010 and into 2011, VDI seats surpass SBC seats. By 2012 / 2013, VDI seats surpass the number of "old" seats in enterprise environments: 300 million VDI clients by 2013.

A quick note about Terminal Server versus VDI

Once this VDI thing takes off in a few years, we probably won't see many published desktops in TS environments, because the advantages that you get with TS over VDI will largely be gone.

However, using TS as a basis for XenApp seamless apps delivered via SBC is a huge use case. VDI is about desktops. XenApp is about apps. This mainstream VDI thing will largely replace managed desktops. But many of those desktops will receive their apps (or links to apps) via traditional SBC. (And by the way, the better quality remoting protocols will just help TS-based app delivery be that much stronger.)

33 comments

Hi Brian,

There are a few additional things that I see need to be solved, such as licensing in virtualized environments, but I can see this getting solved as customer pressure builds.

It is amazing to see how quickly we are moving with VDI - look back 6-12 months and all discussion was far less focussed than your 4 points above. Back then it was VDI vs. TS and 'does this make any sense'. Where we have got to is an understanding of how we will do VDI and that it involves multiple delivery technologies used together.

What we see from our customers is that the next two years will be about pilots moving to early implementations and nailing the management of this stuff.

All the best,

Martin Ingram,
AppSense


Agree with all the comments here, but I still see nothing that is going to change the basic maths that XenApp is going to be cheaper (for some groups of users) than either XenDesktop or VMware. I understand all the issues around application compatibility etc. that come with XenApp, but for environments such as call centres where large numbers of users need a simple desktop, it is always going to be the cheaper option. Unless anyone can explain to me why this may not be the case?

Cheers, Les

If Brian is correct and VDI will replace TS-delivered desktops, what are the advantages of delivering apps to virtual desktops via TS over delivering apps using ThinApp or Softgrid? 


Maybe this response requires its own article, but remember that server-based computing gives us four advantages (this is stolen directly from Citrix's 1997 marketing literature):

  • Management (centralized installs, etc.)
  • Access (from any device)
  • Performance (for three-tier apps)
  • Security (data stays in the datacenter)

The future VDI that I lay out (call it VDI+ or something) could be server-based or client-based. In the case of client-based VDI, there will still be specific applications and use cases where server-based seamless apps make sense. So really, for individual apps, you have to look at what the specific scenario is. Some apps will have data access requirements that dictate they be served seamlessly via SBC into a VDI+ desktop, and some apps won't, so they can be streamed into the VDI+ desktop to run "locally" on it. (Wherever that "locally" is...)

Agree 100%. We will never have a monolithic solution, and for very narrow, niche, task-based whatevers, TS-based desktops might still make sense.

One thing that's interesting is that I think the cost delta between the hardware needed to serve a TS-based desktop and a VDI-based desktop will continue to decrease--possibly even to the point that it's not a significant factor in the architecture decision process. Things like more cores and more memory that TS can't really deal with, advancements in hypervisors that better leverage memory sharing and CPU scheduling across VMs, and increasing application requirements that run in every session on a TS (.NET Framework in EVERY SINGLE session!!) will help push this delta down.

Brian, when you mention the personal application part of the stack: Gartner refers to this as the 'Bubble'. An example product is MojoPac by RingCube.
A comment or question on the issue of a single disk image for all users. My understanding is that any VDI-type solution will be built up of "building blocks" or clusters. This is dependent on the number of VMs that can be run on an ESX / XenServer host and the number of those servers that can coexist in a cluster - for instance, VMware can host 128 VMs per ESX server and can support 32 ESX servers per cluster. This gives you a cluster size of about 4,000 users. If you are designing a large VDI solution - say 20K users - then you will need 5-6 clusters depending on load, failover, etc. Each cluster has its own provisioning servers - I think 5 per cluster is the number for XenDesktop. Therefore you no longer have a single image but rather 5 or 6 distributed across different clusters. Is this an issue? I'm not sure... the images are just files, so they can be copied and distributed easily using standard scripts or management tools. My point / question is that neither of the main vendors - VMware and Citrix - seems to have addressed this issue. Is it therefore not an issue, or is there a simple solution to this perceived issue?

Don't forget that VDI is exactly the same as TS with respect to multiple instances of the .NET Framework across sessions.  Plus you've got the baggage of all the other OS processes (which are at least shared on TS).

Shawn


I agree with many of your points, Brian. However, I believe that VDI adoption will be faster, and this will be due to major pricing adjustments on the hypervisor end of things. In about 6 months from now, when Hyper-V begins to gain traction, VMware will have been forced to slash their pricing dramatically. Hyper-V will make virtualization, including VDI, a viable option for most businesses (as opposed to VMware, which is not typically deployed by SMBs). There are obvious benefits of VDI to the SMB market, such as eliminating the cost/effort of PCs and support.

We are seeing a large number of VDI pilots taking place already, and I am pegging that VDI will become a real, feasible option for many businesses within the next 6 months.

What about the profile solution?  Or do you think Citrix UPM is mature enough?

This goes for all of you... You've lost long ago. Do your control stuff in your nearest S&M club.
/15+ years in the industry. /Coward Lion

Sorry - that comment was intended for Brian. Brian - I think #5 should be that profile solutions are not mature. But what do you think? Do you think Citrix UPM or another product (AppSense, RTO, etc.) is mature enough TODAY?
Brian, I agree with those being the top 4 issues that need to be solved in order to get VDI to the next level of large adoption. What about the datacenter issue? In most firms, particularly in the financial sector, DC space is at a premium, and we're competing for the same space with the core applications that run the business. We also probably need to promote a different type of DC for VDI purposes. I don't think we need a high-tier DC for the purpose of running a desktop image. Thoughts?

Brian, great article. I agree with your 2010 prediction. I think some other factors putting VDI in the driver's seat are advancements in hardware and OS (hypervisor). In 2 more years a 16-core/32GB RAM server will be dirt CHEAP and ENTRY LEVEL. Couple this with vastly more scalable 64-bit hypervisors that can exploit them, and the arguments about "user density" shrink pretty rapidly. All the pieces are in play now; I echo that with 2 more years to coalesce, TS will start to fade away. We have real competition in this arena, which I think will produce far more innovation than the Citrix/Microsoft monopoly on TS.


Exactly. There's a big difference between virtualized desktops in a TS/Citrix environment and a VM, as now you are talking about individual OSes and all the services and processes that go with them (not to mention device drivers, virtual NICs, etc.).

2010 might be realistic, but one wildcard that wasn't discussed is the applications themselves. PCs have become extremely powerful for what they cost today - the result? Bloated and resource-hungry applications. Just look at the difference between Adobe Acrobat Reader 6 vs. 7, Office 2k vs. 2k7, IE6 vs. 7 (with MM plug-ins). Point being: how many VDIs with this typical application mix, designed to run on a current-day PC (dual core 1MB, 2.00GHz, 800FSB; 1GB DDR2; and a GPU with at least a 350MHz RAMDAC and 128MB graphics memory), are you going to be able to run with similar performance? My guess? Not too many.

Ahhh, you could with Parallels Virtuozzo. That was one of the coolest things I got from this year's BriForum (session: Hardware virtualization versus OS virtualization). I just wish that Parallels had been there to talk more about this product.

Another aspect that I believe needs to be addressed in VDI is graphics performance. As we have seen with Vista and will see with Windows 7, the UI is becoming more graphics-intensive and 3D. Today VDI does not support any sort of hardware acceleration for graphics, so this all has to be done on the CPU, causing higher resource usage on the host system and poor performance. It doesn't matter if you are in an online or offline VM; today you only get software rendering.

Thoughts?

That was mentioned in Requirement #2.

If I have a single disk image at my data center and want to deploy that to bare metal via Provisioning Server for Desktops, there are some obvious bandwidth issues.  Some of this can be solved by using the ICA portion of XenDesktop but as mentioned in the Technical FAQ from Citrix, PortICA is not the same as ICA.  There are limitations. So if I want to leverage provisioning at remote sites, would a WANScaler help me achieve my goal?

http://www.citrix.com/English/ps2/products/qa.asp?contentID=163057&faqID=1340768&title=Citrix+XenDesktop%2C+Technical+FAQ
Matt - you may be right, but in 2 years' time it could be that the entry-level PC is also up to 4GB RAM plus a quad-core CPU, just to run the new v5 Adobe CS? ;-)

Chaps, Brian's comment was about the display protocol getting the images from A to B; I think you'll find the above point is still valid.

Unless something radical happens with the way developers code their apps (i.e. the ability for the app to run "sans" graphics card and effectively run optimised and headless, knowing that its output is only going as far as the video buffer) - and even this may not be enough - we are still facing having to engineer or architect our way past the 10-15% of applications that have a heavy graphics load.

Just think of things like AutoCAD, Creative Suite, video editing, GIS products, etc. - all of these have a heavy graphics *output*. Regardless of how cute you get, that info still has to arrive at the user's device, and in some cases with medical applications the FDA insists that the full content must be sent to the client.

So from where I sit, the graphics load is still going to be an issue - it may be less of one over the next year or two - but it will still be something that represents a hurdle to be overcome.

During the recent Citrix Synergy here in Sydney I did quiz one of the Citrix XenServer specialists over from the US, and he did intimate that something like a GPU hypervisor was under discussion.


WANScaler or the Citrix Branch Repeater will certainly help with the actual file transfer, but you will still require a Provisioning Server at the remote location. This is because the client machines contact the Provisioning Server to fetch disk blocks; they do NOT go direct to storage.

At the moment this means managing PVS for each site, but in the 5.0 release there will be the actual concept of sites in the PVS farm to allow easier management of this model.

Another feature available today to make this easier is the ability to create incremental delta files that can be applied to a disk image to update it. This means that you can have PVS generate a delta that can be used to either run the disk or patch it. MUCH smaller file transfers for updates - maybe even DFS'able.

My $.02


There are a lot of pragmatic decisions going to be made on which delivery technology is used for which applications. The choices are:

  • Pre-installed into a provisioned OS
  • Virtualised applications (streamed, isolated, etc.)
  • Published

Each has its own pros and cons, and a lot of thought is going to go into picking the right tool for the job. As Brian says, this could easily become an article in itself.

Martin Ingram
AppSense


I think the answer to this (and BTW, that was my guest reply to Shawn Bass's "Big difference...") is in the desktop appliance, thin client, or whatever it gets called next week. Client redirection to process the graphics, and on-board plug-ins for multimedia, are already here. With Gigabit to the desktop, streaming and powerful appliances make more sense than huge server architecture development, as most likely applications will have to be re-written to take advantage of the new features (remember MMX, HT?). VDI still has a way to go.

I don't think the hypervisor in itself will be enough. There will have to be other hardware breakthroughs as well. You can have 400 horsepower, but if you're running on bicycle tires, it's hard to put that power to the pavement.

IMO, applications like AutoCAD and GIS, and any user who fits the "power user" category, should really still get a PC. Virtualization is for the masses, who would otherwise have 80% of the resources in a modern PC just waiting for something to do.

I agree with you, Brian: it may be mature enough to replace desktops, BUT wholesale desktop replacement will not be the norm, even in the next 7-10 years. There is just too much legacy infrastructure to make it happen. What we will end up with in the interim is your average organization having a hodgepodge of some virtual, some XenApp, and some traditional desktops. Most of these will be running on top of legacy machines and not thin-client hardware. And what we will have is a network manager's nightmare, with multiple user profiles on multiple levels of machines, on multiple OSes. Universal Windows management and profile tool vendors are what will clean all this up.
Solutions like AppSense, Script Start ProfileUnity, and RTO will be the tie that binds. Citrix's UPM will apply only to Citrix customers, and one thing we've seen since the early 90s is that Citrix solutions are cool, but they do not run the world; it is only one way to do things.

Brian, you say that there are only four basic capabilities required for VDI to become mainstream, and that VDI will replace almost all published desktops using Terminal Services or XenApp, but:

- If we have a remote display protocol that is indistinguishable from local, you can use this protocol even with Terminal Services, so why use VDI and not Terminal Services? Why spend more money on hardware and licenses, and why complicate your infrastructure?

- You can use a single image for VDI desktops, yes, that's true, but you can use one single image even for Citrix XenApp servers.

- To use a single image you need to lock down the user and correctly configure user profiles, scripts, and policies, so even there, where's the advantage of using VDI instead of Terminal Services with XenApp? Where is the advantage for users?

- You say that another problem is broader compatibility for app virtualization, but app compatibility is also one of the problems with Terminal Services, and app virtualization works even better with Terminal Services than with VDI, so solving those problems will probably solve the current problems with apps on Terminal Services too. Again, why use VDI and not Terminal Services? Where's the advantage?

In conclusion, even solving these four problems, I see a lot of advantages for VDI, but many of these advantages and new technologies are applicable to Terminal Services, and many of the current limits of Terminal Services will be solved by June 2010. So why not continue to publish apps and desktops for connected users with Terminal Services, and use VDI only for offline users or where you hit some limit of Terminal Services?

I think that your prediction is not fully correct. Probably we will see more VDI desktops than today, and probably VDI desktops will reach but not replace the number of Terminal Server desktops. We are comparing a technology that is stable and consolidated with a new technology that, as of today, is not usable for large production environments. I have worked with Citrix products since WinFrame 1.5, and it took Citrix a long time to have an SBC solution that was really usable and manageable in large environments. I think it will be the same for VDI, and two years are not that much. Probably we will first see a lot of VDI implementations in SMB and, after some years, start using it in large production environments.


Whilst I agree that VDI will gain traction over the next couple of years, I also have to agree with a couple of the other posts here that we seem to be swept along by hype at the moment.

Why is VDI so good? It's a bloated, wasteful technology which requires duplication and overcommitment of almost every resource imaginable. Yes, it allows us to centralise all of those difficult-to-manage desktop devices and reap all of the associated benefits, but when compared to a Terminal Services deployment it's laughable how much resource is wasted. Duplication of 40+ OSes per server - it actually makes me cringe that we think this is cool!!!

IMHO, the big money is there for the company that provides us with technology which allows us to remove the bloated OS, providing a thin abstraction layer between the app and the hypervisor whilst still maintaining compatibility.

As another poster mentioned, when we fix all of the issues with the remoting protocol, Terminal Services actually becomes a more compelling solution. Take this further and fix all of the 64-bit issues with applications and drivers, and we could realistically start to see 500-1000 users per server in a TS environment.

How many servers do you need to host 1000 users in a VDI shop? Even with 16-core processors and 256GB of memory, I'm guessing that TS still wins 4:1 in consolidation.

Realistically, even in 5-10 years I still see VDI as a niche solution which complements Terminal Services in a big way. I fully agree with others that the challenge is managing user data and providing a single cohesive access platform and management structure for all of these disparate pieces.

I've dabbled with VDI and have many years' experience with TS-based solutions, and I'm sure I'll still be recommending a hybrid solution in 2010 and way into the future.


And before anyone says "and why would I want to host 1000 users on a single server": it is now, and will be even more, feasible by 2010 to have a completely redundant architecture for TS. Bullet-proof, continuous operation even if a host fails.

Losing huge numbers of users in the event of failure has always been my No. 1 issue with scaling up.


So, for truth in advertising: I work for the Provision Networks division.

The first issue of the shared image is solved inherently with Parallels' solution, which is based on operating system virtualization rather than hardware virtualization. The easiest way to think of Virtuozzo technology is as a Terminal Server environment with complete isolation of each session.

The joint solution allows for 2-3 times the density of virtual desktops on a given hardware platform, a reduction in back-end storage by a factor of ten, and a deployment cost equivalent to a Terminal Services environment.

Terminal Services has been and continues to be a great solution, but the elimination of the shared environment - which allows one user to negatively impact multiple users - is a huge benefit of VDI, allowing administrators to give end users an environment that does not have to be as locked down.

VDI will be larger than Terminal Services, but not because it displaces the technology; rather, because it will expand the available pool of end users who can be served by server-based computing.

The problem with this solution is the same as the problem with deduping: it will lower SAN space needs, but it does not solve the biggest problem of VDI, which is management. Today's solutions really just move the management problem back to the datacenter. You still have to use the same methods and tools that you have always used to manage them. That does not provide an ROI case study for VDI. When you use these two technologies, you basically create user-specific images that have to be managed. With Brian's approach above, you manage a single image, like Provisioning Server: you make the change on the image and it is changed for thousands of users. I agree that this is the future of VDI.

Parallels Virtuozzo is the obvious solution of choice for many of my current customers.  Do you have any particular questions regarding the technology?


If Microsoft would spend more money developing TS, the need for VDI wouldn't be there. The kinds of gadgets being developed for VDI would make much more sense in a TS environment. What about virtual workspaces in TS environments where users could install apps and create packages on the fly without affecting the server? Why do this in VDI when it would be so much better in TS?

I honestly don't see VDI scaling enough for the technology to survive in the datacenter. Until you can share processes between the VDIs it won't scale up. Oh wait, now we are talking about TS again.

I am excited about all this virtualization buzz because it's pumping ideas back into the market, but I think a lot of the ideas are much better suited for a TS world.

