If we had the tech to make client hypervisors useful, we wouldn’t need client hypervisors!

Many people (myself included) have been talking up the benefits of bare metal client hypervisors and how they could play a key role in the success of desktop virtualization. Unfortunately we’ve yet to see client hypervisors enter the mainstream. While there have been some smaller releases here and there, the big companies (Citrix/Microsoft/VMware) don’t have anything yet. (Citrix has released a tech preview, Microsoft has said nothing, and VMware talked them up for a while but has since gone cold.)

And while we argue about the merits of client hypervisors, desktop virtualization is only in use by 2-3m of the 500m corporate desktop users in the world (according to Gartner’s Mark Margevicius at the IT Operations & Management Summit last month). There’s a lot of talk about what it would take to get the other 99.5% of the world’s desktops virtualized, and client hypervisors have been suggested as an enabling technology that could address a big chunk of those.

But over the past few months it’s become clear that client hypervisors aren’t the biggest thing holding back desktop virtualization. Contrary to what I’ve written before, I now believe that even if a “perfect” client hypervisor magically appeared today, we’d still be no closer to virtualizing the “other” 498m users. This is because the complexities keeping those folks physical have nothing to do with how good client hypervisors are.

(For those who haven’t been following this conversation over the past 18 months, this is the point at which client hypervisor vendors will scream, “THAT’S WHAT WE’VE BEEN F***ING TELLING YOU FOR THE PAST YEAR. IT’S NOT ABOUT THE HYPERVISOR, IT’S ABOUT EVERYTHING THAT SURROUNDS IT!”) Okay fine. So now that I get it, let’s explore this notion a bit more...

Building a good client hypervisor is not the same thing as building a useful client hypervisor. A good client hypervisor would have broad hardware support, great performance, a seamless user experience, solid security, etc. A useful client hypervisor would allow dynamic desktop composition, seamless offline and online flow, and the general ability for a user to work in any context and on any hardware. A useful client hypervisor has to be good, but a good client hypervisor isn’t necessarily useful. (I’m not picking on any vendors here. I think they all know this and they’re all trying to build products that are good and useful.)

Of course the “good versus useful” conversation is not client hypervisor-specific. “Good VDI” provides a good remote user experience and good server utilization, and we have a lot of good VDI solutions today. Unfortunately they’re only “good” for 0.5% of all corporate desktop users. “Useful VDI” would take the tactical stuff we have with Good VDI and add dynamic desktop composition, simple management, great user personalization, multi-modal flow, etc. And while some vendors are creating useful pieces, no single vendor has nailed everything we need for Useful VDI.

Circling back to client hypervisors, we can easily list the characteristics that would make a client hypervisor good and the characteristics that would make it actually useful.

Characteristics of a good client hypervisor

  • Runs on lots of different hardware (old and new)
  • VMs run with native-like performance
  • VMs can access native hardware capabilities (multi-touch trackpads, GPU, fingerprint readers, power state, etc.)
  • Supports full VM encryption, remote wipe, etc.

Characteristics of a useful client hypervisor

  • Data is continuously backed up
  • User environment is continuously replicated
  • Users can flow from central environment to client-based environment
  • Admins create a single disk image that is used for central and local desktops
  • User experience and procedures are the same for central and local desktops
  • Disk image is dynamically composited regardless of whether it’s running locally or centrally
  • Users can swap hardware with no impact

At first glance it’s easy to see that the capabilities from both lists are great. But it’s also easy to see that the “good” list is more tactical, while the “useful” list describes the things that are more meaningful to users and admins. Ultimately this means that no one really cares about the good/tactical stuff, but that instead we just want the useful things.

And looking through the list of useful things, what on that list actually requires a client hypervisor?

Can’t we just manage the client with traditional image management systems that can already account for differences in hardware while providing native device access? If we had a system that could flow settings, apps, and data around, couldn’t we just flow them down to whatever the client is? Aren’t there already systems that encrypt, back up, and sync local data with central stores? If we figure out how to handle user-installed apps and app layering and isolation, wouldn’t that work with or without a hypervisor?

If so, why would we need a bare-metal client hypervisor?

Last week we explored the possibility that VMware may have cancelled their bare metal client hypervisor initiative. When you weigh the advantages tied to the hypervisor itself against the advantages that come from the whole surrounding ecosystem, cancelling a client hypervisor (if the rumor is true) might not be a bad idea.

Maybe Microsoft is on to something too by focusing the conversation on the Windows instance instead of the client VM container?

Maybe Chetan is right and the client hypervisor really is just a dumb old typewriter with some more electronics.

Or maybe I’m wrong.

*Footnote: My (growing) list of desktop virtualization paradoxes

  • Madden’s VDI versus TS paradox: In order for VDI to make economic sense, you can only use it for situations where TS would also work.
  • Madden’s Offline Paradox: If you can take a VDI instance "offline," then why don't you just always run it offline?
  • Madden’s Client Hypervisor Paradox: If we had all the technology to make client hypervisors work perfectly, we wouldn’t actually need client hypervisors!

Join the conversation



I couldn't agree more!

The job of desktop systems administration, or SBC admin, is all about separating, cleaning, and automating the layers: app delivery, user settings, data, and OS delivery.

If you've already set up a mature environment for managing each of those layers, what's the client-side virtualisation for?  Don't do it just because it's brought to you by the letter V.  (v is the new i is the new e.)


I also agree...

However… most organizations’ current desktop delivery solutions are based on Windows XP. Some of these imaging technologies are nearing end of life/support, leaving many organizations to re-evaluate the way they manage the desktop. Are we going to purchase the next version with Win7 support, etc.? Or is there something else that can come into play?

Do we take the legacy approach to the desktop lifecycle or do we think differently? Windows 7 and desktop virtualization as a whole can be seen as the catalyst for change for a lot of organizations who are HAVING to re-evaluate the technologies they use.

Client hypervisors are one way to go, just like VDI, RDS, etc… Perhaps a blended mixture of all these solutions is the way forward for a large percentage of organizations?

It all depends on use case and the organization’s current situation.


Client Hypervisors coalesce multiple desktop/laptop machines onto a single device that can work without network connectivity.

Client Hypervisors currently address a very marginal use case, where a user is required to have more than one desktop/laptop to perform their day-to-day work. AND the user needs to be able to work without network connectivity.

Type 1 client hypervisors simply add a new option for this scenario in addition to the already existing type 2 solutions. The value add of Type 1 is SECURITY. Type 1 can provide a trusted computing platform for running multiple VMs in parallel without the risk of tampering from the host layer that is present in Type 2.

Personally, I've never seen a user who fits these requirements. But I can imagine that in the defense, healthcare and financial sectors, some organizations might have such requirements.

PS: Is the font on your website getting smaller, or are my eyes just getting worse? I had trouble reading my own words while I was typing them .... :(


@Daniel - not sure I understand end of life?

Finally a post that makes sense!

I have spent the last 3 months working on my organization's go-forward model.

It's as follows

SCCM - Desktop and Virtual desktop management

Atlantis ILIO - Enables persistent virtual desktops using locally attached storage for everything except profiles

Symantec SWV - virtual applications for conflict management

XenApp - deliver 2 legacy call center applications which are too chatty

AppSense - profile management

No client-side hypervisor!

This model will allow us to deploy 50% virtual desktops at near the same cost to acquire and manage.


Call me stupid but I still don't get it!?!?  Client Hypervisors do NOT resolve easy access to centralized data and applications.

The OS is going to become irrelevant in the future, so why the H3LL do we care to virtualize it?

It's just an extra layer of nonsense to deal with...


Wow, is Brian Madden actually posting ideas from a pragmatist point of view? Impressive.

I agree with this post.

This is why our company, RingCube, has been and will continue to work closely with our key technology partners (Intel, Microsoft, McAfee) who share a similar view of how desktop virtualization will evolve and present itself in corporate deployments.

Type 1 client hypervisors have merit academically / theoretically. Unfortunately, they have been over-hyped, over-promised and under-delivered. From a practical viewpoint, there are other virtualization technologies that deliver more value with fewer downsides and that are client/desktop-centric.

This is a very big market which means there will be several winners. When you look around, there are really interesting, useful approaches to desktop virtualization that have zero dependency on a hypervisor.

We are in an exciting time in the development and growth of desktop virtualization. Pundits, bloggers, and technology enthusiasts can create hype and awareness but not a real market. Real-world customers trying to solve real-world problems will be the ultimate judge of what wins. So far, few could point to client hypervisors as a winner in this market.

Personally, I'm pleased to see Brian Madden sharing a pragmatist's viewpoint that aligns with how customers are looking at desktop/client virtualization.  Hopefully this viewpoint is a lasting one.


@dougdooley, oh, so now that I post something that's supportive of your approach, suddenly you see me as a pragmatist? :) Little victories I guess....

Maybe there's a meta comment here about the state of the client hypervisor hype machine if I have to be pragmatic to be shocking. ;)


I am sure the consensus is that the usefulness isn't about the hypervisor; it's about the management of it.

Which is why the hypervisor should be given away for free, to infest the community with the type-1 layer from the manufacturer. That way, it's easier to offer/extend the management capabilities for it.

I have to respectfully disagree with the end of this article. Stating that a client hypervisor is not needed is one thing, but stating that it is irrelevant is another.

Essentially the client hypervisor is an extension of the workings of a server hypervisor. In the years to come it will provide an elegant way to extend SBC, and it is an essential building block for the central management of OS/app/data with local computing; it also provides the best performance for a local virtual environment.

For VDI to become mainstream, developers need to consider client computing as an option where desired (which it definitely will be for offline use, and optionally in the future for use over the LAN and WAN).

The use cases for a type-1 client hypervisor go far beyond just working on a laptop without network connectivity. It will provide the most optimal way to extend your computing environment from the server to the client, managing the OS/app/data centrally with local execution. It provides the best performance, management, and security for a local virtual desktop experience.

You can obviously build a solution without a client hypervisor; currently the flexibility it offers just isn't there because the technology is in its infancy. Therefore, if you build a solution without it right now, you're not missing anything life-shattering.

The Windows OS will never become irrelevant; it will just evolve. It is the work environment that you use. There will always be a GUI or some sort of UI you will need to interact with your apps/data. The OS is essentially another app.

The OS today is a bloated workspace used to connect us with the internet, apps, and data. MS has to dedicate some hard work to designing it for the computing of the future. Regardless of what some people believe, they have been doing an alright job so far considering they are a monopoly.

Bottom line: The problem with traditional desktops is that the desired management of all of the instances is not there. We need to focus all of our efforts on the "plumbing" to ensure that we can centrally manage the workspace (OS)/apps/data in the server/client environment that we live in. We want access to everything, everywhere, at any time, with the same look/feel, so we have to focus on what technologies we can utilize to get this across the board and manage it centrally.

This is the goal of a type-1 client/server hypervisor, as with all other current and future VDI technologies.


I totally agree with @icelus.

There is way more to Type-1 hypervisors than most people are even thinking today.

This would allow you, for example, when you get home and need to render a video, to send your 'laptop' VM to some provider in the cloud that would change your VM settings to 32GB RAM and 8 vCPUs, so you render the video 10x faster than on your 'PC'.

Or what about what I call 'a living datacenter' when your VM, once in the datacenter, could be moved seamlessly to datacenters around the globe where power is cheaper at any given time? So the VM follows the 'sun' (in an opposite way as power is normally cheaper at night)?

There are several other examples. Many actually. Some under NDA as we are actually working on them. :-)

Once Type 1 hypervisors become embedded on every computing device, years from now, people would not even care/remember how it was in the past. The ability to run your 'computing system' on any device, any form factor, with different processing power/characteristics is here to stay and Type-1, once perfected, can give you that.


I completely agree it's all about management. A client hypervisor makes management easier (less drivers to worry about) but doesn't actually solve the management problem itself. It's still a Windows desktop - it still needs patches, device drivers for other devices (printers, scanners, mobile devices), apps with kernel mode components, etc... Not everything can be virtualized into discrete layers and streamed.

On a different note, I didn't see much about security in this piece, which is a critical piece to doing "offline VDI". If my entire desktop is now in a highly portable file, encryption and protection are everything. Which is, again, a management function.

So, again, management is everything, with or without client hypervisors. And I *think* that was Brian's point :o)


Indeed, management is the key for IT pros, and I appreciate Jon's comment regarding security. To be transparent, I work for Virtual Computer, and we are deploying our client hypervisor / management console, NxTop, in production today with many different use cases.

There is a lot of conjecture around desktop virtualization, but the reality is that enterprises have many more needs than Brian's "useful" list, including the end-user experience. Type 2 sits on top of the OS and cannot deliver the security of bare metal, and pure VDI does not address mobility or disconnected requirements for end users. From a management perspective, though, the client hypervisor provides distributed computing from a central console and can add value to VDI tools.

We do not preach one size fits all, client hypervisor or VDI. Quite the contrary: we have existing customers who are innovative enough to recognize the real value in an end-to-end solution incorporating multiple facets of desktop virtualization. Is a client hypervisor needed? Well, our existing customers are all quite happy having it as part of their solution for efficient and cost-effective desktop virtualization management.


Also, in reference to why MS isn't interested in type-1 client hypervisors.

My take is that since the purpose of client hypervisors is to extend the benefits of SBC execution to local execution, what does MS have to extend? RDS?

MS has chosen an extremely simple approach with RDS, then using Citrix to enhance and deliver a more complex environment. They choose not to care about client hypervisors because it's Citrix's problem to tackle.

Hyper-V was only created to take market share from VMware. If Citrix could have done it alone, I bet you Hyper-V wouldn't be around.

Which now brings me to VMware. They have their own SBC solution, so why would they want client hypervisors, only to then change their mind and choose not to extend their solution to the client?

IMO they had an extremely hard time developing a client hypervisor BECAUSE of their vision. If they want everything SBC except for offline, why would they contradict themselves and allow the choice of SBC and local (CBC)? Why would you ever sync it back to the datacenter? Contradictory indeed...

Let me ask you a question, when choosing SBC do you choose it because EXECUTING it on the server would be better than EXECUTING it on the client? Or do you choose it because MANAGING it on the server is better than MANAGING it on the client?

IF it was just as easy to MANAGE client based computing as SBC, would you still prefer executing everything on the server or on the client?

NOW, think of the use cases for SBC. Limited? Yes. Still useful? Absolutely.

I keep re-iterating this, but the central management of Server Based Computing and Client Based Computing will offer the BEST choice for any enterprise.

What everyone thinks about client hypervisors, including myself, is actually irrelevant. What enterprises choose now is actually irrelevant as well, because they are not the majority, and desktop virtualization technologies are still in their infancy because the so-called "plumbing" is not worked out yet.

The "plumbing" is actually the most important piece of the puzzle. I would prefer it to be thought out and done elegantly, properly, and practically; otherwise I might get my toilet water from my tap.

FYI - I have no relation to any Desktop Virtualization company therefore my opinions are not biased towards any method. My biased opinions are created on my own.


How does this debate change when in, 24 months perhaps, we have 99.99% network connectivity, much like we have 99.99% dial tone on phone networks today?

Does Google actually conquer the world, find us always "connected," and deliver most of what we need from a browser, on a FREE, open OS? Or are there just large TS or VDI "farms" where we buy "desktop" like we pay an electric bill?

Is the desktop a NOUN that we BUY, or a VERB that we DO ?

That's what keeps me up at night. Well, that and re-runs of Law & Order.



Brian, I think you raise some interesting points here, though I am not sure why putting client hypervisors on trial is the focus of the piece.  The real point (much of which you ultimately covered) is figuring out what capabilities are needed to build a next generation desktop management framework that will save IT folks time, be more friendly to end-users, and improve data security.  At Virtual Computer, we determined that a client hypervisor is an enabling technology that, when combined with the right management tools, can accomplish this.  We think it is a pretty good way, but it is certainly not the only way.  That said, we haven’t seen a better one emerge yet.

We will be the first to admit that very little of what we are doing with NxTop cannot be accomplished already with existing tools.  However, what we have found is that by taking much of the management and security out of the OS and running it at the hypervisor level, much of it can be done more efficiently, elegantly, and reliably than with traditional approaches.  Full disk encryption and data backup products exist.  How many people use them?  Tools for patching PCs and dealing with driver/platform variability--good ones--have existed for years.  Yet, if you surveyed 100 IT folks and 100 end-users, how many would tell you that keeping systems patched and migrating users across hardware platforms is a piece of cake for everyone involved?

There is a level of time investment required to adopt a client hypervisor technology, but once you do the age-old IT challenges I mention above--and many more--get orders of magnitude less complex.

If other vendors can accomplish the same objective without a client hypervisor, I say go for it.  There will undoubtedly be some benefits to a non-hypervisor approach.  However, there are certain aspects of a client hypervisor-based approach that are going to be tough to equal.  Some examples include:

- You effectively get hardware portability “for free.”  Sure, there are ways to juggle drivers inside of the image or as layers to address image portability.  But isn’t eliminating the need for the IT staff to worry about it at all simpler and more elegant?  

- Resiliency and recoverability.  By managing the desktop from a virtualization layer (be it on a server or a client hypervisor), your management endpoint survives a catastrophic failure such as a BSOD.  Tools that rely solely on making Windows do things it wasn’t designed to are more likely to *cause* a BSOD.

- Multi-OS execution.  I actually didn’t think “workstation consolidation” was going to be a big use case when we started, but surprisingly it is.  Between isolating work from personal, isolating local sessions from server-based sessions, application compatibility, multiple desktops with varying security policies, developer sandbox environments, and so on, there is a lot of interesting stuff here that many organizations will find useful even if it isn’t necessary for all users.

- A managed endpoint solution that is ready for anything.  Unlike Chetan, I don’t think Windows at the endpoint is going anywhere anytime soon.  However, if you try to engineer a management solution that is entirely based on making Windows do unnatural acts, you are probably going to be missing a pretty big boat as the desktop evolves in the coming years.  With a client hypervisor-based approach, if Windows remains the dominant model, you have a better management approach than you did in the past.  Should viable and useful alternatives to locally executed Windows emerge (e.g., VDI, Terminal Services, Chrome OS, other Linux distros, WebOS?, etc.), the client hypervisor is a Swiss Army knife that can bridge these different models on a single, fully-managed platform.  You don’t have to guess what will ultimately work best.  Run one or run several.  Mix and match.  Add and remove.

When I review your characteristics of a “good” and “useful” client hypervisor, I honestly believe we are closer than any other vendor in the marketplace.  We have pretty much all of it with the exception of an integration model with VDI.  Given that we already have a management solution that creates, executes, and manages VMs on a Hyper-V server, we are one good partnership away from filling in that piece of the puzzle.  Stay tuned.

Doug Lane

Virtual Computer, Inc.


@t.rex - With all due respect there are just too many variables to consider to ensure a remote WAN connection to an entire SBC environment will be as good as a local experience.

Even if one day a solution is possible, there is an extremely diverse ecosystem of different client devices that can be tapped for computing, which would just be wasted.

The most elegant approach is to understand the benefits of both local and remote execution and have a central management solution in place to dynamically assemble/manage/secure your work environment regardless of:

- Where you are

- Whether you're offline or online

- What device you are using

- When you might need access

To ensure the work environment (workspace-OS/apps/data) can be accessed with the best performance, security, and management that is offered.

Whether it's Server Based or Client Based those shouldn't matter, what only matters is the system management and user experience. That is the only thing you can measure as value to an enterprise.


The real benefit of a client hypervisor is providing a standardized hardware interface across a range of devices so you can do single-image management.  Claudio and Doug both hit on this point: a client hypervisor provides a layer of abstraction so you can manage one image and use it on a range of different hardware.  That plus out-of-band management (the ability to manage the whole environment independent of the content) are the reasons we (MokaFive) are using a client hypervisor today - it provides a nice platform for single-image and out-of-band management.

Brian is right that most of the actual benefits - security, encryption, backup/sync, updates - are all about management and do not strictly require a client hypervisor, although they become much easier and more secure with a client hypervisor.  (The one exception is you cannot achieve cross-platform in any real way - e.g. corporate Windows on a personal Mac - without a hypervisor.)  The client hypervisor technology piece is very boring in and of itself - the interesting part is how you can leverage that technology to more effectively manage the desktop.

John Whaley

CTO, MokaFive


For all those who say that a client hypervisor "whitewashes" the hardware so that any disk image can run on any client... is that true?

Sure, it's true for server virt. You can run the same disk image on VMware Workstation running on an Intel box that you can run on an HP AMD server. But for the client, I dunno... I mean, if you have the ultimate portability, then you "lose" the real hardware, like power status, GPU access, multi-touch pointers, fingerprint readers, etc.

If you paravirtualize these, now the hypervisor vendors have to start picking and choosing what they'll support.

And if you simply pass this hardware through, now you're back to the same problem as the physical world where you need to manage different sets of drivers for different clients, which means you only get "portability" between a narrow hardware set.

Of course there are solutions that manage different driver sets for different clients, but if you use those, then why do you need a hypervisor? (Which is the whole point of this article in the first place.)


So, I am writing this from a Windows 7 VM running on NxTop Engine on an HP 6730b laptop.  My touchpad works, including multi-touch.  I can even move my finger up and down the specially marked area on the right side of the touchpad for scrolling.  If I look down in the Windows system tray, I can see how much battery power I have left just like native Windows.  If I plug in my power adaptor, status changes accordingly.  When I notice that I am not running native, it is often because something is working better than native.  Roaming between wired/wireless networks and suspend/resume are a couple of examples that come to mind.

Graphics.  Windows 7 Aero is turned off.  We don’t try to explain it away by saying nobody cares.  There is actually some useful stuff in there with Win7, and we need to support it.  We will, but not at the expense of image portability.  Some Type-2 products already do 3D in virtualization.  We are going down a similar path.  At Citrix Synergy, we were demonstrating 3D in multiple concurrent VMs live in our booth.

If you are a gamer or a CAD professional who is trying to eke every last bit of performance out of the GPU, you are probably not a candidate for a client hypervisor until GPUs become more virtualization aware in hardware.  However, mainstream corporate PC users are not going to feel like they are missing anything with a client hypervisor, particularly in light of the fact that they are gaining major benefits in other areas.  Even today, our graphics are really friggin’ good.  I just went to YouTube and watched the Toy Story 3 trailer in full screen 720p HD, and it looked fantastic.  Is the CEO going to care that he doesn’t have “direct access” to the GPU (or even know what that means) if his kid can watch Toy Story in HD?

In terms of breadth of platform support, we are a few months away from making it a total non-issue.  We already run on pretty much any system with an Intel VT-x or VT-d enabled processor and either Intel or NVIDIA graphics.  Our next major release (targeted for late July beta / Sept. GA) will add AMD processor support and ATI graphics.  What's the limitation at that point?

We are not going to pick and choose.  We are going to run on anything our customers need us to run on so long as it has a processor with virtualization hardware extensions.  When we engage with a customer, we don’t actually point them to a list of platforms we have anointed as client hypervisor worthy.  We have them download and run a tool (Try it: https://bit.ly/cUgZDo) on their PC, and if we aren’t already compatible, we figure out a way to get compatible.  We take this very seriously.  Literally, if someone (even one of our free users) runs the tool and it comes back not compatible, our whole management team gets an e-mail because we want to know why.
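Doug's compatibility tool is proprietary, but the core requirement he describes, a processor with hardware virtualization extensions, is easy to sketch. Below is a minimal, hypothetical example (not the actual NxTop tool) that inspects the CPU flags in /proc/cpuinfo on Linux: Intel VT-x shows up as the "vmx" flag and AMD-V as "svm". A real compatibility checker would also verify BIOS enablement, chipset features like VT-d, and graphics support.

```python
# Minimal sketch of a client hypervisor compatibility check (Linux only).
# Intel VT-x exposes the "vmx" CPU flag; AMD-V exposes "svm".
# This only checks CPU capability, not whether the BIOS has enabled it.

def virtualization_support(cpuinfo_text: str) -> str:
    """Return 'vt-x', 'amd-v', or 'none' based on the flags line of cpuinfo."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if "vmx" in flags:
                return "vt-x"
            if "svm" in flags:
                return "amd-v"
            return "none"
    return "none"

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            print(virtualization_support(f.read()))
    except FileNotFoundError:
        print("unknown (no /proc/cpuinfo; not Linux)")
```

Run on a 2010-era machine, this is roughly the go/no-go gate a bare metal client hypervisor installer has to pass before anything else matters.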

But none of this matters because it doesn’t say Citrix or VMware on our door, right?  ;)


Brian: If you are doing hardware passthrough on a variety of devices, you lose a lot of the benefit of single-image management, so there is not a lot of point to having a client hypervisor.  If you are running on all standardized hardware without any intention to diversify, again there are other solutions and you don't strictly need a client hypervisor.

But like Doug mentioned, client hypervisors provide an abstracted view of all hardware you would care to use.  In a type-2 world, the drivers are provided by the host OS; in the type-1 world, it uses paravirtualized drivers and/or dom0 drivers.  If you look at VMMs today (e.g. VMware or Parallels) they already provide very good support for abstracting GPU (3D), power (battery), smartcards, USB, network, etc.  Sure, there is a performance hit to virtualizing devices, just like there is a performance hit to virtualizing server workloads.  But today most server deployments are virtualized because the improved manageability trumps the (small and shrinking) performance hit.  I think the same will happen on the desktop, because the management problems are even bigger on the desktop than they are on the server.

John Whaley

CTO, MokaFive


I have not researched RingCube enough to fully understand it. At first glance it looks like a great option for bring-your-own-PC.

Anyone using it?


Bill Gates and Steve Jobs did a fairly famous session together at D5 in 2007, and I tell you what, this debate is not new.


Three screens in our lives:

a small one we use to communicate,

a medium one we use to do work,

a large one we use to entertain.

Now, I can use one to do the others' jobs, but they are not ideal. And the end state these guys see is a blend of local, rich, robust compute + cloud services. Yes, they used the word cloud in 2007.

So, to be honest, I think we are all saying the same thing in regard to client hypervisors, VDI, or SBC... and what we are saying is YES. All of the above.

Now, the intelligence to know when, how, and how much of each based on what we need to do... that is software I would like to see ;)



@t.rex - thx for sharing the D5 interview segment. I actually think segment #3 is also very interesting about "rich local" with "helpful cloud" and the 3 "natural form factors" screens (small, med, big): d5.allthingsd.com/.../video-steve-jobs-and-bill-gates-together-part-3-of-7

Like you, I'm in agreement with Jobs & Gates about the "3 natural form factors."  What's also fun to see in D5 is  that Steve Jobs says "You know, it’s interesting. The PC has proved to be very resilient because, as Bill said earlier, I mean, the death of the PC has been predicted every few years.... I think the PC is going to continue...be something that most people have, at least in this society."

Ironically, a few years later, at the most recent D conference after he launched the iPad, Steve Jobs joined in the "death of the PC" predictions with his "PCs are trucks" and "most people need cars, not trucks, unless you live on a farm." Just a tad self-serving, no?

@Watson - if you are interested in RingCube, I recommend attending our next live webinar & demo on July 14 at 10am PDT. The topic will be focused heavily on Windows 7 and mobile workers. There are over 140 registered attendees already, but if you desire, you will have an opportunity to interact 1-on-1 with folks at RingCube in a workshop following the webinar: www.ringcube.com/.../events_and_webinars.html

Hope this helps,

Doug Dooley

VP of Product Mgmt

RingCube Technologies


Some of these were touched on, but I've always viewed the advantages of client hypervisors as being all the things they could enable you to do outside the OS, and even remotely... so if Windows won't boot or is having serious problems, what can the user do, and what can IT do remotely?  With a hypervisor it seems there are a number of valuable new options.

Things like snapshotting the whole VM so you can easily back out undesirable changes (and perhaps other backup/recovery operations that could occur outside the guest).
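As a toy illustration of that snapshot/rollback idea (a made-up Python model, not any vendor's API; real client hypervisors implement this with copy-on-write disk chains):

```python
# Toy model of whole-VM snapshot/rollback semantics. The "disk" is a
# plain dict standing in for the guest's virtual disk image.
import copy

class ToyVM:
    def __init__(self, disk_state):
        self.disk = dict(disk_state)    # current guest disk contents
        self.snapshots = {}             # name -> frozen copy of the disk

    def snapshot(self, name):
        # Freeze the current state, outside the guest OS entirely.
        self.snapshots[name] = copy.deepcopy(self.disk)

    def rollback(self, name):
        # Back out everything written since the snapshot was taken --
        # this works even if the guest OS no longer boots.
        self.disk = copy.deepcopy(self.snapshots[name])

vm = ToyVM({"C:\\Windows\\system32\\driver.sys": "v1"})
vm.snapshot("pre-update")
vm.disk["C:\\Windows\\system32\\driver.sys"] = "v2-broken"   # bad patch
vm.rollback("pre-update")
print(vm.disk["C:\\Windows\\system32\\driver.sys"])  # v1
```

Because the snapshot lives below the guest, IT could trigger the rollback remotely even when the user's Windows install is unbootable, which is the whole appeal.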

Possibly other security functions in the hypervisor, such as AV and firewall.  Or eventually using a VM "record" capability to analyze system changes for troubleshooting or security analysis.

There has been some talk about multiple VMs as a solution to locking down corporate images, like a personal and a corporate desktop. Personally I think that is an ugly and confusing solution for most users (trying to figure out which window/desktop to do what in, switching back and forth). But with the right integration I could see sandboxing some apps in an isolated/disposable VM, with enough desktop integration that it looks like all one desktop... which might even be the job of a third VM session: just act as a desktop/window manager while the actual work is performed in isolated VMs.

I could see a scenario where you might even be able to upgrade remote laptop users from XP to Win7 remotely with a client hypervisor (and of course all the other layering stuff you talked about): background-download the new image using idle bandwidth; when it's all there, extract or layer all the user "personality" one way or another and apply that layer to the Win7 gold master the next time they log in... oh, and if there are problems, switch right back.

In a desktop environment where you aren't dealing with mobility issues, I think there are even more eventual uses possible, like cycle scavenging: move batch or HPC workloads to desktops with idle resources as VMs when the user has gone home. Ultimately, maybe even treat desktops as a pool of machines and vMotion running sessions around if the user moves, or even just for power management... vMotion to a datacenter blade and put the desktop machine to sleep while the session keeps running, available for remote access.
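The cycle-scavenging idea could be sketched roughly like this (hypothetical Python; the field names like `user_logged_in` and `free_cores` are made up for illustration):

```python
# Toy scheduler for "cycle scavenging": place batch jobs on desktops
# whose users have gone home and that have enough spare cores.
def place_jobs(desktops, jobs):
    """desktops: list of dicts with 'name', 'user_logged_in', 'free_cores'.
    jobs: list of dicts with 'name' and 'cores'.
    Returns {job_name: desktop_name} for every job that fits."""
    placement = {}
    idle = [d for d in desktops if not d["user_logged_in"]]
    for job in jobs:
        for d in idle:
            if d["free_cores"] >= job["cores"]:
                d["free_cores"] -= job["cores"]   # reserve the cores
                placement[job["name"]] = d["name"]
                break                             # first-fit placement
    return placement

placement = place_jobs(
    [{"name": "pc-101", "user_logged_in": False, "free_cores": 4},
     {"name": "pc-102", "user_logged_in": True,  "free_cores": 8}],
    [{"name": "nightly-batch", "cores": 2}])
print(placement)  # {'nightly-batch': 'pc-101'}
```

A client hypervisor is what would make this safe in practice: the batch VM runs beside the user's desktop VM without touching it, and gets evicted when the user comes back.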

Ok, some of these may be a little out there and I realize I oversimplified a lot... just wanted to get some counter arguments out there...


Brian, I agree (not surprisingly) with the premise that a hypervisor isn't necessary to get centralized image management, data protection, and good user experience -- along with the other criteria you list in your post. In fact, you did a demo of one such solution a few weeks ago...

See www.brianmadden.com/.../video-demo-of-wanova-s-offline-laptop-disk-management-thing-it-s-really-cool.aspx.


It is a common understanding that the hypervisor creates separation and isolation between the underlying hardware and the VMs, which has its benefits.

However, another argument can be made: the VM is becoming a new standard container for the OS, and vendors see this, which is why they are now developing VM-specific solutions.

One example that comes to my mind is McAfee's A/V solution with XenDesktop. The capability to offload workloads such as A/V scanning, OS patching, and data backup from a VM on the client to a VM on a server provides the most robust solution out there.

I actually would refute Chetan's view that client hypervisors are just an electronic typewriter. In fact, any solution that doesn't embrace and enhance VMs is just an electronic typewriter.