A deeper look at VMware's upcoming bare-metal client hypervisor

At VMworld last month, VMware announced several future capabilities of their VDI product, including a bare-metal client-side hypervisor. The idea behind this is that if you have a hypervisor running locally on a client device, you can get the "best of both worlds," combining the centralized management of VDI with the performance and flexibility of local computing. Having a local / offline capability will be an important feature of future VDI environments. (Check out Recommendation #3 in the 2010 VDI+ vision blog post.)

There are several advantages to running a hypervisor on a client device:

  • The hypervisor provides generic hardware to the VM, so a single disk image can be used on very different types of devices.
  • Since the VM is running locally, it works offline, and you don't have to worry about thin client remote display protocols.

Of course products like VMware ACE and Microsoft Enterprise Desktop Virtualization (Kidaro) have allowed users to run VMs on their clients for years. But those products were "Type 2" hypervisors (or virtual machine monitors), which install on top of an existing OS like a regular application. VMware announced that they will release something that's more like a "Type 1" hypervisor, where the hypervisor itself is the actual OS (like ESX Server). Type 1 hypervisors are typically viewed as having better performance and just being more of a "real" solution in general. On the server side, Type 1 hypervisors are absolutely dominating the market. On the client side, however, there aren't any Type 1 hypervisors in use.
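Interestingly, the Type 1 / Type 2 distinction is mostly invisible from inside the guest: both types advertise themselves the same way to the virtualized OS via the CPUID "hypervisor" bit. As a rough sketch (assuming a Linux guest, where that bit shows up as a flag in /proc/cpuinfo), you could check for it like this:

```python
import os

def running_under_hypervisor(cpuinfo_text):
    """Return True if the CPU flags advertise a hypervisor.

    Both Type 1 and Type 2 hypervisors set the CPUID "hypervisor"
    bit for their guests; Linux surfaces it as a flag in
    /proc/cpuinfo, so from inside the VM the two types look alike.
    """
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return "hypervisor" in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__" and os.path.exists("/proc/cpuinfo"):
    # On a live Linux system, check the real CPU flags.
    with open("/proc/cpuinfo") as f:
        print(running_under_hypervisor(f.read()))
```

The difference between the two types is in what sits *under* the guest (a full host OS versus a thin bare-metal layer), not in what the guest sees.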

Why would you want a Type 1 hypervisor on a client?

VDI is all about management, especially when you're talking about client-based VDI. Everyone reading this can understand the value proposition of being able to create a single disk image that could be used by multiple users. And everyone reading this can understand that use cases exist where you'd want the disk image to run locally on a client device. So how can we combine these two?

Citrix Provisioning Server (Ardence) showed a lot of early promise. With Provisioning Server, you can create a single disk image that's shared by hundreds or even thousands of users. This works perfectly as long as the client devices that you're deploying this image to are identical. (Or at least identical enough to use the same image.) But what if you have several different types of client devices? This means that you need to build several different images, and it means that you won't be able to support any random client device.

"No problem," people think, "We'll just throw a hypervisor on the client devices and use Provisioning Server with a VM on the client instead of the bare-metal client." While this is a simple theory, it's not so easy in practice. Existing Type 2 client-based hypervisors require an underlying base OS. So how do you manage, deploy, maintain, and patch that OS? And if you're taking the time to manage that, then why even bother with the VDI instance? If you think about it, managing a local OS on a client device while also managing a VDI OS is actually the worst of both worlds. Now you're managing two OSes per user instead of one.

This is the exact problem that a Type 1 bare-metal client hypervisor can solve. In this scenario, there's only one OS to manage. (This is the point at which purists shout, "Hey! The Type 1 hypervisor is an OS, so you're still managing two OSes." I guess that's technically true, but the Type 1 hypervisor is much easier to manage than a "real" OS, and the general idea is that it would be transparent.) Even though a Type 1 hypervisor is technically a piece of software, it can be managed as if it's an extension of the hardware.

It's probably worth mentioning that Type 2 client hypervisors will continue to exist even once Type 1 hypervisors for client devices are released, because each type has its own use case. Type 1 hypervisors are great for when you want to replace (or provide) the only OS that's used on a client. They're great for when you want a user to turn on a machine and only see a single OS that looks and feels local.

Type 2 hypervisors are (and will continue to be) great when you want a user to have access to their own local desktop OS in addition to the centrally-managed corporate VDI OS. This could be for an employee-owned PC scenario, or it could be a situation where you have contractors, etc., who need access to their stuff and your stuff.

Perhaps some day we'll see a hybrid hypervisor that combines the best of Type 1 and Type 2 hypervisors. Maybe you could have a Type 1 hypervisor that boots a disk image from a USB stick, but that can also (at the same time) boot the contents of the client's physical hard drive into a second VM which is then accessed from within the first primary VM. Sound tricky? Maybe, but VMware is already doing something similar with their Fusion product. (Fusion is like VMware Workstation for Mac.) Since Macs can run Windows now, a lot of people use something called "Boot Camp" to configure their Macs to be able to dual boot between Windows and Mac. With the new version of Fusion, you can boot to the Mac OS, run Fusion, and then boot up your Windows partition live in a VM while running the Mac OS. (And this is non-destructive. You can still then reboot your Mac and boot into the Windows partition natively.)

Challenges of Type 1 client hypervisors

There's a reason that Type 1 hypervisors have existed for five years in the datacenter while they're only just now coming out for client devices, and that's because building a Type 1 client hypervisor is actually really hard! It's not as simple as just installing a server hypervisor on a laptop. A Type 1 hypervisor running on a server is built to host multiple VMs, and the design goals center around making each VM seem like a "real" server on the network. A client-side Type 1 hypervisor would have a completely different goal, namely, that the VM running on the client should "feel" like a normal local computer. In fact, the user probably shouldn't even know that a hypervisor is there or that they're running a VM.

That said, consider the following challenges that a Type 1 hypervisor running on a client device would face:

  • Hardware compatibility. In the grand scheme of things, there aren't that many different server models in the world. Since a Type 1 hypervisor is the actual OS, it needs to support (with drivers, etc.) whatever hardware it's installed on. And there are probably, what, 50 times more laptop and desktop models in the world than servers?
  • Local graphical performance is important on a client device, so the hypervisor needs to make sure it exposes the local GPU and graphics capabilities to the VM.
  • If the client device is a laptop, then the VM running needs to know that it's a laptop. This means that the hypervisor needs to expose the battery and power states to the VM, it needs to expose the power savings and CPU speed stepping technology, and it needs to expose the suspend / resume / hibernate states.
  • USB ports and devices must be passed through perfectly to the guest VM.
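To make the laptop point concrete: a Linux guest, for example, discovers its battery through sysfs, so the client hypervisor has to surface the host's ACPI battery state there or the guest simply behaves like a desktop. A minimal sketch of the guest-side view (the /sys/class/power_supply path is the standard Linux one; the parameterized root is just for illustration):

```python
from pathlib import Path

def battery_levels(sysfs_root="/sys/class/power_supply"):
    """Return {supply_name: charge_percent} as a Linux guest sees it.

    If the client hypervisor doesn't pass the host's battery
    through, this directory is empty and the guest has no charge
    meter and no low-battery suspend behavior.
    """
    levels = {}
    root = Path(sysfs_root)
    if not root.is_dir():
        return levels
    for supply in root.iterdir():
        cap = supply / "capacity"
        if cap.is_file():
            levels[supply.name] = int(cap.read_text().strip())
    return levels
```

The same pattern applies to lid switches, CPU frequency states, and suspend: each is a piece of hardware state the hypervisor must relay into the VM for the guest to "feel" like a real laptop.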

VMware's client hypervisor plans

Taking all that into consideration, VMware announced that they would release a "bare metal" hypervisor for client devices. At this point we don't have a ton of details, but between the press release, the onsite VMworld lab, and talking to VMware employees, I think we can get a pretty good sense of what they want to do. This is part of their larger "vClient" initiative, which is their marketing way of talking about how a user's desktop could follow them to any location.

The most important point about VMware's client hypervisor is what it's not. VMware's client hypervisor is not some sort of "ESX for desktops." ESX has been designed, built, and refined to be a server hypervisor, and the specific requirements of a client-based hypervisor are completely different. (Think "graphic performance" versus "network performance," etc.) Sure, there are elements and know-how in ESX that can be used as a foundation for a client-side hypervisor, but it's not a straight port of the product.

In addition to ESX, VMware was also able to draw from products like ACE (a centrally-managed Type 2 hypervisor) and VMware Workstation when designing the client hypervisor.

From a pure technical standpoint, it appears that VMware's client hypervisor will be something like VMware Workstation for Linux running on some version of Linux, with the central management of something like ACE. I asked some folks from the desktop team point-blank whether their client hypervisor was really a Type 1 hypervisor, or a Type 2 hypervisor running on Linux, and they did admit that they built it on Linux capabilities, but they stressed that they want to downplay that.

They don't really want to mention that it's a hypervisor running on Linux because they don't want people to think that it will be a big problem, or that you have to install Linux first and then install their hypervisor. They want people to know that it's a single "thing" you install, and whether or not that thing includes some Linux code shouldn't matter to the user.

On a side note, while I agree with this 100%, I think it's ironic that VMware attacks KVM by saying that since it runs on Linux, it's extra complex and you need to install Linux first, and now all of a sudden that they have a solution like this, they hide it and say it's no problem. Because with KVM, it's also no problem. You pop in the DVD and follow the prompts. The fact that it runs on Linux is also 100% transparent to the user.

But for VMware's client hypervisor product, starting from Linux certainly makes sense. Right off the bat you've got a huge device compatibility list, and you've got a desktop OS that understands batteries, GPUs, and the other essential elements of a desktop OS.

It's too early to know what devices VMware will officially support with their client hypervisor. Right now they're looking at having a focused group of certified devices, but it will be possible to install this thing on other devices too. They're thinking that they will support several deployment options, like locally-installed on disk, USB-stick based, and even some kind of embedded "ESXi-like" capability.

The competition

As I wrote towards the beginning of this article, there aren't any mainstream Type 1 client hypervisors. When talking to the VMware folks at VMworld, they repeated several times that they've put a lot of work into this, and they feel that they're really ahead of the competition.

In the Type 1 client hypervisor space, there are a few other companies and products to consider, including Virtual Computer (whose NxTop product is built on the Xen Client Initiative work) and Neocleus.

Of course there are a lot of non-bare metal Type 2 hypervisor products out there too, like Microsoft Virtual PC / Kidaro, VMware ACE, MokaFive, and RingCube.

Join the conversation



You can inject multiple hardware drivers into it, so you can have different NICs, video cards, etc.

Also, higher versions of XenDesktop have provisioning to endpoints, or provisioning to VMs, and a license for XenApp on them to customize disk images for different roles in your organization.


Oh yeah, I know you can inject multiple driver sets, but that's the problem--the fact that you have to do that. If you throw a hypervisor on the endpoint, then you don't have to do any of that. Your single VM image just "works" right away without any up front effort.

I think the driver overloading of Provisioning Server is cool for servers where you have a known quantity of different models.. but if you tried to use that for a bunch of unknown desktops... it'd be impossible!

What are your thoughts on this? It sounds like a dream come true. One image for notebooks and one for desktops. It's provisioned and has everything broken out cleanly, from the applications to the users' files. Now if a user gets a virus, loses work, or the machine is acting weird, they can do a rollback restore or whatever. I like that you can take a single image to any endpoint device.
Great news for OS manufacturers, in particular Microsoft and the various Linuxes, as this now pushes the hardware compatibility issue to the client hypervisor. Ideally it should be the hardware manufacturer, not VMware, Citrix or others, who provides the client (or server) hypervisor, and they take responsibility for their hardware.
NxTop is actually the product name of the company called "Virtual Computer" that I referred to. It's based on the Xen Client Initiative stuff with some management capabilities thrown in. (It's also very similar to Neocleus' endpoint virtualization product.) So yeah, I love these things, and as this article outlines, they will be a key part of VDI+ environments moving forward.
It's actually horrible news for Microsoft, as with a client hypervisor they lose control of the desktop, and now there is a middle tier between them and the hardware manufacturers.

It seems that NxTop and Neocleus are very similar. However, NxTop offers the management side, while Neocleus runs the VM in a local container environment.

How can you tell if your existing notebooks and desktops are a Type 1 or Type 2 Hypervisor?

How can you tell if it's Type 1 or Type 2? I would say it is Type 2 if the user has access to the underlying OS (i.e. the host OS in that case), which therefore needs to be managed by the administrator. It is Type 1 if the underlying "OS" is invisible and unreachable for the end user and therefore can be left more or less unmanaged by the administrator.
I doubt this will ever happen. It's very, very difficult to create drivers for all the hardware out there, even for Microsoft - see all the problems they had with Vista. This is one of the reasons Linux is still not so popular on the client-side, and Apple isn't even trying.
Brian, while it's a lovely concept, I doubt it will happen. For a hypervisor to create such a "common platform" it will need to emulate the behavior of a certain set of hardware devices on top of the actual devices available in the client. I don't think such emulation can be performed without a substantial performance hit. Also, it would require the hypervisor to directly support all the myriad devices out there - which would make it as difficult to manage as any desktop OS.

I disagree - with the advent of multi-core, x64 client-side CPUs, I believe that the performance hit would be similar to that of the early (2.x) days of ESX.

VMware has done a good job of partnering with the Big 3 (IBM, HP, Dell) to support their product lines, in turn covering a fair amount of ground in the server space, and it's reasonable to assume that they could extend this relationship further to these vendors' client device offerings.

It would be a bold new world, and it would drastically change the way we look at rolling out SOEs!

Brian touched on this, but the key difference between a server hypervisor and a client hypervisor is device drivers – everything from power management to graphics subsystems. The base-level drivers in Windows cannot use the additional hardware capabilities of individual chip and hardware vendors. We don't want to deal with maintaining separate images for each combination of hardware drivers we have across all the machines we are running. Hence we need a way of keeping the image and dealing with all the hardware variants while not losing functionality.

It is too early to understand quite how VMware will handle this, but the model used in Hyper-V is an interesting pointer. Hyper-V has a concept called 'enlightenment,' which you could think of as a form of paravirtualization (the guest operating system is aware that it is virtualized in some way). What happens is the parent partition contains the hardware-specific device drivers and is responsible for managing use of the hardware; the child/guest VMs are aware that they are virtualized and know to redirect relevant hardware-related calls to the drivers in the parent. Interestingly, Vista and SUSE Linux already have this enlightenment capability built in.

Quite how well this would pan out in a client hypervisor is unknown – Hyper-V is a server hypervisor anyhow, and there have to be concerns that existing drivers would not 'just work' for a whole pile of reasons. But it is an interesting model because all the hardware-specific code is in the parent partition – essentially the hypervisor and the parent partition could be provided by the hardware vendor with all the right drivers, and we would then be able to push standard client images out to any machine. Our images would not need to care about hardware variants, and we get the best of both worlds.

We are a good way away from being able to do this on a client right now, and we do not know how VMware will choose to handle the issue of drivers, but being able to keep the hardware specifics with the hardware and separate from the client OS looks like a good way to go... Martin Ingram, AppSense
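The enlightenment idea Martin describes boils down to the guest reading the hypervisor's vendor signature from CPUID leaf 0x40000000 at boot and switching to paravirtual interfaces if it recognizes it. A hedged sketch of that decision follows; the signature table is illustrative rather than exhaustive, and a real guest obtains the string via a CPUID instruction rather than a function parameter:

```python
# Illustrative vendor signatures returned in CPUID leaf 0x40000000.
# (Assumption: this short table is a sample, not a complete list.)
HYPERVISOR_SIGNATURES = {
    "Microsoft Hv": "Hyper-V",
    "VMwareVMware": "VMware",
    "XenVMMXenVMM": "Xen",
    "KVMKVMKVM": "KVM",
}

def identify_hypervisor(signature):
    """Map a CPUID vendor signature to a hypervisor name.

    An enlightened guest does a check like this at boot: if it
    recognizes the hypervisor, it routes certain hardware calls
    through paravirtual interfaces (e.g. drivers in the parent
    partition) instead of talking to emulated devices.
    """
    return HYPERVISOR_SIGNATURES.get(signature.rstrip("\x00"), "unknown")
```

An unrecognized signature simply means the guest falls back to fully emulated hardware, which is exactly the slow path enlightenment is meant to avoid.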


I noticed that Microsoft's announcement on Hyper-V Server, which I was hoping to use for this purpose on notebooks, requires 1GB of RAM reserved for the hypervisor. Do you have any info on what VMware's solution uses?


OH YES! Client bare-metal hypervisors are coming! And sooner than you expect. I thought of the same thing... MS is probably in one of the best positions to drive this. Think of this: what if MS included their Hyper-V and built Windows 7 on top of Hyper-V? Also, wasn't that really what the HAL was supposed to do all along but really failed at? If you want to turn some heads, MS can do that, along with every other OS builder that starts installing a hardware hypervisor and creates their OS in a VM on top of that.


It would be great if somebody like Etay Bogner, CTO of Neocleus, would chime in - I know they have put a lot of thought into exactly this subject. My point is that I highly doubt that a Type 1 hypervisor will be able to virtualize the hardware in such a way that guest OS will not be required to include appropriate drivers for the actual physical devices.

It's a bit amusing how people are talking about virtualization as this New Thing that has to live in a layer that is distinct from the traditional OS. The thing is that a primary role of the OS is virtualization. If an application that I develop needs to print, I don't have to code support for every printer out there - the OS (and the drivers that it hosts) does this for me. I simply use a generic API and the OS takes care of the rest. Likewise the OS creates a virtual CPU for every thread, virtual memory for every process, etc.

I do agree that the virtualization capabilities provided by the current crop of OSs (at least the major ones) are too limited. Having a bare-metal hypervisor running under the OS addresses some of these limitations. I would also like to see OSs themselves evolve to provide greater virtualization capabilities, for example Session Virtualization, which I've blogged about in the past.



Hi Tim,

I forwarded your question to VMware, and they said they “expect to do much better than that.” :) Right now they’re shooting for 256MB, but they won’t know for sure until closer to the ship date.

How are you adding multiple nic drivers into the image so that it can be streamed by provisioning servers?

I'm not sure why you continue to harp on the driver scenario. It's a very focused level of thought that really has no bearing on the product itself.

I don't remember anywhere in any documentation that VMware states that they intend to support myriads of hardware; certainly I don't remember anything in the vision that would include stuff you would get at Best Buy or Circuit City or anything that you can build yourself. Much like Vista Certified stamps of approval, I'm certain that any hardware vendor (i.e. HP, Dell, IBM, etc.) that wants to pander to the enterprise and play up this capability will have a VMware Certified program. This isn't for the mass consumers and never has been marketed that way ... this is a strict enterprise play, and the first run at it will be to BIG enterprises (like ESX was when it was first introduced).

The only thing that I would question is the revenue-generating capability of the product. You're still paying for the OS license and the equipment, so that would have to make the hypervisor piece virtually free ... otherwise why buy? No, I think the revenue will be driven by the management software licenses and the "expanded" capabilities that they will provide to management and admins. Again, consider ESX and Virtual Center as the model.


So what you are saying is that in order for a client-side hypervisor to create a standard virtualized environment for the OS it has to run on top of totally standard hardware. How useful is that? And it’s going to have to be transformed into the equivalent of an OS itself, just an OS with very limited hardware support.

Don't get me wrong, I also think that Type 1 client-side hypervisors can be very useful, for example for offline VDI, and as a means for securely segregating VMs (Brian's "employee owned PC" scenario). I just highly doubt that it will provide a solution for OS standardization problems, which appears to be a main point of this article.

I think so. In some cases, guests need the physical device driver for high performance, especially for graphics devices. Offline VDI is very special: only one guest runs on the Type 1 hypervisor. That means we can directly assign the physical graphics device to the guest, with less performance downgrade for 3D or whatever advanced graphics capability. So for high performance, device assignment to the guest is a possible solution for offline VDI, with VT-d/IOMMU or even identity mapping of guest memory. But the remaining issue is that if you directly assign the device to the guest, hardware compatibility is broken between the guest image running offline and the image on the VDI server. When checking the image back in, the hardware for the guest is different from the image on the VDI server. How do you maintain a smooth transition?
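For what this commenter describes, the common mechanism is VT-d/IOMMU passthrough of a single PCI device to the one running guest. As a sketch, here is a small helper that emits the libvirt <hostdev> fragment such an assignment uses; the PCI address values are hypothetical (on a real host you'd take them from lspci):

```python
def hostdev_xml(domain, bus, slot, function):
    """Emit a libvirt <hostdev> fragment for VT-d/IOMMU passthrough.

    Assigning, say, the physical GPU directly to the single offline
    guest gives near-native graphics, at the cost of tying the
    guest image to that specific piece of hardware -- which is
    exactly the check-in/check-out compatibility problem raised above.
    """
    return (
        '<hostdev mode="subsystem" type="pci" managed="yes">\n'
        "  <source>\n"
        f'    <address domain="0x{domain:04x}" bus="0x{bus:02x}" '
        f'slot="0x{slot:02x}" function="0x{function:x}"/>\n'
        "  </source>\n"
        "</hostdev>"
    )

# Hypothetical device at PCI address 0000:01:00.0 (e.g. a discrete GPU):
print(hostdev_xml(0, 1, 0, 0))
```

The smooth-transition question remains open: an image built against a passed-through GPU carries that GPU's driver, which a server-side copy of the same image has no use for.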

This really resonates - In some ways we could be looking at a Client based Type 1 Hypervisor taking over the role of what the HAL was intended to do and taking it even further.

In addition to this, from my perspective looking at remote client solutions, I'm getting very intrigued with the possibility that users could remotely connect in to a *traditional* VDI solution while in a LAN situation and then, as their compute needs increase, they could *slide* from that to a BladePC and then to a BladeWS and back - but this would only be possible with something like vMotion once a client-based Type 1 hypervisor can be implemented.

If this was actually possible, it would allow users (and admins?) to seamlessly have access to the compute power that they need as and when necessary. It sounds fanciful and almost sci-fi now, but it could be that in 2-3 years this is commonplace?

The hardware need not be present to install the driver.  Boot the image on something, add drivers manually, save image.

Wondering if MSFT will ever create a "client core" capability designed to run a desktop/laptop-optimized version of Hyper-V. That would solve the driver calamity. Then they would just need to adopt true VM-based OS licensing. If this were the case, you could take a corporate asset and run a personal VM on it or vice versa. And depending on who owned the asset, policies could be set at the hypervisor layer to control which VMs can talk to each other, what peripherals are allowed, which VMs are mandatory, etc.


No ... actually, I'm saying this is an enterprise play by VMware. Enterprises tend to stick to standards for desktop purchases. Look at enterprise procurement cycles ... if I'm purchasing a block (say 1000) of PCs today, I'm not going to buy the fire-sale stock. Why? Because when my next procurement cycle comes around, those models will no longer be available. That matters because I'll have to do a significant amount of work adjusting my images, etc. Or say (specifically in our case) two companies merge ... they have Dells, we have HPs ... today, in order to have a common image, we'll need to settle on a single vendor in order to simplify our PC deployment. If VMware pushes for a certification program by vendors, then this problem simply goes away. I can deploy one common image on all certified platforms ... I can buy all those fire-sale machines because I no longer care about future product, so long as I make sure to purchase those that are VMware certified.

If it simply does this, it's incredibly useful ...
In its infancy, the product has to get the standard stuff right. Get the basic stuff out of the way and then appeal to the myriad of other possible potentials. And I don't think a Type 1 hypervisor is the answer for employee-owned equipment ... I think a Type 2 hypervisor makes more sense in that scenario.

I'm not saying this is exactly what VMware is doing, but it makes the most sense.  I seriously don't see this as a play to the SMB marketplace ... not yet.  Again, first gen product.


I don't see the "myriad of drivers" situation being a problem for client-side Type 1 hypervisors. Most add-on hardware goes through USB/Serial/Parallel ports. If the hypervisor accurately communicates with the USB port, then any driver the user needs will be installed on their (virtualized) OS as usual. Sure, someone may have a workstation where they need to plug in some strange PCIe card. Even so, couldn't the hypervisor just virtualize the PCIe slot correctly and let the virtualized OS run the real driver? OK, if the user expects to run more than one virtualized OS at once, then there will be device-sharing items to manage. However, I think the embedded hypervisor will still bring many administrators comfort even if users can only run one OS at a time on top of it.