A deeper look at VMware's upcoming bare-metal client hypervisor

At VMworld last month, VMware announced several future capabilities of their VDI product, including a bare-metal client-side hypervisor. The idea behind this is that if you have a hypervisor running locally on a client device, you can get the "best of both worlds," combining the centralized management of VDI with the performance and flexibility of local computing. Having a local / offline capability will be an important feature of future VDI environments. (Check out Recommendation #3 in the 2010 VDI+ vision blog post.)

There are several advantages to running a hypervisor on a client device:

  • The hypervisor provides generic hardware to the VM, so a single disk image can be used on very different types of devices. (There's a short sketch after this list of what that generic hardware looks like from inside a guest.)
  • Since the VM is running locally, it works offline, and you don't have to worry about thin client remote display protocols.
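
To make that "generic hardware" point concrete: from inside the guest, every machine looks like the same small set of virtual devices, no matter what physical box is underneath. Here's a minimal sketch (assuming a Linux guest with sysfs mounted; the path and VMware's PCI vendor ID 0x15ad are the only specifics) that lists the PCI vendor IDs the guest actually sees:

    # Minimal sketch (assumes a Linux guest with sysfs mounted): list the PCI
    # vendor IDs visible to the guest. In a VMware VM these are virtual devices
    # (VMware's PCI vendor ID is 0x15ad), regardless of the physical client.
    from pathlib import Path
    from collections import Counter

    vendors = Counter(
        (dev / "vendor").read_text().strip()
        for dev in Path("/sys/bus/pci/devices").iterdir()
    )
    for vendor_id, count in vendors.most_common():
        print(f"{vendor_id}: {count} device(s)")

Run the same snippet in the same image on two wildly different laptops and the output is identical, which is exactly why one disk image can travel between them.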

Of course products like VMware ACE and Microsoft Enterprise Desktop Virtualization (Kidaro) have allowed users to run VMs on their clients for years. But those products were "Type 2" hypervisors (or Virtual Machine Monitors), which install on top of an existing OS like a regular application. VMware announced that they will release something that's more like a "Type 1" hypervisor, where the hypervisor itself is the actual OS (like ESX Server). Type 1 hypervisors are typically viewed as having better performance and just being more of a "real" solution in general. On the server side, Type 1 hypervisors are absolutely dominating the market. On the client side, however, there aren't any Type 1 hypervisors in mainstream use.

Why would you want a Type 1 hypervisor on a client?

VDI is all about management, especially when you're talking about client-based VDI. Everyone reading this can understand the value proposition of being able to create a single disk image that could be used by multiple users. And everyone reading this can understand that use cases exist where you'd want the disk image to run locally on a client device. So how can we combine these two?

Citrix Provisioning Server (Ardence) showed a lot of early promise. With Provisioning Server, you can create a single disk image that's shared by hundreds or even thousands of users. This works perfectly as long as the client devices that you're deploying this image to are identical. (Or at least identical enough to use the same image.) But what if you have several different types of client devices? Then you need to build several different images, and you won't be able to support any random client device.

"No problem," people think, "We'll just throw a hypervisor on the client devices and use Provisioning Server with a VM on the client instead of the bare-metal client." While this is a simple theory, it's not so easy in practice. Existing Type 2 client-based hypervisors require an underlying base OS. So how do you manage, deploy, maintain, and patch that OS? And if you're taking the time to manage that, then why even bother with the VDI instance? If you think about it, managing a local OS on a client device while also managing a VDI OS is actually the worst of both worlds. Now you're managing two OSes per user instead of one.

This is the exact problem that a Type 1 bare-metal client hypervisor can solve. In this scenario, there's only one OS to manage. (This is the point at which purists shout, "Hey! The Type 1 hypervisor is an OS, so you're still managing two OSes." That's technically true, but a Type 1 hypervisor is much easier to manage than a "real" OS, and the general idea is that it would be transparent.) Even though a Type 1 hypervisor is technically a piece of software, it can be managed as if it's an extension of the hardware.

It's probably worth mentioning that Type 2 client hypervisors will continue to exist even once Type 1 hypervisors for client devices are released, because each type has its own use case. Type 1 hypervisors are great when you want to replace the OS on a client device entirely (or provide the only OS it has). They're great when you want a user to turn on a machine and see a single OS that looks and feels local.

Type 2 hypervisors are (and will continue to be) great when you want a user to have access to their own local desktop OS in addition to the centrally-managed corporate VDI OS. This could be for an employee-owned PC scenario, or it could be a situation where you have contractors, etc., who need access to their stuff and your stuff.

Perhaps some day we'll see a hybrid hypervisor that combines the best of Type 1 and Type 2 hypervisors. Maybe you could have a Type 1 hypervisor that boots a disk image from a USB stick, but that can also (at the same time) boot the contents of the client's physical hard drive into a second VM which is then accessed from within the first, primary VM. Sound tricky? Maybe, but VMware is already doing something similar with their Fusion product. (Fusion is like VMware Workstation for the Mac.) Since Macs can run Windows now, a lot of people use something called "Boot Camp" to configure their Macs to dual boot between Windows and Mac OS. With the new version of Fusion, you can boot into the Mac OS, run Fusion, and then boot up your Windows partition live in a VM while running the Mac OS. (And this is non-destructive. You can still reboot your Mac and boot into the Windows partition natively.)
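
That "boot the physical disk inside a VM" trick is less magical than it sounds: the hypervisor simply maps the raw partitions of the physical drive into the guest as a virtual disk. As a rough illustration (this assumes a Linux-style device path like /dev/sda, an old-school MBR partition table, and root access; it's a sketch of the idea, not how Fusion actually does it), here's how you'd enumerate the partitions such a mapping would have to reference:

    # Rough sketch (assumes an MBR-partitioned disk at /dev/sda and root access):
    # enumerate the primary partitions a hypervisor would map into a guest when
    # booting the physical disk's contents as a VM.
    import struct

    with open("/dev/sda", "rb") as disk:
        mbr = disk.read(512)  # 446 bytes of boot code, then 4 partition entries

    for i in range(4):
        entry = mbr[446 + i * 16 : 446 + (i + 1) * 16]
        part_type = entry[4]  # e.g. 0x07 for an NTFS / Windows partition
        lba_start, sector_count = struct.unpack_from("<II", entry, 8)
        if part_type:
            print(f"partition {i + 1}: type=0x{part_type:02x}, "
                  f"start LBA {lba_start}, {sector_count} sectors")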

Challenges of Type 1 client hypervisors

There's a reason that Type 1 hypervisors have existed for five years in the datacenter while they're only just now coming out for client devices, and that's because building a Type 1 client hypervisor is actually really hard! It's not as simple as just installing a server hypervisor on a laptop. A Type 1 hypervisor running on a server is built to host multiple VMs, and the design goals center around making each VM seem like a "real" server on the network. A client-side Type 1 hypervisor has a completely different goal, namely, that the VM running on the client should "feel" like a normal local computer. In fact, the user probably shouldn't even know that a hypervisor is there or that they're running a VM.
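
As an aside on that transparency goal: it's a high bar, because today software inside a guest can usually tell it's virtualized, since the hypervisor announces itself through a CPUID feature bit. A minimal sketch (assuming a Linux guest, where that bit surfaces as a flag in /proc/cpuinfo):

    # Minimal sketch (assumes a Linux guest): hypervisors set a CPUID feature bit
    # that the Linux kernel surfaces as the "hypervisor" flag in /proc/cpuinfo.
    with open("/proc/cpuinfo") as f:
        flag_lines = [line for line in f if line.startswith("flags")]

    in_vm = any("hypervisor" in line.split() for line in flag_lines)
    print("Looks like a VM" if in_vm else "Looks like bare metal")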

That said, consider the following challenges that a Type 1 hypervisor running on a client device would face:

  • Hardware compatibility. In the grand scheme of things, there aren't that many different server models in the world. Since a Type 1 hypervisor is the actual OS, it needs to support (with drivers, etc.) whatever hardware it's installed on. And there are probably, what, 50 times more laptop and desktop models in the world than server models?
  • Local graphical performance is important on a client device, so the hypervisor needs to make sure it exposes the local GPU and graphics capabilities to the VM.
  • If the client device is a laptop, then the VM running on it needs to know that it's a laptop. This means that the hypervisor needs to expose the battery and power states to the VM, it needs to expose the power savings and CPU speed stepping technology, and it needs to expose the suspend / resume / hibernate states. (There's a short sketch of what the guest expects to see right after this list.)
  • USB ports and devices must be passed through cleanly to the guest VM.
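
To make the battery bullet above concrete, here's a minimal sketch (assuming a Linux guest and the usual sysfs layout) of what the guest expects to find if the hypervisor actually passes the laptop's battery through; if the hypervisor hides it, this directory is simply empty and the guest's power management has nothing to work with:

    # Minimal sketch (assumes a Linux guest with the usual sysfs layout): read
    # the battery state a guest OS relies on for power management. If the
    # hypervisor hides the physical battery, there are no BAT* entries here.
    from pathlib import Path

    batteries = sorted(Path("/sys/class/power_supply").glob("BAT*"))
    if not batteries:
        print("No battery exposed to this OS")
    for bat in batteries:
        capacity = (bat / "capacity").read_text().strip()
        status = (bat / "status").read_text().strip()
        print(f"{bat.name}: {capacity}% ({status})")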

VMware's client hypervisor plans

Taking all that into consideration, VMware announced that they would release a "bare metal" hypervisor for client devices. At this point we don't have a ton of details, but between the press release, the onsite VMworld lab, and talking to VMware employees, I think we can get a pretty good sense of what they want to do. This is part of their larger "vClient" initiative, which is their marketing way of talking about how a user's desktop could follow them to any location.

The most important point about VMware's client hypervisor is what it's not. VMware's client hypervisor is not some sort of "ESX for desktops." ESX has been designed, built, and refined to be a server hypervisor, and the specific requirements of a client-based hypervisor are completely different. (Think "graphic performance" versus "network performance," etc.) Sure, there are elements and know-how in ESX that can be used as a foundation for a client-side hypervisor, but it's not a straight port of the product.

In addition to ESX, VMware was also able to draw from products like ACE (a centrally-managed Type 2 hypervisor) and VMware Workstation when designing the client hypervisor.

From a pure technical standpoint, it appears that VMware's client hypervisor will be something like VMware Workstation running on some version of Linux, with the central management of something like ACE. I asked some folks from the desktop team point-blank whether their client hypervisor was really a Type 1 hypervisor or a Type 2 hypervisor running on Linux, and they did admit that it's built on Linux capabilities, but they stressed that they want to downplay that.

They don't really want to mention that it's a hypervisor running on Linux because they don't want people to think that it will be a big problem, or that you have to install Linux first and then install their hypervisor. They want people to know that it's a single "thing" you install, and whether or not that thing includes some Linux code shouldn't matter to the user.

(On a side note, while I agree with this 100%, I think it's ironic that VMware attacks KVM by saying that since it runs on Linux, it's extra complex and you need to install Linux first, and now, all of a sudden, when they have a solution like this, they hide that fact and say it's no problem. Because with KVM, it's also no problem. You pop in the DVD and follow the prompts. The fact that it runs on Linux is also 100% transparent to the user.)
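
For what it's worth, whichever way you slice the "Type 1 versus Type 2 on Linux" question, both approaches lean on the same hardware virtualization extensions in the CPU. A minimal sketch (Linux only, assuming the usual /proc and /dev paths) for checking whether a machine has them, and whether the KVM device node is there:

    # Minimal sketch (Linux only, assumed paths): check for the Intel VT-x ("vmx")
    # or AMD-V ("svm") CPU flags, and for the /dev/kvm node that KVM exposes.
    import os

    with open("/proc/cpuinfo") as f:
        cpu_flags = f.read()

    has_hw_virt = "vmx" in cpu_flags or "svm" in cpu_flags
    print("Hardware virtualization extensions:", "present" if has_hw_virt else "absent")
    print("/dev/kvm:", "present" if os.path.exists("/dev/kvm") else "absent")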

But for VMware's client hypervisor product, starting from Linux certainly makes sense. Right off the bat you've got a huge device compatibility list, and you've got an OS that already understands batteries, GPUs, and the other essential elements of a desktop or laptop.

It's too early to know what devices VMware will officially support with their client hypervisor. Right now they're looking at having a focused group of certified devices, but it will be possible to install this thing on other devices too. They're thinking that they will support several deployment options, like locally-installed on disk, USB-stick based, and even some kind of embedded "ESXi-like" capability.

The competition

As I wrote towards the beginning of this article, there aren't any mainstream Type 1 client hypervisors. When talking to the VMware folks at VMworld, they repeated several times that they've put a lot of work into this, and they feel that they're really ahead of the competition.

In the Type 1 client hypervisor space, there are a few other companies or products to consider:

Of course there are a lot of non-bare metal Type 2 hypervisor products out there too, like Microsoft Virtual PC / Kidaro, VMware ACE, MokaFive, and RingCube.


