Is the future of remote display protocols based on hypervisor integration?


When I wrote about Net2Display’s 1.0 specification release last week, I reached out to several different industry experts and committee members for opinions to include in the article. One of those folks was Desktone CTO Clint Battersby. In addition to providing comments about Net2Display, he made some off-the-cuff remarks about his thoughts on how remote protocols could evolve. I mentioned some of his remarks in that Net2Display article, but I think they’re interesting enough to warrant their own conversation.

So welcome to Brian’s interpretation of Clint’s protocol thoughts!

Clint’s opinion is that the real opportunity and proper packaging for remote display protocols is to integrate them with the hypervisor, just like VirtualBox does with RDP and Qumranet does with Spice. He claims doing so provides three advantages:

  • Guest OS independence, since no guest OS software is required
  • Much closer to a normal desktop experience, since you can watch the machine boot
  • Eliminates the complex feature matrix in current protocol/guest OS/client device implementations

Again, these were just side comments that Clint made, so I want to add on to the conversation from the perspective of refining his initial thoughts rather than calling him wrong.

So after thinking about this a bit more, I’m not so sure about Items 1 and 3, because there could be situations where you would want a component of the protocol agent inside the guest VM. (This is what Qumranet does with Spice.) So in that case you’re actually moving to a three-tier protocol approach, with a guest VM component, a hypervisor component, and a client component.
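To make the three-tier idea concrete, here’s a toy sketch of how a client, a hypervisor-side protocol engine, and an optional guest agent might fit together. (All class and method names here are hypothetical, purely for illustration — this isn’t any vendor’s actual API.)

```python
# Toy sketch of the three-tier protocol approach.
# All names are hypothetical illustrations, not a real vendor API.

class GuestAgent:
    """Tier 3: optional component inside the guest VM."""
    def render(self, request):
        # An in-guest agent can see OS-level display info (windows,
        # media streams), so it can answer with something smarter
        # than raw pixels.
        return f"agent-optimized:{request}"

class HypervisorEngine:
    """Tier 2: protocol engine living in the hypervisor."""
    def __init__(self):
        self.agent = None  # no agent until the guest OS loads one

    def render(self, request):
        if self.agent is not None:
            return self.agent.render(request)
        # With no agent (e.g. while the VM is booting), fall back to
        # remoting the virtual framebuffer directly.
        return f"raw-framebuffer:{request}"

class Client:
    """Tier 1: the client device; it only ever talks to the engine."""
    def __init__(self, engine):
        self.engine = engine

    def show(self, request):
        return self.engine.render(request)

engine = HypervisorEngine()
client = Client(engine)
print(client.show("desktop"))   # raw-framebuffer:desktop (VM booting)
engine.agent = GuestAgent()     # guest OS is up, agent checks in
print(client.show("desktop"))   # agent-optimized:desktop
```

The point of the sketch is that the client never needs to know whether a guest agent exists — which is exactly what preserves the “watch the machine boot” experience.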

That said, another advantage of this is performance. One of the reasons Qumranet claims to get the Spice performance they do is because they can dedicate cycles to protocol remoting outside of the guest VMs, where resources can be more appropriately allocated. (And think about the future potential for this to leverage GPUs, PC-over-IP chips, Calista chips, etc.)

A final advantage occurred to me when Clint referred to these as “remoting protocols” versus the term “remote display protocols” that I used. Clint’s term reminds me that these are about more than just display. They’re also about client devices, peripherals, USB, etc. And since that’s something that has to be emulated, paravirtualized, or passed through to the guest anyway, I would think that hooking the hypervisor up with the remote protocol engine could make some cool things happen here.

Regardless of the specifics, I think there’s a lot of potential here. Let’s step through the protocol / hypervisor combinations one-by-one.

Citrix ICA/HDX & XenServer

Clint said that he was surprised that Citrix didn’t integrate ICA into a commercial version of XenServer within six months of the acquisition.

I think that would be cool for XenServer, although it would be tough for XenDesktop because it would probably make it platform-specific.

Then again, the HDX 3D that leverages the Nvidia CUDA-based GPUs is kinda sorta moving in this direction anyway, so that could always be an option.

Red Hat / Qumranet Spice & KVM

Clint wrote that he’s disappointed we have not seen or heard much regarding KVM/Spice in a long time.

Me too.

Microsoft RDP/Calista & Hyper-V

Clint wrote that while there’s no official word from Microsoft about RDP 7/Calista integration with Hyper-V, he feels that would be a huge win given the broad availability of RDP on client access devices.

I know Microsoft hasn’t released too many details about Calista yet, although we do know there will be a few modes of operation, including sharing physical GPUs across multiple VMs and creating multiple virtual GPUs for VMs when real GPUs don’t exist. I would expect that both of those would only work via Hyper-V. The question is whether that would be an RDP-only thing, or if companies like Citrix would be able to leverage those capabilities via ICA if they run XenDesktop on Hyper-V.

VMware / Teradici PC-over-IP & ESX

Clint wrote that VMware appears to be on the right track integrating PC-over-IP with their ESX hypervisor, but we’re still waiting for a production release of the entire stack.

I don’t really have any more info than that, but I definitely need to learn more about how VMware’s software implementation of PC-over-IP actually works. Are they emulating a Tera chip? If so, is it in the VM or the host? Or did they rewrite that code to execute natively? And where?

So lots of questions still.

Net2Display & the open source Xen or KVM

Clint wrote that he believes the best opportunity for Net2Display given the backdrop of hypervisor / protocol wars brewing would be to integrate with an open source version of XenSource and/or KVM.

Seems like as good a plan as anything for them as far as I’m concerned. I’m really not excited about Net2Display anymore, so I don’t even know if I care what they do. Maybe I’m just impatient.

So there you have it. What are your thoughts? Does integrating the remoting protocol with the hypervisor make sense? Which vendors have the most to gain? Who’s got the most to lose?

[TECH NOTE: We're continuing the experiment of doing an audio version of our article. You'll find a link at the top of the article near my avatar for an MP3 attachment. If you're accessing the RSS feed of this article, the attachment will be recognized as a podcast.]

Join the conversation



It's not fun to read comments anymore when AppDetective is absent. He/she managed to get censored out over at but has yet to grace this place again.

Get back pronto!

At topic:

I think it's such a shame that nobody comes out front speaking about MS OS 8 (pun intended) being built from the ground up on the next version of Hyper-V (think XCI) with App-V fully integrated (yes, servers too. The next SQL will be delivered in App-V).

All of this silence is quite annoying. Sure, the plan for world domination is hindered by all the legal crap, anti-monopoly and what *** not.



What does the hypervisor have to do with the remote display protocol? It could be cool to get it inside, to remote-access the virtual machine directly through the hypervisor instead of through the network to the virtual machine, but:

1- it will only cover virtual machines, and not everything will be virtual (blade PCs for designers, ...)

2- you give the hypervisor another role than "schedule and share physical I/O for virtual hosts"...

Not sure it is the right direction for the hypervisor even if it can add some value short term...



HyperV (runs media files real good)

ESX (runs fud media real good)

XenServer (runs flash real good)

RedHat (runs three tier fud real good, open source perhaps)

Now we can run on any browser, which eliminates browser testing. HUH!!!! We have a protocol for the web, HTTP. F'ing the world by marrying the protocol to the hypervisor is BS. How the hell are you going to figure out what to run where, when, how, and manage all this together? It will be a mess.

Kimo is right, MS kicked me out but was kind enough to let me leave a comment. MS aholes are just a monopoly spreading fud to lock the world in to crappy RDP 7 that will only work on Hyper-V with Windows, managed by crappy System Center as the only option. What a freaking joke and loss of a major benefit of VDI, i.e. access from any device on any OS. Wake up: RDP with Calista and Hyper-V is a wedge that MS is using to F ESX, that's all. They don't give a damn about VDI. App-V is the other wedge with MDOP $$$.

Hey, let me throw this out and use an overused word: CLOUD. If that is really going to become real and I can pick and choose various bits and pieces from here and there, it means I'd better be able to have interop of my IT stack. Otherwise I am screwing myself, and will be wondering why I was so stupid to lock into vendor X.

So PLEASE, the presentation protocol has nothing to do with the hypervisor. If it does, run from that protocol in my view. Sure, the vendors will argue "use my hypervisor because I have optimized it for workload X," and my answer is FU, you are locking me in. Just like "it's a desktop" (stolen from Citrix, I agree), it's a protocol, not a hypervisor....


The key technical difference between remote display protocols that are implemented inside the box (OS) and those implemented outside the box is their access to OS display information. Display protocols that are implemented wholly outside the box, be it in the hypervisor or external hardware (such as Tera cards), essentially look at the display data from the perspective of the monitor - as a sequence of raw images. As a result, purely external protocols cannot do things like multimedia redirection or seamless windows. This can limit the capabilities and performance of outside-the-box protocols. Some external protocols compensate for this by using specialized hardware (Teradici) or by sitting closer to the physical device. (It is a bit amusing that virtualization companies tout the benefits of running code outside a VM, and not for security.) Others utilize agents running inside the box, e.g. SPICE. But once you require such agents you can lose the benefits that Clint mentioned.
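To illustrate the "perspective of the monitor" point: an outside-the-box protocol only ever sees raw pixels, so about the smartest thing it can do on its own is diff successive frames and ship only the changed regions. Here's a toy Python sketch of that idea (the tile size and frame representation are made up for illustration; real protocols add compression, caching, and much more):

```python
# Toy framebuffer tile-diff: the core trick available to a protocol
# that can only see raw display output. Frames are lists of pixel rows.

TILE = 2  # tile size in pixels; real protocols use e.g. 16x16

def changed_tiles(prev, curr):
    """Return (tile_row, tile_col) for each tile that differs."""
    assert len(prev) == len(curr) and len(prev[0]) == len(curr[0])
    dirty = []
    for r in range(0, len(prev), TILE):
        for c in range(0, len(prev[0]), TILE):
            tile_prev = [row[c:c + TILE] for row in prev[r:r + TILE]]
            tile_curr = [row[c:c + TILE] for row in curr[r:r + TILE]]
            if tile_prev != tile_curr:
                dirty.append((r // TILE, c // TILE))
    return dirty

frame1 = [[0] * 4 for _ in range(4)]      # blank 4x4 "screen"
frame2 = [row[:] for row in frame1]
frame2[3][3] = 255                        # one pixel changes
print(changed_tiles(frame1, frame2))      # -> [(1, 1)]
```

Notice what's missing: the diff has no idea that the changed pixels were, say, a video frame or a window being dragged — that semantic knowledge only exists inside the guest OS, which is exactly why multimedia redirection and seamless windows need an in-guest agent.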

Bottom line: choose the display protocol that works best in your scenarios (cost/performance), regardless of implementation method. And, as appdetective stated, avoid vendor lock-in whenever possible.


I can see a smart hybrid using both.

The hypervisor-integrated protocol (HIP?) is active until (and only if) an agent from inside the VM is detected.

For example - the hypervisor-integrated protocol (HIP) handles the boot process, then the VM agent takes over when the guest OS has loaded.

Or if you say HIP can do some things better (performance/peripheral-wise) then it will handle those all the time, and IF an OS agent can improve things (like multimedia redirection) it will (either by communicating through the HIP or by communicating directly with the client).

You can get the pluses of both worlds; it's just an architecture problem. (Which is a rather exciting one to solve, I might add.)
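That hybrid handoff could be sketched as a simple mode switch. (Everything below is hypothetical naming — "HIP", the channel names, etc. — just to show the handoff logic, not any shipping product's design.)

```python
# Sketch of the hybrid handoff: the hypervisor-integrated protocol
# (HIP) serves the session until a guest agent announces itself, then
# hands suitable channels over — and falls back if the guest dies.

class HybridSession:
    def __init__(self):
        self.mode = "HIP"  # boot screens, BIOS, blue screens: HIP only

    def agent_detected(self):
        """Guest OS finished booting and its agent checked in."""
        self.mode = "AGENT"

    def agent_lost(self):
        """Guest hung or rebooted: fall back so the user still sees video."""
        self.mode = "HIP"

    def route(self, channel):
        # Even in AGENT mode, keep channels the hypervisor handles
        # better (e.g. USB passthrough) on the hypervisor side.
        if self.mode == "AGENT" and channel in ("display", "multimedia"):
            return "guest-agent"
        return "hypervisor"

s = HybridSession()
print(s.route("display"))   # hypervisor (still booting)
s.agent_detected()
print(s.route("display"))   # guest-agent
print(s.route("usb"))       # hypervisor
s.agent_lost()
print(s.route("display"))   # hypervisor again
```

The interesting design question is the one the commenter raises: which channels the agent should take over, and whether it tunnels through the HIP or talks to the client directly.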

Vendor lock-in could be a problem, but not necessarily. If Citrix integrated parts of ICA into XenServer in order to improve performance and functionality, it doesn't mean that 'normal' OS-only ICA would stop being available.

(Sorta the same as with Branch Repeater: if you have it in place, it takes over some of ICA's functions, like encryption, etc.)


One of the biggest challenges I have seen in enterprise deployments of VDI is the lack of control over the VM itself.

Installing the client-side remoting solution within the VM itself creates the following problems:

1. If the machine hangs, what does the user do? Reboot the VM from the connection broker interface? How does the connection broker/hypervisor check the running state? By the time the connection broker layer has detected that the machine is non-responsive (and this could mean that the agent is down but the OS is up), the user is already suffering an outage.

2. The remoting server process on the virtual machine is bound by the resource constraints of the VM itself. Thus if I am running an application that is hammering the virtual machine, then everything, including the remoting server process, will be impacted. Example: file I/O on a VM, high network utilization, and high CPU/memory loads will often cause RDP sessions to disconnect or hang spontaneously. Why? Because there is resource contention between the Terminal Service and the rest of the processes running on the VM.

Integrating the remoting at the hypervisor layer will provide the following benefits:

1. Provide the user with an equivalent KVM experience (or IPMI 2.0)

2. Close to real-time hardware scheduling (avoids the contextual limitations of the OS container) of external peripheral events such as video updates, USB bus updates, etc. from the VM itself, not the OS. If more resources are required, the hypervisor can provide them dynamically as opposed to being bound by the container constraints enforced by the VM itself.

3. If the VM hangs, the user can actually reboot it (this is technically possible with the client version but requires the user to actually go back to the connection broker interface and hope for the best).

4. Performance will be significantly better. Determining where the latency happens is a challenge in the best of environments. Is it the network? Is it the application? Is it the VM? Sure, there are ways of identifying this, but it is difficult. Isolating the remoting session away from the VM itself will give you not only performance benefits but also a much more stable environment for the user. If the OS hangs, the user can see that it is the OS. If the VM itself hangs, the user can see this as well.

5. OS agnostic - VNC for Linux, RDP and everything else for Windows. Don't we want a ubiquitous computing environment? Why bind ourselves to the OS?

6. Brokering is done at the VM level, NOT the OS - OS support goes away (aside from building dynamic images and deploying them); brokering can be done with basically anything and everything that is supported at the VM layer.

The ramifications of brokering at the VM should be thoroughly investigated especially in the context of dynamic composition etc.

The ultimate solution is described by a previous poster - we still need a client agent within the VM itself to monitor who logged on where (for dynamic pool resets, log on/log off data), performance enumeration and statistics gathering and


On a final note: Citrix is banging the drum that hypervisor-centric protocols lock you into the vendor. While this is true at the moment, I am sure this will change depending on who owns the most-adopted protocol (i.e. Teradici). On the other hand, Citrix's argument is hypocritical in that HDX locks you into ICA, the connection broker, and in some cases the hardware platform.