Remote display protocols: low bandwidth, good user experience, low CPU... pick any two.

There's been a lot of talk about remote protocols recently. We know Microsoft is cooking up something with their acquisition of Calista, and VMware's agreement with Teradici seems to be picking up steam. And of course Citrix is talking about HDX, HP (still!) has RGS, Sun has ALP, and Quest recently launched EOP. And let's not forget Microsoft's RDP7 plans and Wyse's TCX. (Phew! Did I miss anyone?) Oh yeah, how about Red Hat's SPICE?

All of these protocols lead people to ask many questions. (Well, two questions really: "What about performance?" and "What about bandwidth?") The answers to those two questions are typically direct inverses of each other: the better the relative performance of a protocol, the more bandwidth it requires, and limited bandwidth typically leads to worse performance.

But there's something generally missing in a lot of these "bandwidth versus performance" remote display protocol talks, namely, CPU utilization!

This can mean CPU on the remote host, CPU on the client, or both. And it can mean actual primary CPU, GPU, virtualized GPU, external GPUs, or proprietary ASICs. The point is that when weighing remote protocol performance, you often have to deal with a trade-off between bandwidth and performance, but having high or low CPU requirements can drastically change that equation.

For example, if you had massive CPU / GPU capabilities on the remote host, you could afford to "waste" a lot of CPU time on massive compression that was easy to decompress. This could potentially allow you to have good performance *and* relatively low bandwidth, but at the cost of high CPU usage.
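To make that trade-off concrete, here's a minimal sketch (plain Python with zlib standing in for a real protocol's codec; the payload and compression levels are illustrative, not from any actual protocol) showing that shipping the same screen update at a higher compression level costs more host CPU time but fewer bytes on the wire:

    # CPU-versus-bandwidth sketch: higher compression levels burn more
    # host CPU but produce smaller payloads to push over the wire.
    import time
    import zlib

    # Fake "screen update": repetitive data, like a mostly-static desktop frame.
    frame = (b"toolbar pixels " * 4096) + bytes(range(256)) * 64

    for level in (1, 6, 9):  # zlib levels: fast, default, maximum
        start = time.perf_counter()
        compressed = zlib.compress(frame, level)
        elapsed = (time.perf_counter() - start) * 1000
        print(f"level {level}: {len(compressed):6d} bytes, {elapsed:.2f} ms host CPU")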

Or you could do something like the DirectX 10.1 remoting or multimedia pipeline redirection that Microsoft is introducing in RDP 7, which will give us great experience over low bandwidth, but again, with relatively high client-side CPU requirements (since that's where all the "real" work is being done).
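To illustrate the general shape of that kind of decision (a hypothetical sketch only; ClientCaps, deliver_media, and the codec names are all invented, and this is not Microsoft's actual RDP 7 logic): if the client can decode the stream itself, ship it compressed; otherwise decode on the host and fall back to sending rendered bitmaps.

    from dataclasses import dataclass

    @dataclass
    class ClientCaps:
        codecs: set  # codecs the client can decode locally

    def deliver_media(client, codec, stream):
        if codec in client.codecs:
            # Redirect: the stream stays compressed on the wire (low
            # bandwidth) and the client's CPU does the decoding.
            return f"redirected {len(stream)} bytes of {codec} to client"
        # Fallback: the host decodes and sends rendered bitmaps, which
        # costs host CPU and far more bandwidth.
        return f"host-rendered {codec}, sending bitmaps"

    print(deliver_media(ClientCaps(codecs={"wmv9", "h264"}), "h264", b"\x00" * 1024))
    print(deliver_media(ClientCaps(codecs=set()), "h264", b"\x00" * 1024))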

Of course if you really want low CPU on both the host and the client, you can get that too, as long as you (a) don't mind a protocol that's a bandwidth hog (SPICE), or (b) don't mind a poor user experience (RDP 6.x).

Thinking through all these examples, we can really break it down like this... Remote display protocols have three characteristic dimensions:

  • Bandwidth (low versus high)
  • User experience (good versus bad)
  • CPU utilization (low versus high)

For any remote display protocol in the world, you can choose any TWO of the three items on the list. All three are impossible. (This is like the old saying about "fast, cheap, or good -- you only get two out of those three.")

I'm sure there is a cool way to build a sort of 3D cube version of Gartner's Magic Quadrant, although I couldn't find any simple way to do it in Excel. (From what I could tell, all the "3D" graphs still only dealt with two dimensions of data.)

Can we apply this to the future of remote display protocols?

As we think through the three dimensions of remote display protocol analysis and ranking, we can probably also use these to take some guesses of what might happen in the future. Look at the three dimensions again:

  • Bandwidth
  • Experience
  • CPU

Which of these is the most likely to evolve the fastest moving forward? My guess would be "CPU." In other words, if I were to put my money on the way that protocols will evolve, I'm going to guess that we'll most easily be able to throw processing power at the problem.

Today, we're probably looking at Teradici for that, since they throw processing power at the problem in the form of a custom ASIC on the remote host and in the client device. But this type of thinking would also lead us to believe that VMware's Teradici-derived PC-over-IP implementation might actually be pretty decent, since they can throw as much CPU (relatively speaking) at the problem as they want.

This line of thinking also probably means that Microsoft is in pretty good shape with Calista, as they'll be able to leverage more and more CPU power (at both ends) moving forward.

If you want to think way far ahead (like maybe 5+ years), think about what the GPGPU trend could do for remote display protocol power. ("GPGPU" is "general purpose GPU," the idea that GPUs could be adapted slightly to handle more general purpose application tasks, heavily augmenting or potentially outright replacing today's CPUs. Think of what a VM host that had only GPUs could do for remote display protocol rendering!)

 

 

10 comments
You're missing one of the most important elements: network latency. This has the single biggest impact on performance. In a very low latency environment this article is valid, but if you start to push out across continents or countries then things change considerably.



I agree... latency is part of the problem, but there are also many solutions to optimize for latency as it is now. Maybe one of the three dimensions should be network demand instead of bandwidth?


But I also think bandwidth is just as important. The scenarios where I am seeing these deployments now typically involve relatively short distances, and therefore bandwidth is a much larger issue than latency.


Rene Vester



Regarding latency, I thought about that, but I didn't feel that was a characteristic that could be balanced out by CPU. (In other words, yes, latency's important, but high latency isn't "fixed" by CPU.)


But I hear your point... maybe changing this to something more generic, like "good network" versus "bad network," might be better, at least in terms of illustrating the point.



A really good protocol solution will need to be adaptable to various scenarios. Pushing work out to the client whenever the client has the capability for the work is desirable, but automatically falling back to the server side when needed will be a must.


A server-side multi-GPU setup (GPU rendering for multiple sessions that replaces kernel software rendering) should also be considered.



There is nothing that can truly optimize latency. Yes, there are WAN accelerators that boast all types of improvements and such, but what it really comes down to is that the speed of light IS the speed of light. You can't fight physics. The simple truth of it all is that if you have 400ms of latency to remote users, they are going to have a bad experience using anyone's protocol. I don't care what hardware you put between the sites. If the lines are stable and the pipes are not fully saturated and you still have high latency, you're going to have to start looking at closer datacenters to get better performance.


So that being said, I don't think latency should be on the list, and Brian was correct to leave it off. If you have bad latency it doesn't matter whose protocol you're using.



Regarding Latency


Latency is fixable by CPU, but only as a function of the remote display service's native capability.  


Consider how ICA's local text echo feature uses client-side rendering of keyboard input to hide latency effects from the user. It does so by 'guessing' the appearance of the text as it is entered and then replacing this displayed text with the 'true' display as soon as it can catch up. This is done at a small additional cost in client and server CPU and, to a lesser degree, bandwidth.
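For anyone who hasn't seen it in action, here's a minimal sketch of that general speculative-echo idea (my own Python illustration, not Citrix's actual implementation): the client draws keystrokes immediately as a provisional guess, then swaps in the server's authoritative text once the round trip completes.

    class LocalTextEcho:
        def __init__(self):
            self.confirmed = ""   # authoritative text from the server
            self.pending = ""     # keystrokes echoed locally, not yet confirmed

        def on_keystroke(self, char):
            # Echoed immediately on the client; the user sees no round-trip delay.
            self.pending += char

        def on_server_update(self, server_text):
            # The server's rendering is authoritative; drop any pending
            # keystrokes it has already incorporated.
            newly_confirmed = len(server_text) - len(self.confirmed)
            self.confirmed = server_text
            self.pending = self.pending[newly_confirmed:]

        def display(self):
            # What the user sees: confirmed text plus the provisional echo.
            return self.confirmed + self.pending

    echo = LocalTextEcho()
    for ch in "hello":
        echo.on_keystroke(ch)
    print(echo.display())          # "hello" -- shown before the server replies
    echo.on_server_update("hel")   # server catches up on the first keystrokes
    print(echo.display())          # still "hello": "hel" confirmed + "lo" pending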


If you accept this as a starting point, it should be clear that the ability of a remote display protocol to use this approach to address many latency effects comes down to the willingness of a vendor to invest the time necessary to develop the capability (that, and any patents that might stand in the way).


Whether a remote display protocol chooses to use CPU/GPU resources to address latency is more a matter of a vendor's recognition of there being a business advantage to doing so than of there being a hard technology barrier that cannot be broken.



Latency does matter, and any optimizations in a protocol stack that mask its effects are of value. Good example above in ICA's local text echo. That said, longer term, having dynamic protocols that adapt to conditions makes a lot of sense, as long as they don't lock you into custom chip sets. In addition, from a user perspective, I think one would have to really think hard about the capabilities offered with one approach vs. the other to keep the user experience predictable. Stupid users are stupid, and it will be hard to explain to the masses why their experience varies so much. So in the end, I personally think it may be easier to implement one generic protocol that fits most purposes, and then attach specialized options on a per-application basis that would be easier to explain to the stupid end user.



Yes, I totally agree that certain technologies can mask the issue. Local text echo is indeed a perfect example. Masking a latency issue and fixing it are two different things. So basically local text echo will cache the input for the user, and when the connection catches up it will sync up, so to speak. I am a very big advocate for Citrix and their technologies. That being said, if you have a link that steadily has high latency (400+ ms), the user will experience performance issues at some point. I don't want to start a flame war or anything here. Truth is, Citrix's ICA protocol has been in the market the longest as the best remote protocol out there when taking bandwidth/latency problems into account. ICA is widely deployed and has many features built in to do exactly what was stated. It will scale to fit the pipe and do certain (or not do certain) things to make sure the end user experience is as good as it can possibly be. The fact of the whole matter is there comes a point where ANY protocol will fall flat. Depending on whose tech you're looking at, that threshold may be different. Users and admins really have to have realistic expectations when choosing their remote protocol. Everyone must accept that at some point any given remote protocol will fail and give a negative experience.


So here is my one-sided opinion: Citrix stands the tallest currently with ICA. They have been doing this the longest, have gotten the most customer feedback, have the biggest install base, and have had the time to mature their protocol and technology. I think it will be difficult for any other vendor to compete at this point from just a protocol standpoint. VMware is doing its thing by working with another vendor. I think this is going to be very challenging for them at first. First they are trying to take the remote protocol from hardware to a software solution. That in itself has to be difficult. Then they have to come to the market with another "experimental" feature. From what Brian stated, this solution will be LAN only, and for the WAN it's going to be RDP with TCX, which again is going to be difficult. While TCX is great, it does not have the feature set that ICA does. Running two different protocols, both of which will be a 1.0 for VMware View, is going to be very challenging for them to properly market.



Sun's ALP does a good job on higher latency links. It's still just RDP behind it, but it can cope with much higher latency. In my tests it did better than RDP or ICA in terms of usability and user perception. Pity it only works with the Sun Ray hardware.



RES PowerFuse Workspace Extender.  Patented.  Proven.  Available!


Of course I have a bias here, but Workspace Extender matches closely the approach Tim references in his comments above.


As a sales engineer for RES I am demoing this product for customers and they love it! Why send multimedia over the network when you don't have to? No matter how you slice it, sending multimedia over the network will NEVER work as well as locally rendered material. So there you have it, bias and all. Thanks.


RTE


