There's been a lot of talk about remote protocols recently. We know Microsoft is cooking up something with their acquisition of Calista, and VMware's agreement with Teradici seems to be picking up steam. And of course Citrix is talking about HDX, HP (still!) has RGS, Sun has ALP, and Quest recently launched EOP. And let's not forget Microsoft's RDP 7 plans and Wyse's TCX. (Phew! Did I miss anyone?) Oh yeah, how about Red Hat's SPICE?
All of these protocols lead people to ask many questions. (Well, two questions really: "What about performance?" and "What about bandwidth?") The answers to those two questions are typically inverses of each other: the better a protocol's relative performance, the more bandwidth it requires, and, conversely, limited bandwidth typically leads to worse performance.
But there's something generally missing in a lot of these "bandwidth versus performance" remote display protocol talks, namely, CPU utilization!
This can mean CPU on the remote host, CPU on the client, or both. And it can mean the actual primary CPU, a GPU, a virtualized GPU, external GPUs, or proprietary ASICs. The point is that when weighing remote protocol performance, you often have to deal with a trade-off between bandwidth and performance, but having high or low CPU requirements can drastically change that equation.
For example, if you had massive CPU / GPU capabilities on the remote host, you could afford to "waste" a lot of CPU time on heavy compression that was cheap to decompress. This could potentially allow you to have good performance *and* relatively low bandwidth, but at the cost of high CPU usage.
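To make that trade-off concrete, here's a quick back-of-the-envelope sketch in plain Python using zlib on made-up data. It has nothing to do with any vendor's actual codec; it just shows that higher compression levels burn more host CPU per frame but put fewer bytes on the wire, while the client-side decompression cost stays low at every level.

```python
# Toy illustration only: trade host CPU for bandwidth by compressing a fake
# "screen update" at different zlib levels. The data and numbers are made up;
# no real remote display protocol works exactly like this.
import random
import time
import zlib

random.seed(0)
# Simulated frame delta: 2 MB of low-entropy bytes standing in for screen data.
frame = bytes(random.getrandbits(3) for _ in range(2_000_000))

for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(frame, level)
    compress_ms = (time.perf_counter() - start) * 1000

    start = time.perf_counter()
    zlib.decompress(compressed)
    decompress_ms = (time.perf_counter() - start) * 1000

    print(f"level {level}: {len(compressed):>9,} bytes on the wire "
          f"(compress {compress_ms:6.1f} ms, decompress {decompress_ms:5.1f} ms)")
```

Swap zlib for a video codec or a proprietary encoder and the shape of the trade-off is the same: spend more encode CPU, ship fewer bits, and pay some decode cost on the client.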
Or you could do something like the DirectX 10.1 remoting or multimedia pipeline redirection that Microsoft is introducing in RDP 7, which should give us a great experience over low bandwidth, but again with relatively high client-side CPU requirements (since that's where all the "real" work is being done).
Of course, if you really want low CPU on both the host and the client, you can get that as long as you (a) don't mind a protocol that's a bandwidth hog (SPICE), or (b) don't mind a poor user experience (RDP 6.x).
Thinking through all these examples, we can really break it down like this... Remote display protocols have three characteristic dimensions:
- Bandwidth (low versus high)
- User experience (good versus bad)
- CPU utilization (low versus high)
For any remote display protocol in the world, you can choose any TWO of the three items on the list. All three are impossible. (This is like the old saying about "fast, cheap, or good -- you only get two out of those three.")
I'm sure there is a cool way to build a sort of 3D cube version of Gartner's Magic Quadrant, although I couldn't find any simple way to do it in Excel. (From what I could tell, all the "3D" graphs still only dealt with two dimensions of data.)
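For what it's worth, here's one way to mock up that three-axis chart outside of Excel, using Python and matplotlib. The protocol names and scores below are placeholder values I made up purely to show the plotting approach, not measurements of any actual product.

```python
# Sketch of a "3D Magic Quadrant": one point per protocol, one axis per
# dimension. All scores are invented placeholders on arbitrary 1-10 scales.
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # needed for 3D axes on older matplotlib

# (bandwidth efficiency, user experience, CPU efficiency)
protocols = {
    "Protocol A": (3, 9, 4),
    "Protocol B": (8, 5, 7),
    "Protocol C": (6, 7, 3),
}

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")

for name, (bandwidth, experience, cpu) in protocols.items():
    ax.scatter(bandwidth, experience, cpu)
    ax.text(bandwidth, experience, cpu, name)

ax.set_xlabel("Bandwidth efficiency")
ax.set_ylabel("User experience")
ax.set_zlabel("CPU efficiency")
plt.show()
```

Each protocol lands as a single point in the cube, and the "pick two of three" rule shows up as an empty corner where all three scores would be high at once.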
Can we apply this to the future of remote display protocols?
As we think through these three dimensions of remote display protocol analysis and ranking, we can probably also use them to take some guesses at what might happen in the future. Look at the three dimensions again: bandwidth, user experience, and CPU utilization.
Which of these is most likely to evolve the fastest moving forward? My guess would be "CPU." In other words, if I were to put my money on the way protocols will evolve, I'm going to guess that we'll most easily be able to throw processing power at the problem.
Today, we're probably looking at Teradici for that, since they throw processing power at the problem in the form of a custom ASIC on the remote host and in the client device. But this type of thinking would also lead us to believe that VMware's Teradici-derived PC-over-IP implementation might actually be pretty decent, since they can throw as much CPU (relatively speaking) at the problem as they want.
This line of thinking also probably means that Microsoft is in pretty good shape with Calista, as they'll be able to leverage more and more CPU power (at both ends) moving forward.
If you want to think way far ahead (like maybe 5+ years), think about what the GPGPU trend could do for remote display protocol power. ("GPGPU" is "general-purpose GPU": the idea that GPUs could be adapted to handle more general-purpose application tasks, heavily augmenting or potentially outright replacing today's CPUs. Think of what a VM host that had only GPUs could do for remote display protocol rendering.)