In the older days of server-based computing, we could get away with telling our users that we didn’t support multimedia, and users knew it wouldn’t work so they didn’t even try. Life was simpler then.
But now things are different. First, there’s a lot more multimedia in the corporate world that is used by people for actual business purposes. (BTW, we have over 250 videos available for free at brianmadden.com/videos.) Second, in today’s world we’re trying to deliver complete and fully featured desktops instead of a few tactical apps here and there.
When it comes to multimedia and remote display protocols, there are two ways we can handle it:
- Render the media on the remote host and send it to the client just like any other visual screen element.
- Recognize that the media stream should be something special. Have the remote host pass it down to the client in its original format where the client will render it locally.
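The choice between these two paths boils down to a per-stream decision: can the client decode this format itself, or does the host have to render it? Here’s a minimal sketch of that decision in Python. All the names here (`Client`, `deliver_media`, the codec strings) are illustrative, not real ICA/RDP APIs; actual remoting stacks are far more involved.

```python
# Hypothetical sketch of the per-stream decision a remoting stack makes.
# Names and codec strings are illustrative, not real protocol APIs.

class Client:
    """Represents a client device and the codecs it can decode."""
    def __init__(self, codecs):
        self.codecs = set(codecs)

    def can_decode(self, codec):
        return codec in self.codecs

def deliver_media(codec, client):
    """Return which delivery path a hypothetical broker would choose."""
    if client.can_decode(codec):
        # Path 2: redirect the original compressed stream untouched;
        # the client decodes and renders it locally.
        return "redirect-to-client"
    # Path 1: decode on the host and send the frames to the client
    # as ordinary screen updates (expensive in CPU and bandwidth).
    return "render-on-host"

thin_client = Client(codecs=["h264", "wmv9"])
print(deliver_media("h264", thin_client))    # redirect-to-client
print(deliver_media("theora", thin_client))  # render-on-host
```

The interesting part is the fallback: redirection only works when the client can actually handle the format, which is exactly the downside discussed below.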
Host-based media rendering
Rendering media on the host and delivering it via a general purpose remote display protocol just doesn’t work. Even if you have tons of bandwidth, the protocol engines themselves just can’t render, recompress, and send that much data. (Check out our video from the Qumranet testing of video over ICA and RDP.)
This doesn’t mean that host-based rendering can’t work—it just means it doesn’t work via ICA and RDP. There are other protocols out there, like Teradici’s PC-over-IP and HP’s RGS, that do everything (video included) via host-side rendering. Of course, even though these protocols were purpose-built for this, they consume a lot of bandwidth (certainly more than the native media stream they’re rendering) and typically a lot of host-side CPU. (Or, in the case of Teradici, they require special host-side hardware.)
Client-based media rendering
Fortunately it didn’t take too long for people to realize, “Hey, this host rendering might be kind of crazy. If we already have this media stream in its original compressed form, why not tell the host to leave it alone and to just shove it down to the client in its original form? Then the client can render it locally.”
This technique, commonly called “multimedia redirection” (or “MMR”), has been in ICA for five years and is now in RDP 7 as well. It’s also part of third-party RDP add-ons like EOP and TCX. MMR is great because it doesn’t consume stupid amounts of bandwidth, and the media plays back at native performance since it’s being played locally on the client.
The downside to MMR is that your client has to have the capabilities (both in terms of codecs and hardware capacity) to decode each media stream it hopes to play.
Wyse’s announcement: cheap thin clients with dedicated streaming media chips
The reason we’re talking about server-versus-client media stream rendering is because Wyse just announced a new series of thin clients (the “C” class) that has a dedicated media stream processor on board. This processor (a Via VX855) provides hardware acceleration for up to 1080p playback of common media types, including H.264, MPEG, DivX, and WMV9.
It’s important to note that this Via chip is not a GPU. GPUs have been in thin clients for years. This is more like the chip that’s inside a Tivo that enables it to play back HD video even though the main CPU could barely run a calculator. (Seriously. You ever wonder how your Tivo can record two separate 1080p streams at once, but it takes 6 hours to download the guide? This is why!)
So this Via chip basically lets Wyse take their cheap “S” class hardware and give it the video playback performance of their high-end “R” class devices. (The “C” class devices start at an MSRP of $350, which means you should be able to find them for about $300 on the street.)
The new “C” class devices work in conjunction with Wyse’s TCX extensions for RDP. (One of the extensions is for multimedia redirection, so if a “C” class device receives a redirected media stream that the Via chip can handle, it takes care of it automatically.)
If you decide to stream Windows 7 to your “C” class device, Windows will also recognize the Via chip automatically and start leveraging it. Actually it seems like the only thing that can’t leverage this chip right now is ICA. (Since Citrix’s HDX MediaStream doesn’t know to look for the chip, it will just have the client CPU do the rendering.) But I would think this would be a relatively easy thing for Citrix to support in the future.
The future: more on the client
I really like the concept of a cheap thin client having the capability to render H.264 locally. (I focus on H.264 because so much is moving there: Flash, Silverlight, QuickTime, etc.) But let’s face it: it just makes sense for some things (like media streams) to be rendered directly on the clients. Who wants to burn CPU cycles in the datacenter on things that just increase your bandwidth and degrade your user experience? Spend the cycles where they make the most sense.
Wyse will be demoing the C class devices at VMworld next week. Gabe and I will be there with our video cameras, so we’ll be sure to grab some demos.