Why thin clients and zero clients haven't lived up to "last PC you'll ever buy" hype. (Part 1 of 2)

I've been collecting thoughts on this blog article for a few weeks now. It seems this was quite timely given Brian & Gabe's recent interview with Tom Flynn from HP where they talked about thin clients, zero clients, etc. Here are my thoughts on the topic.

If you've been around the SBC / VDI industry for any length of time, you should know all about thin clients. Thin clients were the devices that were going to usher in the end of the PC industry as we know it. The benefits of a thin client over a full-fledged PC are numerous (in principle), including:

Fewer moving parts = fewer component failures

Not having a spinning hard disk is one major reason why thin clients should have a longer life. Hard disks have one of the highest failure rates of any component in a typical desktop PC. Look at this chart from a Carnegie Mellon University paper on the topic of component failures in PCs published in 2006:

Source: Parallel Data Laboratory - Carnegie Mellon University

While hard drive manufacturing practices have improved since 2006, the rate of failure relative to other system components is still pretty much the same. Swapping the spinning hard disk for an SSD may ultimately improve reliability (though whether it actually reduces failure rates is still being debated), but it certainly won't help the cost of the device. Thin client manufacturers must keep costs down in order to be competitive against the commodity PCs they're attempting to replace. By not having a hard disk, a thin client drops one of the highest-failure-rate components, and it draws less power as well.

Less Power Consumption

Typical thin client devices consume anywhere from 2-10 watts, while typical PC systems consume anywhere from 20-60 watts, so it's pretty clear why an organization would want to standardize on thin clients vs. full-blown PCs: power consumption translates to real cost savings. To make a fair claim that thin clients are more cost effective than PCs on power consumption, you must also factor the back-end infrastructure's power costs into the calculation, but in most cases you can still save power even with the back end considered.
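As a rough illustration of the per-seat savings, here's the arithmetic using the wattage ranges above. The usage hours and electricity rate are my own assumptions, purely for illustration:

```python
# Rough annual power-cost comparison per endpoint (illustrative numbers only).
# Assumptions: 8 hours/day, 250 working days/year, $0.12 per kWh.
HOURS_PER_YEAR = 8 * 250
RATE_PER_KWH = 0.12  # assumed electricity rate in USD

def annual_cost(watts):
    """Annual electricity cost for a device drawing `watts` while in use."""
    kwh = watts * HOURS_PER_YEAR / 1000
    return kwh * RATE_PER_KWH

pc_cost = annual_cost(40)          # mid-range of the 20-60 W PC figure
thin_client_cost = annual_cost(6)  # mid-range of the 2-10 W thin client figure

print(f"PC:          ${pc_cost:.2f}/year")
print(f"Thin client: ${thin_client_cost:.2f}/year")
print(f"Savings:     ${pc_cost - thin_client_cost:.2f}/year per seat")
```

A few dollars per seat per year doesn't sound like much, but multiplied across thousands of endpoints (and before counting reduced cooling load), it adds up — which is exactly why the back-end infrastructure's draw has to be netted against it for an honest comparison.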

Lower Cost for i.m.a.c.

No, I'm not talking about Apple's latest all-in-one PC. i.m.a.c. is an acronym that stands for Installs, Moves, Adds, Changes. Arguably, a thin client substantially reduces the amount of time it takes to deploy an asset, move it from one place to another, swap out a failed unit, etc. However, just because you've made the endpoint device super easy to manage does not mean you've made the desktop management method any better. Implementing thin clients to replace PCs does not eliminate the burden of managing the Windows instances themselves unless you've taken steps to minimize that administration through things like PVS, common images, layering, app virtualization, user virtualization, etc. That being said, there is still time saved in the actual effort required to deploy the asset itself.

"The Last PC you'll ever buy!" Really?

"The last PC you'll ever buy" was a common sales pitch among early thin client manufacturers, and yet it never came true. There are several reasons why:

Relative Immaturity of Remoting Protocols and Fragmentation

In the early days of Citrix WinFrame, Citrix did a pretty efficient job of remoting basic Windows applications. The reason is that a majority of Windows applications were made up of GDI objects painted on the screen, with a handful of bitmap graphics here and there. Animation and video were almost non-existent, and even static graphics were fairly simple in terms of resolution and color depth. Citrix had a technology within ICA that allowed a Terminal Server host to send graphics commands down to the endpoint device as primitives. You can think of GDI primitive remoting like this: let's say a Windows app creates a window with an 800x600-pixel display area, a scroll bar, maximize/minimize/close buttons, etc. When that window is displayed (or painted), it's done using a set of Windows APIs for painting GDI objects. Once those items are painted on the host system, you need a way to get them displayed on your remote client device over your remoting protocol. This can be done using one of two primary methods:


  • GDI primitive remoting: intercept the API call that paints the window and transmit it to the client device, where the local operating system processes the same API call to paint that object in that position.
  • Bitmap remoting: render the screen content on the host and send the resulting image down to the client device.


Bitmap remoting is, for lack of a better term, taking a screen scrape of the host-side system and then sending that graphical image down to the client device to be displayed. GDI primitive remoting turned out to be the best way to display static screen content on a remote system, and it worked very well over the limited-bandwidth connections that were common in the days of dial-up modems.
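To see why primitive remoting was so much cheaper on the wire, compare rough payload sizes for repainting that 800x600 window. This is my own back-of-the-envelope sketch — the actual ICA wire format is both more compact and more complicated than these made-up byte counts:

```python
# Back-of-the-envelope comparison: remoting one 800x600 window repaint.
# These byte counts are illustrative, not the actual ICA wire format.

# Bitmap remoting: ship every pixel. At 8-bit color (256 colors, common
# in the WinFrame era) that's one byte per pixel, before any compression.
width, height = 800, 600
bitmap_bytes = width * height * 1  # 8-bit color depth

# GDI primitive remoting: ship the drawing command instead. Something like
# "draw a window frame at (0,0) sized 800x600 with scroll bar and
# min/max/close buttons" fits in a small command record -- say an opcode,
# two 16-bit coordinate pairs, and some style flags.
primitive_bytes = 2 + (4 * 2) + 4

print(f"Bitmap payload:    {bitmap_bytes:,} bytes")
print(f"Primitive payload: {primitive_bytes} bytes")
print(f"Ratio: roughly {bitmap_bytes // primitive_bytes:,}x more data for the bitmap")
```

Over a 33.6 kbps dial-up link, an uncompressed 480 KB repaint would take close to two minutes to transfer; a command record of a few bytes is effectively instant. That gap is why primitive remoting worked so well in the modem era.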

The ability to do GDI primitive remoting only worked on a client that could understand painting GDI objects. Windows-based thin clients could do this; Linux thin clients could not, and therefore Linux thin clients would always receive a host-rendered bitmap rather than a GDI primitive. This took more bandwidth on the wire and didn't look quite as fluid as GDI primitive remoting. In my opinion, this is why Linux-based thin clients never really took off early on. Having multiple client operating systems with different rendering capabilities also leads to fragmentation in terms of what is possible on which operating system, which creates a lot of confusion among customers about which remoting protocol capabilities they will actually be able to take advantage of. This fragmentation is present in areas like printing support, too.

Application & Web UI Complexity

Application and web UIs in the early days were quite simple, so it was often easy to achieve very efficient rendering of ICA traffic with very little bandwidth. However, in the early 2000s more and more of the web started to contain animated GIFs, video, Flash, etc., and years later technologies like Silverlight and HTML5 continued the trend. Due to this graphical richness, the capabilities of the endpoints needed to evolve to cope with the demands of a much higher level of graphics performance. Unfortunately, thin client CPUs weren't up to the task, and since most devices lacked a GPU there was no way to offload the graphics processing either. Thin clients were hardly the last PC you'd ever buy.

Commoditization of the PC

When PCs were selling for $1000-$1500 and thin clients were available for around $600-$1000, it seemed quite compelling to go the thin client route and get rid of those fat, clunky PCs. However, something remarkable happened to the PC market: it quite literally collapsed. PCs became a commodity, and with that commoditization the market for thin clients eroded. It's not uncommon today to find a full-blown business PC for around $400-$600. The thin client market is all over the place: some thin clients are available for as low as $100-$200, but the premium thin clients are still in the $400-$800 range.

Given that a PC is now quite cheap and desktop management tools are quite good, it becomes a difficult struggle to recommend thin clients in the face of a well-managed desktop. What doesn't help is that thin clients still require management themselves. They still run software, and that software has to be updated for security vulnerabilities as well as to support the latest and greatest enhancements from Citrix and other vendors. The process of updating thin client software is, of course, completely different from the process of updating software on existing desktop PCs, so unless you successfully switch 100% of your users to thin clients, you end up with a fragmented management strategy where you have to keep two systems management products in place. Also, some thin client manufacturers charge additional money for their thin client management platform.
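Using the hardware price ranges above plus assumed per-seat management and power costs (the management and power figures here are entirely my own, for illustration only — and the thin client's back-end VDI infrastructure isn't counted), the comparison is far less lopsided than the vendor pitch suggests:

```python
# Simple multi-year cost-per-seat sketch. Hardware prices come from the
# ranges in the article; the management and power figures are assumed.

def cost_per_seat(hardware, annual_mgmt, annual_power, years):
    """Total cost of one endpoint over its service life."""
    return hardware + years * (annual_mgmt + annual_power)

# A commodity business PC: cheap to buy, costs more to manage and power.
pc = cost_per_seat(hardware=500, annual_mgmt=100, annual_power=10, years=4)

# A premium thin client: comparable purchase price, plus its own management
# tooling (some vendors charge extra for it) and far lower power draw.
thin = cost_per_seat(hardware=600, annual_mgmt=60, annual_power=2, years=4)

print(f"PC over 4 years:          ${pc}")
print(f"Thin client over 4 years: ${thin}")
```

Under these (debatable) assumptions the thin client still wins, but by a margin small enough that running two parallel management stacks for a mixed fleet can easily erase it.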

Thin Clients are NOT future proof

One of the long-standing myths about thin clients is that they are "future proof," with a 7-10 year life vs. a PC's 3-4 years. Ask the people who bought the Wyse Xenith how they felt when the Xenith Pro was launched. The bottom line is that a thin client is only as future proof as the components inside it. If the thin client manufacturer cuts corners and puts a low-performance VIA, ARM, or Intel chip in the system, or a low-end graphics processor, chances are that thin client has no greater lifetime than the $300 Dell PC you can buy online. In reality, the Dell PC probably has a longer life from a capabilities perspective.

Enter the HDX Ready Thin Clients

Given the concerns I outlined above, Citrix embarked on a marketing campaign in 2009 that included a designation called HDX Ready, denoting thin clients that met a minimum set of system specifications to deliver a good user experience with XenDesktop 4. Citrix began calling these devices "Desktop Appliances" to distance them from the perception of the thin client as an underpowered device. All the HDX Ready certification really means is that the device has enough CPU/GPU power to support a modern "high definition" desktop experience; again, this was really a marketing campaign about separating HDX Ready thin clients from the legacy bunch that couldn't deliver that experience. The biggest issue plaguing these "HDX Ready" thin clients depends on your definition of HDX. The important thing to take away is that "HDX is not HDX is not HDX": several of these "HDX" thin clients support only a subset of the features present in HDX. Check with your vendor to see whether they support UPDv3 printing; since that solution is based on a Windows EMF driver, chances are a Linux-based thin client won't support it. Maybe that's a big deal to you, maybe not. What about Flash redirection, etc.? Do your homework carefully. Bottom line: "HDX Ready" amounts to little more than "nothing to see here, folks."

Enter the Zero Clients

What's a zero client, you ask? Ummm, that's kind of a difficult thing to answer because it sort of depends on whom you ask. In reality there are two types of zero clients out there today.

True Zero Client

A hardware thin client device that does not contain firmware (a.k.a. software) that you need to update. One example in this class is the Pano Logic device.

Pseudo-Zero Client

A hardware thin client device that does contain firmware, but doesn't use the traditional firmware update procedure; instead, it receives its firmware via PXE. Wyse introduced the Xenith Zero Client, which falls into this category, at Citrix Synergy in 2010. This doesn't technically qualify as a zero client, since there is still a software stack on the device that you need to update. However, the ease with which you can update that software certainly makes it simpler than the traditional firmware update methods that have made thin clients difficult in the past. One big risk you need to accept when adopting this form of zero client is that your thin client infrastructure is now heavily dependent on the availability and health of your PXE services for its updates.

Enter the Citrix HDX SoC (System-On-Chip) Thin Clients

Lots of clients and associates have been asking me for my thoughts on the recently announced HDX SoC design. That will have to wait for Part 2...

Stay tuned..