You know why I like blade workstations? Because they’re predictable.

Can you believe that it’s been over five years since we wrote our first article comparing TS-based SBC solutions to workstation-based SBC solutions? Of course in those days we were talking about PC blades in the datacenter rather than VMs, but the idea was the same. (Actually, we could probably go back even further. In 1998 I installed a 32-blade Cubix rack with 386 processors running Windows 95 and PCAnywhere. Each blade had its own modem, and our “connection broker” was the phone switch that routed incoming calls to an open line.)

Anyway, once hardware virtualization became more popular, the blade PC concept morphed into the VDI concept, and blade PCs faded into obscurity. In fact, if you mention “blade PC” to most people in our industry, they would probably think, “Why would anyone buy a blade PC? VMs are so much cheaper.”

That said, blade PCs do have some advantages, even in 2009.

The first advantage is that a blade PC is an easy concept to understand. You can go to any CIO and say “blade PC” and he or she will know exactly what you’re talking about. Try doing that with “VDI,” “desktop virtualization,” or “server-hosted desktops.”

And then there’s the fact that in today’s world, a physical desktop (like a blade) can still have a virtual disk that’s mounted on-demand. When blade PCs first came out years ago, you either had to (1) give each blade its own hard drive, and either force users to share images or assign blades to users one-to-one, or (2) use expensive SANs that allowed the blades to mount different LUNs on demand.

But today we have things like Citrix Provisioning Server, which can integrate with the connection broker to make boot-time decisions about which disk image a device should mount, and we have Windows 7, which can natively boot from a VHD file instead of a physical disk partition.
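As a rough sketch of what that native VHD boot looks like in practice, it comes down to a few BCD store entries via `bcdedit` (the VHD path here is hypothetical, and the GUID placeholder must be replaced with the one `bcdedit /copy` prints):

```shell
:: Run from an elevated command prompt on Windows 7.
:: Copy the current boot entry as a starting point; bcdedit prints
:: the new entry's GUID, which you substitute for {guid} below.
bcdedit /copy {current} /d "Windows 7 (VHD boot)"

:: Point both the boot device and OS device at the VHD file
:: (hypothetical path; [C:] is the volume holding the VHD).
bcdedit /set {guid} device vhd=[C:]\images\win7.vhd
bcdedit /set {guid} osdevice vhd=[C:]\images\win7.vhd

:: Let the loader detect the correct HAL inside the VHD.
bcdedit /set {guid} detecthal on
```

On the next reboot, the boot menu offers the VHD entry, and the blade runs entirely out of that image file.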

On top of all of that, though, I think the biggest advantage of blades in today’s world is this: they’re physical hardware of a known quantity. If you buy 50 blades, you know you can support 50 concurrent users. No more. No less. Compare that to VDI solutions, where everyone’s trying to calculate how many users per core they can get, there’s a user experience curve, the vendors are saying one thing and the consultants another, and you really don’t know who to believe or how many servers to buy. You can avoid all of that by going with blades on the back end instead of VMs.
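The sizing arithmetic above can be made concrete with a toy calculation (all the numbers here are illustrative assumptions, not vendor guidance):

```python
import math

def blades_needed(users: int) -> int:
    """One blade per concurrent user: capacity is a known quantity."""
    return users

def vdi_servers_needed(users: int, cores_per_server: int,
                       users_per_core_low: float,
                       users_per_core_high: float) -> tuple[int, int]:
    """VDI sizing hinges on a contested users-per-core estimate,
    so it yields a (best-case, worst-case) range, not one answer."""
    worst = math.ceil(users / (cores_per_server * users_per_core_low))
    best = math.ceil(users / (cores_per_server * users_per_core_high))
    return best, worst

print(blades_needed(50))                    # 50 blades, no debate
print(vdi_servers_needed(50, 8, 0.5, 2.0))  # (4, 13): a 3x spread
```

With blades the answer is a single number; with VDI, even a modest disagreement about users per core (0.5 vs. 2.0 here) swings the server count by a factor of three.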

Sure, it’s possible that your blade solution might be a bit more expensive up front, but there are a lot of people out there who are willing to pay a bit more to get a guaranteed known quantity. And if you go with blades, you don’t have to worry about one power user taking down another user, or one app breaking everything for everyone else.

The bottom line is that I’m not saying you should never use VDI, or that blades are always better than VDI. I’m just saying that I used to think blades were crazy in today’s VM-based world, but honestly, I can see several cases where blades make sense over VDI. And not just the cases where you have to use blades, like 3D graphics apps; I mean cases where a customer could choose blades or VDI, and they would choose blades.

Join the conversation



How many CIOs have you spoken to in the last year who would even consider this at meaningful scale?

I know many CIOs who understand all the models and will tell you that blade PCs are not something worth investing in. Learning to make virtualization work is more important for their entire organization. Spending a little more on blade PCs just because your organization can't figure out capacity management makes no sense.


CIOs are not so dumb, you know, mate... Their priorities are just different from us techie guys.

PC blades, VDI, and SBC are just methods to ultimately bring the data back to the datacenter.



Well, first up, let me state that I am biased, as an HP employee who installs blade PCs and blade workstations.

Having said that, when folks are prepared to look beyond the traditional PC but still need/want/insist on the user experience being almost indistinguishable from a normal PC, then blades with RGS as the remote protocol deliver a lot more than most other solutions promise.

When you start being asked to deliver high-end graphics applications in mining, oil and gas, animation, and similar verticals, I feel pretty confident that the solutions we offer can deliver the customers' requirements today.

My 2 cents, for what it's worth, ;-))


Dave Caddick


True indeed, Dave. But the PC experience, with Windows 7, is changing, I feel, and the abilities of the hardware and software will one day soon align.

"When a tech product loses interest, it becomes useful"

I am a big fan of VDI with content-redirected rendering via products like RES. The future looks bright; the future's VDI.


That said, Dave, I myself am trying to reconcile the fact that I don't feel comfortable at the moment implementing VDI for the average user without RES, plus AppSense, plus Cisco WAAS, plus, as you say, a special protocol enhancer like Teradici or RGS...

Is VDI ready now if it needs all of the above so-called "enhancers"? I recall the days of installing WinXPe and walking away (outside of printing), but anyway, we all know that.


I think the article says more about the difficulties of VDI than it does about the strengths of physical machines. But it is all the same: centralizing processing power and data in the data center.

I work 24x7 in a VDI environment so I understand my perspective is influenced, but VDI is only as unreliable as your infrastructure allows it to be.

Selling the concept of VDI is always a challenge.  We sell remote computing as a service and tend to not get into the details (unless asked directly) about how/what we virtualize with.

Again, our case is somewhat unique, I am sure.


Coming from an R&D background with blade PCs, I can say I would not even consider them.

Workstation blades are a completely different story, however.

Personally, I think if I had a highly visible remote desktop project that absolutely needed to be a success story, I would go the physical route with WORKSTATION BLADES. Blade PCs are pretty much not worth anyone's time. I have seen both in production, as well as various VDI production environments. Bottom line: blade PCs are a waste of everyone's time.


Perhaps a biased opinion also (as I work for Amulet Hotkey), but what we constantly find is that there is a requirement for both solutions. Remote workstations (on a one-to-one basis) are very necessary for certain environments. We find this especially true in the financial sector, plus other sectors where performance is critical (both video and compute). This requirement is not going to go away anytime soon. Although we can remote PCs of any form factor, blades are the most frequently used for large projects. If I want to remote 500 standard workstations, then the storage of these PCs can be a challenge. Squeeze those workstations into a blade form factor, and the advantages are obvious.

The benefit of PCoIP in this instance is that the desk portal device is an unmanaged piece of hardware; it's not running an OS, and even for quad-screen use it draws very minimal power. As it's IP based, the solution is flexible and can address DR situations and remote offices.

PCoIP is ideal for this one-to-one task, as it provides unrivalled performance and operates at a purely hardware level. The advantages of this are many: no extra drivers or software on the computers, no load on the CPU, and the quickest encrypt/compress speeds. To integrate, you need not even change your core build or provisioning methods. There are no known unsupported USB devices, and all software, codecs, etc. work as with a local PC. This also means it's entirely OS agnostic (we even have remote blades running Win7 RC 64 here). From a security perspective, you can even separate the PCoIP network from the corporate LAN, meaning the network you are most concerned about need never leave the data centre.

Remote blade solutions such as the DXM600 are there to cater for the users who need their PC to perform exactly as if it were under their desk, but with all the advantages of remoting. VDI, however, is still a definite necessity for those remaining users who don't quite require that same performance, and often this is a high proportion of users in a large corporation. Later in the year PC-over-IP will be integrated into VMware View, meaning VDI desktops can be delivered using the protocol. This will provide a higher level of performance than current VDI solutions and should address those back-office/middle-office users more effectively.


I've installed both PCoIP and Cubix's "PCoPCIe" (PCIe over fiber). Both technologies have their strengths and weaknesses. If I'm planning a 3D CAD/MCAD/CAE rollout and file sizes are going to be large, I'm going Cubix over fiber for some obvious reasons. If files are not so large, or resolution/frame rate support isn't so important, PCoIP is cost effective. Both technologies achieve the same objective: centralizing all of the hardware and software assets for greater manageability and support.

I understand that Cubix now offers solutions based on both technologies, which may be the way to go for larger projects where user profiles can range from typical office applications with 1-2 displays per user, to the power-users requiring 3 or more displays.