Are blade workstations an anachronism in today's virtual world?

ClearCube is a desktop virtualization company that sells turn-key solutions based on hardware zero clients connecting to a combination of VDI desktops and physical blade workstations.

Recently I had a conversation with ClearCube CEO Randy Printz. But in today's world of virtual desktops, is there still a need for blade-based solutions, or are they a relic of the past?

ClearCube has been making blade workstations and clients with chip-based PCoIP for quite a while now, since before Teradici partnered with VMware, back when there was no software-based PCoIP. These days, much of ClearCube's business is with high-security industries like finance, government, and defense: areas where the physical separation of the back-end blades makes a lot of sense (and is often required by law). They have specialized products for these industries, such as blades with physical jumpers that lock out USB drives on the client end, and a multi-network client device with a built-in KVM.

Looking forward, ClearCube is anticipating increased sales of their zero clients thanks to prices that should be coming down soon. But when thinking about blade-based desktops versus VM-based desktops, I wonder whether blades really have a chance in the long term. Virtualization keeps getting better and more cost-efficient, and in fact two more historically "blade only" use cases have been knocked out of the picture by recent announcements.

The first use case is environments requiring high-end graphics. Doing this with a remote desktop used to require a blade workstation with a physical GPU, although that's starting to change now. At Citrix Synergy last May, Citrix showed a preview of HDX 3D Pro that allows a physical GPU to be passed through the hypervisor and exposed natively to a virtual machine. The relationship of GPUs to machines is still 1-to-1, but now at least they can be virtual machines. More recently, VMware announced that they will support NVIDIA's Virtual Graphics Platform, which will also result in GPU-to-VM pass-through.
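To see what that 1-to-1 constraint means for capacity planning, here is a minimal sketch (my own illustration, not any vendor's API): with pass-through, each graphics-enabled VM pins one physical GPU, so a host's graphics-VM capacity is capped at its GPU count.

```python
# Hypothetical model of 1-to-1 GPU pass-through: each graphics VM
# gets a dedicated physical GPU, so capacity equals the GPU count.

class GpuHost:
    def __init__(self, gpu_count):
        self.free_gpus = list(range(gpu_count))  # physical GPU slots
        self.assignments = {}                    # vm_name -> gpu index

    def attach_vm(self, vm_name):
        """Pass a dedicated GPU through to a VM; fail when none are free."""
        if not self.free_gpus:
            return None  # host is out of GPUs; this VM must land elsewhere
        gpu = self.free_gpus.pop()
        self.assignments[vm_name] = gpu
        return gpu

host = GpuHost(gpu_count=2)
print(host.attach_vm("cad-vm-1"))  # gets a dedicated GPU
print(host.attach_vm("cad-vm-2"))  # gets the other GPU
print(host.attach_vm("cad-vm-3"))  # None: the 1-to-1 mapping caps capacity
```

In other words, pass-through changes *where* the GPU-backed desktop runs (a VM instead of a blade), but not how many of them a given box can host.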

The other recent change in the “blade versus VM” game is that using PCoIP hardware encoding for the remote desktop used to be limited to blade environments with a physical PCoIP add-in card. But at VMworld last year, Teradici announced the Apex 2800 PCoIP server offload card. This single card can offload up to 300 million pixels per second of PCoIP encoding, typically enough for 50 or 60 users to share the same card. Better yet, the offloading is seamless and dynamic—there’s no need to log off of a VDI session in order to log on to a blade.
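The arithmetic behind that 50-to-60-user figure is worth a quick check. Using only the numbers from the article (the even-split assumption and the 1080p frame math are mine), each session's share of the card is a few full-screen updates per second, which works because typical desktop sessions repaint only part of the screen at a time:

```python
# Back-of-the-envelope math for the Apex 2800's shared encode budget.
# 300 Mpix/s capacity and 50-60 users are figures from the article;
# the even split across users is an illustrative assumption.

CARD_MPIX_PER_SEC = 300            # total PCoIP encode offload capacity
FULL_HD_MPIX = 1920 * 1080 / 1e6   # ~2.07 Mpix per full 1080p frame

def per_user_budget(users):
    """Mpix/s each session gets if the card is shared evenly."""
    return CARD_MPIX_PER_SEC / users

for users in (50, 60):
    mpix = per_user_budget(users)
    frames = mpix / FULL_HD_MPIX   # full-screen 1080p updates per second
    print(f"{users} users: {mpix:.1f} Mpix/s each, "
          f"~{frames:.1f} full 1080p frames/s")
```

At 50 users that is 6 Mpix/s per session, roughly three full-screen 1080p repaints per second; plenty for office workloads, where only a fraction of the screen changes per frame.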

Of course, somebody will have to put all this together, and in the meantime maybe a blade is still the best way to do it. And we still need blades for situations where physical separation is mandated for security reasons. But now that we have GPU pass-through and hardware PCoIP offload capabilities, are blades becoming an anachronism, or am I being too quick to dismiss something that’s still useful?


Join the conversation



I agree 100%. The blade use case has always been for performance workstations. But virtualization is really good right now; I agree that the vGPU is the only thing holding it back. Once we get that, why would anyone use a blade workstation?



The great thing about blade workstations is their ability to host (and power) monster GFX cards... Hardly any server-class blades or bricks can do this (today)...

Look at HP: their recommended and certified RemoteFX offering is a blade workstation, just running a server OS instead of a client one. OK, this isn't 1-to-1 GPU pass-through, but it's a new use for blade workstations. No reason this couldn't be used for the HDX or NVIDIA offering for 1-to-1 either.

As standard servers become GFX-friendly I think this will change, and blade workstations may become a very limited use case. We have some apps that require s**t loads of cores and memory in addition to bitchin GFX cards that wouldn't fit in (or be powered by) most servers... Apps that hammer disks in terms of IO, etc. At what stage does it become "worth it" to try to virtualize that sort of workload? When the tin can only handle the one instance of the OS anyway? One day, maybe (I hope)...

An example of this could be an open-access area, say with 500 workstations, where everyone can happily use standard RDS/VDI, but out of 500 CCU users at least 1 or 2 need the specialist application that needs the horsepower... I'm buggered if I'm going to go to the effort (with today's tools) of trying to virtualize that when it's simpler (and maybe cheaper, given license costs) to just stick in a blade workstation.


If you think about it, VDI has traditionally been sold to the 90% (the workloads it can easily handle). The 10% were intended to be put on blade systems. If we really get into who the 10% are, I think we will see what we expect to see: AutoCAD users, graphic artists, the financial sector, government. Although I think the vGPU will make inroads on these 10%, I don't think it's the end of blade workstations. I think the legal challenges, security challenges (although I see this less as a technical security challenge and more as a comfort level), and in the end performance challenges will keep these workstations around for the long run.

If you think about it, the reason most companies choose blade workstations has nothing to do with reducing cost; it's about increased security. The concept of shared resources is not something some of these companies like to hear: sharing resources and editing the next X-Men movie do not fit in the same sentence. And in the financial sector, security is the highest concern; although I know from experience these machines could easily be virtualized, as soon as the security person looks at the model it gets highly scrutinized.

So I still think these companies have a healthy future. For one, the blade is here to stay; for two, the vGPU is not the only thing keeping the 10% from being virtualized. Maybe we should change our pitch, though: maybe VDI is for the 92% now. :)



Thanks for posing the question. It is one that we ask ourselves and our customers often at ClearCube. While our VDI solutions are driving the high-growth segment of our business, we're continuing to see double-digit growth in our blade workstation sales. You and Gunnar are right that security and GPU performance are the top requirements that fuel adoption of blades. However, we also have a lot of customer use cases where a segment of the end-user base uses legacy applications that were developed for a desktop chipset, or has a set of peripherals that hit performance limitations in a VDI environment. In these instances we'll help them deploy VDI for the majority of users, and then either set up a pool of blades to support the outlier use cases when needed or provide dedicated blades to the persistent power users. The elegant aspect of this combined VDI-and-blade environment is that the same zero client can be used to access either a VM or a blade, and the entire environment is brokered and managed by VMware View. We're excited about the prospects of vGPUs for our business, especially NVIDIA's Virtual Graphics Platform and Teradici's Apex 2800, but with that said, we don't see the extinction of the blade workstation in the near future.

Best Regards,

Jeff Fugitt

Vice President, Marketing

ClearCube Technology          


Anyone who says VDI will meet 100% of an Enterprise's needs is selling snake oil.  For the foreseeable future, there will always be one-offs.