What’s the future of Teradici? They bet on blade PCs, but the world went to VDI. Now what?

Teradici lived in relative obscurity until late 2008 when VMware announced they were licensing Teradici's PC-over-IP (often shortened to PCoIP*) remoting protocol for inclusion in View 4.0 (which was released last November) to compete against Citrix HDX. Now that VMware's version of PCoIP has been out for a while, what's the deal with Teradici? What does their future look like as a company? Was their deal to license PCoIP to VMware the most brilliant or the most stupid thing they could do?

To understand this, we have to take a step back and look at how Teradici and PCoIP came to be.

The origins of Teradici and PCoIP

Since most people today think of "VMware View" when they think of "PCoIP," that means that most people associate PCoIP with VDI. But VDI didn't exist back in 2004 when Teradici was founded. In those days the only Windows flavors of server-based computing were Terminal Server-based solutions like Citrix MetaFrame. Of course the reasons people used Terminal Server in 2004 were very similar to the reasons people use VDI today: Management, Access, Performance, and Security. The problem back then was that many applications were not compatible with Terminal Server—both for performance reasons and for lack of multiuser support.

Even though the concept of VDI as we know it today was still a few years away, a lot was happening in 2004 that would ultimately lead to its creation. First was that the blade form-factor was starting to catch on as being more efficient and flexible than rack-mount servers, and second was that Windows XP was starting to catch on and included built-in remote desktop capabilities.

It didn't take long for people to realize they could combine the concept of server-based computing with blade servers and Windows XP to create a "blade PC" or "workstation blade." (Check out what I wrote about Blade PCs back in April 2005.) Blade PCs offered the management and security benefits of server-based computing while allowing financial and engineering users to have the power of individual workstations running apps that wouldn't work on a Terminal Server. There was even a Blade PC industry consortium and conferences dedicated to the blade PC concept.

Probably the biggest thing that sucked about blade PCs in 2004 was that the remoting protocols weren't good enough to deliver the graphical experience that customers needed (not to mention the higher resolutions and multiple displays that were working their way into the mainstream). Ironically the reason people used Blade PCs instead of Terminal Servers was often due to intense application requirements, and these were the same applications that the remoting protocols of 2004 couldn't deliver. D'oh!

So back then there was the perception of a huge opportunity for someone to come along and create an amazing remoting protocol that would rock for intense graphics. Microsoft wasn't going to do it because they hadn't yet realized that RDP could be strategic for them. Citrix wasn't going to do it because they had the Terminal Server-based MetaFrame franchise to protect and they didn't see the blade PC as being serious competition. So who did that leave? Probably the only serious effort was from HP, since as a manufacturer of both blades and thin client devices, they wanted to see workstation blade solutions succeed. HP developed their own remoting protocol called "RGS" which was aimed towards the higher-end remote graphics market. RGS was entirely software-based and worked pretty well, although it required LAN connectivity to deliver a true local-like experience.

This is the environment into which Teradici and PCoIP were born. The founders of Teradici believed in the promise of the blade PC solution, and they felt that if they could create a purpose-built protocol for remoting the entire Windows desktop experience—graphics, USB, multimedia, everything—then they'd have a leg up on everyone else. Teradici also felt that the only way they could get the performance they needed was to build hardware chips that would be used in pairs—one in the remote host and a second in the client device—which would handle all the encoding and decoding of their protocol.

Teradici's business model would be that they would just design and build the chips. Then they'd sell them to blade and thin client makers for inclusion in their own devices.

A lot has been written over the past six years about whether Teradici's decision to create a chip-based solution (as opposed to a software implementation) was a smart move. What's important to remember is that in 2004, Teradici designed PCoIP for blade PCs. And in a world where each user had his or her own physical blade, then adding a special chip to the host wouldn't be a problem. (The host blades used for blade PCs were already evolving to be different products than server blades. For example, workstation blades typically needed less storage and more graphics capabilities than server blades, so if a customer was going to add a PCI graphics card to a blade, then choosing one that also had a PCoIP chip in it would be no problem.)

The blade PC's future is disrupted by the hypervisor

Everyone knows how disruptive hardware virtualization was to server, storage, and OS vendors. But the hypervisor screwed up the plans of lots of little niches too, like the blade PC segment. Right as blade PCs were starting to gain some momentum and as companies were starting to solve the last problems of connection brokers and graphics remoting, VMware was offering a serious server virtualization platform as a way to consolidate and economize all those underutilized servers in the world's datacenters.

So 2006 saw the emergence of the "virtual" blade PC (a.k.a. "VDI") with all the benefits of the blade PC but with the advantages of VM-based PCs instead of blade-based PCs. For Citrix and Microsoft—who had been largely ignoring blades up to this point—this meant nothing. To VMware—who had ignored the blade PC segment—this meant a new opportunity. And for Teradici—who bet the company on blade PCs that each ran on their own physical blades—this meant some sleepless nights.

Not only had Teradici decided to build their blade PC remoting protocol as microchip hardware instead of software, they decided to build it with ASIC-based microchips. To most people, a chip is a chip. It's little and black with metallic legs, and if you crack it open you'll find innards that look cool under a microscope. But not all chips are created equal. In the microchip world there are several different types of chips, two of which are the ASIC and the FPGA.


  • The ASIC (which stands for "application-specific integrated circuit") is probably what most people think of when they think of how microchips are made. A full design is created first, and then a custom chip is mass produced for that exact design. For the purposes of this story, the ASIC is 100% custom. The chip can do exactly what the designers want in any way they want, and it can get the best performance from the smallest package with the least power. The downside to the ASIC is that it's expensive to produce (due to all the testing, since you don't want to find a bug in the hardware that would require a redesign) and there's a long turnaround time, typically 24 months from the time the design begins until the time the finished chips start coming off the assembly line.
  • The FPGA (which stands for "field programmable gate array") is an interesting take on the chip. Since all microchips are a series of millions of very simple logic gates (made up of transistors), the FPGA is a sort of generic chip package that can act as a template which can be customized to a particular job after the chip is built. FPGA makers produce template chips with millions of gates and pathways and everything all ready to go, and then the FPGA is literally "programmed" by a customer. (Or, more appropriately, the gate design and electrical connections are "burned" into the pre-built template chip which can then act as the chip was designed). The nice thing about FPGAs is that they can be programmed quickly, so you could have engineers who finalize their designs in the morning and come back after lunch to plug their new chips into the devices for testing. (Think of an FPGA kind of like a breadboard on a chip.)


The ASIC-versus-FPGA differences can be visualized like the differences between a stamped DVD (the ASIC) and a blank burnable DVD (the FPGA). The stamped DVD is better for high-volume mass runs and can be customized for different formats and with special capabilities, but it takes a few weeks to make and has higher fixed costs. Burned DVDs can be made in just a few minutes but with less flexibility for the creator.
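The "programmable" part of an FPGA is easier to picture with a toy model. Real FPGAs are built from thousands of generic lookup-table (LUT) cells, and "burning" a design just means loading truth tables into those identical cells. Here's a minimal Python sketch of that idea (an illustration of the concept only, not real FPGA tooling):

```python
# Toy model of an FPGA lookup table (LUT): "programming" the chip
# just means loading a truth table into a generic, pre-built cell.
# This illustrates the concept; it is not real FPGA tooling.

class LUT2:
    """A 2-input lookup table: the basic configurable cell."""
    def __init__(self, truth_table):
        # truth_table maps (a, b) input pairs to an output bit
        self.table = truth_table

    def __call__(self, a, b):
        return self.table[(a, b)]

# "Burn" two different designs into identical generic cells:
and_gate = LUT2({(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1})
xor_gate = LUT2({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0})

# Same hardware template, different behavior, reprogrammable in
# minutes. An ASIC's gates, by contrast, are fixed at the factory.
def half_adder(a, b):
    return (xor_gate(a, b), and_gate(a, b))  # (sum, carry)
```

Swap in different truth tables and the same "cells" become a different circuit, which is exactly why an FPGA design finished in the morning can be tested after lunch.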

Teradici's PCoIP chips are ASICs, not FPGAs. This is almost certainly due to the fact that the Teradici chips are highly specialized, including USB controllers, video inputs, Ethernet controllers, firmware, audio controllers, etc., all in a single chip. It's unlikely that a single FPGA template chip exists with the right parts to do everything Teradici requires at the speed they need it. In many ways the specific chip architecture that Teradici chose doesn't matter, but in the context of this article it's a big deal, since going the ASIC route probably adds two years to the product development lifecycle of the Teradici chip products. Then on top of that you have to keep in mind that Teradici doesn't actually build any devices—they just supply the chips. So once their chips are available they're sent to other manufacturers who have to do all their own design, testing, and manufacturing, and all told Teradici is probably looking at a three-year cycle from development through shipping for products that use their hardware chips.

The PCoIP evolution

The good news for Teradici was that when the first products that incorporated PCoIP chips started appearing in the middle of 2007, people generally saw that they did what Teradici said they could do. The bad news was that by this time people were already starting to ask Teradici questions about how these chips could be used in VDI environments. Teradici's answer? They can't.

The best Teradici could do was to say that they were working on host chips that could support multiple clients, but that wasn't revealed until late 2008, which puts that product's availability back to 2011.

At this point it's easy to see how Teradici and VMware got together. On one side you have VMware, a company with a VDI solution in need of a remoting protocol, and on the other you have Teradici, a company with a remoting protocol in need of a market. At first glance it seems like a match made in heaven. But reality is a hard mistress. VMware's whole company is about diminishing the value of proprietary hardware with commodity software, and Teradici is all about using proprietary hardware to solve a problem that's approaching "solved" status with commodity software.

What could possibly go wrong with this plan? ;)

VMware + Teradici =

As I wrote in the intro to this article, VMware and Teradici teamed up in 2008 to announce that they were going to work together to build a software-only implementation of Teradici's PCoIP remoting protocol that VMware would include for free in a future version of their VDI product. From a business standpoint this makes a lot of sense. But from a technical standpoint... yikes!

Imagine the challenge of taking a remoting protocol that was designed from Day 1 to run on custom hardware and porting it over to software? Sure, they can make it "work," but will the physical server resources exposed to the VM be able to deliver the experience that PCoIP has with the chip-based solutions? Will they be able to do it without taking too many server resources which would severely impact user density? (After all, you don't want a situation where a VDI host server can run 50 users via the RDP protocol but only 25 users via PCoIP.) And finally, is the actual PCoIP protocol flexible enough to work via VDI's "new" requirements, such as via WAN connections and with the various network accelerators?
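The density question is really just arithmetic, and it's worth spelling out because halving density doubles the per-user hardware cost. Here's a back-of-envelope sketch using the illustrative 50-vs-25 numbers from the paragraph above (the server cost is a made-up figure, not a benchmark):

```python
# Back-of-envelope user-density math using the illustrative numbers
# above (50 RDP users vs. 25 PCoIP users per host). All figures
# are hypothetical assumptions, not measured benchmarks.

def cost_per_user(server_cost, users_per_server):
    return server_cost / users_per_server

server_cost = 10_000  # assumed hardware cost per VDI host, USD

rdp_cost = cost_per_user(server_cost, 50)    # 200.0 USD per user
pcoip_cost = cost_per_user(server_cost, 25)  # 400.0 USD per user

# Halving the density doubles the per-user hardware cost:
print(pcoip_cost / rdp_cost)  # 2.0
```

That 2x multiplier is why "how many users per host?" is the first question every VDI customer asks about a new protocol.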

After more than a year of development, VMware shipped View 4.0 last November with the software version of PCoIP built-in. So the answer is "yes," the engineers can get the protocol to work in software. And as Gabe and I learned during our Geek Week: VDI Challenge, yes, it is possible to get the same experience with the software-based implementation of PCoIP versus the hardware-based solution. (Although for our Geek Week: VDI Challenge testing, we had zero latency, unlimited bandwidth, and just a single user on our host server.)

In the real world it seems that the software implementation of PCoIP isn't as flexible as RDP or HDX. If you do some forum searches and talk to customers who have evaluated the software PCoIP that ships with View, they report that enabling PCoIP does have an impact on user density (i.e. fewer users per server with PCoIP turned on) and that it doesn't work as well when several users share a WAN connection. (For the record, VMware responds by suggesting that it's not an apples-to-apples comparison, since people typically try to push PCoIP harder than RDP, which leads to more intensive use resulting in fewer users per server. They also point out that while PCoIP might consume more host resources, it delivers a better experience. So if user density is more important than user experience to you, then go ahead and continue using RDP.)

It's also clear that PCoIP was never designed to be used across WAN connections. (I guess that's obvious since Teradici only released a firmware update that added WAN support a year ago.) Even so, PCoIP suffers from the fact that it's encrypted end-to-end, a nice plus for a LAN but a problem on the WAN since it means that WAN accelerators can't peek into the PCoIP packets to compress, cache, or re-prioritize specific elements. (Besides, most WAN connections are encrypted with some kind of SSL-VPN anyway; it's not like customers need that from their remote display protocol vendor.) PCoIP is also UDP-based instead of TCP-based, a characteristic that's also great for the LAN but not good for WAN scenarios. This means they can't easily incorporate the various client-side rendering components that they licensed from Wyse.
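The UDP choice makes sense for a pure pixel-streaming protocol: if a frame is lost, there's no point retransmitting it TCP-style, because the next frame supersedes it anyway. A toy receiver that only keeps the newest sequence number shows the idea (a sketch of the general technique, not the PCoIP wire format):

```python
# Toy illustration of why a display protocol can tolerate UDP loss:
# each datagram carries a frame sequence number, and the receiver
# only ever shows the newest frame it has seen. A lost or late
# packet is simply superseded -- no retransmission needed.
# (A sketch of the general idea, not the actual PCoIP wire format.)

def receive(datagrams):
    """datagrams: iterable of (seq, frame) pairs, possibly lossy
    and out of order. Returns the frame that ends up on screen."""
    latest_seq = -1
    screen = None
    for seq, frame in datagrams:
        if seq > latest_seq:          # ignore stale or late frames
            latest_seq, screen = seq, frame
    return screen

# Packets 2 and 4 are lost and packet 3 arrives late, but the
# display still converges on the newest surviving frame:
arrived = [(1, "frame1"), (5, "frame5"), (3, "frame3")]
print(receive(arrived))  # frame5
```

That logic is fine on a LAN with negligible loss, but on a lossy WAN it means visible quality degradation rather than the delayed-but-complete delivery TCP would give you.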

I'm not writing this to slam PCoIP or to say it's bad; I'm just pointing out that it was designed for LAN environments. (Ironically VMware's marketing department likes to say that PCoIP is the only remoting protocol that was designed from the ground-up to be a full Windows desktop remoting protocol. This statement is actually true, although if they want to talk about today's implementation versus purity of design, then they should also mention that PCoIP was designed for high-bandwidth low-latency LAN connections.)

Despite all these challenges, VMware's software implementation is about all you hear about PCoIP anymore. Sure, there are still some workstation blade solutions on the market that leverage the Teradici chips, but physical blade workstations are like physical servers—there will always be a niche, but that ship has sailed.

What does Teradici do now?

There's disagreement as to whether Teradici's decision to license PCoIP to VMware was brilliant or stupid.

Those who believe it was stupid suggest that Teradici licensed away their crown jewels, and that now that a software implementation of PCoIP is out there, the hardware business will die. When that happens, what does Teradici have? A protocol that was designed to run on hardware that now only runs as software that's controlled by someone else? Big deal!

Those who believe that Teradici's decision to license to VMware was brilliant argue that Teradici had their backs to the wall and that VMware was their best (or only?) option. By the time Teradici started shipping their chips in 2007 they were already behind the times. How could Teradici have said "no" to that licensing agreement? Add to that the fact that while VMware needed a protocol, they didn't need Teradici's protocol. VMware could have licensed RGS from HP, and there's an argument to be made that that would have been easier since RGS was already software-based and at that point had been deployed to more production seats than PCoIP.

And let's not forget the financial side of Teradici. To date they've received $63m in funding. Let's assume they sell their chips for $20 each (which is probably high). That means they have to sell 3m of them just to get the revenue back to cover the initial investment, and they have to sell 6-9 million of them to make their investors happy. And in a world that's quickly moving away from host-side chips for remoting and is threatening to move away from chips for the client, that 6-9 million target looks tough. So why not leverage VMware to sell more PCoIP sockets, even if they are software? Then again, if VMware only pays half the price of a chip for a license, now Teradici needs 12-15m VMware View users to make the investors happy.
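The funding arithmetic above is worth working through explicitly. The chip price, the license discount, and the investor return multiple are all the article's rough assumptions, not Teradici's actual figures:

```python
# The funding arithmetic spelled out. Chip price, license price,
# and return multiples are rough assumptions from the article,
# not Teradici's actual figures.

funding = 63_000_000          # total funding to date, USD
chip_price = 20               # assumed revenue per chip, USD

breakeven_chips = funding / chip_price
print(breakeven_chips)        # 3,150,000 chips just to cover funding

# Investors typically want something like a 2-3x return:
target_chips = (2 * funding // chip_price, 3 * funding // chip_price)
print(target_chips)           # roughly 6.3m to 9.45m chips

# If a software license brings in half a chip's revenue:
license_price = chip_price // 2
target_licenses = (2 * funding // license_price,
                   3 * funding // license_price)
print(target_licenses)        # roughly 12.6m to 18.9m View users
```

However you round it, the software path roughly doubles the number of paying seats Teradici needs, which is the tension at the heart of the VMware deal.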

And frankly, now that Wyse has announced their Wyse Zero platform with an HDX zero client for $330 (one that can also do multimedia redirection, by the way), how much longer can they sell the chip-based PCoIP client for $450? I would imagine that a Wyse Zero for PCoIP is around the corner, and once the software zero clients catch on, that will kill half of Teradici's hardware market.

So what does Teradici do now? What options do they have besides doing what they're doing? Maybe their future is hypervisor integration? Or maybe by now it doesn't even matter?

Their VMware deal is not exclusive (although the software implementation the two companies built belongs to VMware). Does Teradici build a software implementation for someone else? Do they abandon chips altogether? Do they only focus on client-side chips? Will they exist in two years? (Or will they have been bought by VMware for an undisclosed-yet-super-cheap price?) What do you think?


*Footnote: I was contacted by Teradici's PR department recently. They requested that I shorten "PC-over-IP" to "PCoIP" instead of "PoIP." Their rationale was that PoIP is too close to VoIP. I don't see it, but whatever. If they want to be PCoIP instead of PoIP, that's fine with me. I'm happy to comply and pass it along.

Join the conversation



Interesting article Brian. I think buying them would be a problem for VMware as this would put them into the HW business, which would make them look silly. I also assume VMware has licensed the product in such a way that any new innovation that Teradici produces they get for free. Perhaps Microsoft or Citrix should buy Teradici just to kill them and VMware View. It would be a really bold move, or at the very least force VMware's hand into the hardware business, or cost them a ton of money only to have them kill the HW business.......

I think Teradici made a smart move, as they had no choice. They have got a lot of press due to PCoIP thanks to VMware. Now it's true that this is a high bw low latency only solution and UDP will be the biggest nightmare for them for the desktop. So they and VMware are screwed. However if I was Teradici, perhaps they need to think back to their roots. Build stuff that is awesome with graphics and doesn't compromise with software. Perhaps desktop is the wrong market. What about gaming? Does anybody know what OnLive plan to use. It will be interesting to see how good their user experience is. I think companies like Teradici should innovate in other markets without all the baggage of desktops unless it's for very specific use cases. If you can't then by all means sell out and move on.

Also another thought occurred to me inspired by the BS comments from VMWare in your post above "VMware responds by suggesting that it's not an Apples-to-Apples comparison, since people typically try to push PCoIP harder than RDP, which leads to more intensive use resulting in fewer users per server. They also point out that while PCoIP might consume more host resources, it delivers a better experience. So if user density is more important than user experience to you, then go ahead and continue using RDP."

This is utter f'ing crap, and market BS from ahole's who try to confuse the industry. They all matter, POSoIP does not work in the real world. To help the idiot admins out there who still think it does perhaps the guys at VRC can start a new bench mark, the user experience BS index. In my own testing I have found that PCSoIP reduces scalability by about 30% server side vs. RDP. That is really a huge joke if that really mattered to me. In my case since "it's a desktop" I am happy with 1-4 users per core. However for most people who VMware lie to and say VDI is cheaper, this independent data point would be really useful. So come on VRC guys, do the industry a huge service and become the standard for user experience measurement!


VMware indeed tried to license RGS from HP, but HP wanted way too much $$.

Considering an ASIC option for RDP is well underway, I see very few options other than for VMware to acquire Teradici. Otherwise, Hyper-V will potentially leapfrog vSphere as far as desktop-optimized hypervisors are concerned. A hypervisor capable of not just virtualizing the GPU (which is indispensable if VDI is to permeate the desktop space), but offloading the remote protocol (including graphics and USB, which is very time-sensitive) to an ASIC workhorse will have a significant positive impact on the user experience, density (desktops per server), and VDI's feasibility as a whole.

I'm sure the major server vendors are considering, or are already in the process of implementing, someone's ASIC onto their future VDI-optimized servers. Since Microsoft has more leverage than VMware with the hardware vendors, chances are it's Microsoft's ASIC that those vendors are implementing. But will those vendors also consider implementing VMware's ASIC alongside Microsoft's, therefore giving their customers the option to choose? Would it not be economical enough to do so, considering the minimal impact of adding yet another chip to the server board? Will Microsoft even allow it in their licensing contract?

The other question is whether Cisco would be a Teradici/VMware ASIC licensee. Considering the Microsoft/Cisco rivalry, would VMware be betting on Cisco to standardize on PCoIP by integrating the future PCoIP ASIC into their servers?

There are many more question marks here. For example, what would Citrix do once it finds itself between a rock and a hard place? Yes, HDX is "relatively" great today, but it's not enough for VDI to displace the traditional physical desktop. Citrix doesn't have the sort of leverage with server vendors that Microsoft does, and therefore it's fairly reasonable to assume that an ASIC option from Citrix is pretty much out of the question, especially since another point of contention with Microsoft is the last thing Citrix would want. And if Citrix doesn't do this anyway, what would the future be for HDX, and also for XenServer as far as the desktop space is concerned?

There's never been a more critical time for Citrix to start shopping itself around. Will it be Microsoft in 2010? And will the Citrix Nirvana story be compelling enough to finally make this much-anticipated acquisition a reality?


I agree with appdetective on the gaming market. I think I've commented on this gaming topic before. The gaming market develops in much faster cycles than the corporate desktops and moving games into the cloud is probably a little less complex than corporate desktops, as gaming is solely consumer oriented and you only have to deal with a couple of distributors for licensing purposes.

However, I haven't seen Teradici mentioned in relation to any of the two gaming service providers (OnLive, OTOY). But both companies are launching their services in the next few weeks. So, it will be very interesting to see how they deliver on the streaming of HD content over the internet.

Also, iPeak Networks might find itself in a good position to enter the gaming market as well. :)


Nice article brian!

(Disclaimer: VMware employee)

Interesting article, just some things I want to highlight:

VMware View 4 with PCoIP was released in November 2009, so about 8 months and not a year as you write.

When it comes to the "designed for WAN" point I don't really agree with you. It's built to handle high latency (WAN), and in my part of the world this is the challenge, not bandwidth; you can always buy more bandwidth, but it's harder to buy lower latency...

Another thing I disagree with is the belief that a WAN accelerator would do a good job of optimizing PCoIP. As you probably know, PCoIP only sends changed pixels, and PCoIP itself optimizes the traffic.

Why should a WAN accelerator do a better job of optimizing a UDP stream than PCoIP itself?
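The "only sends changed pixels" idea mentioned above can be sketched as a simple frame diff: compare the new frame against the previous one and put only the changed positions on the wire. This is a toy illustration of the general dirty-region technique, not Teradici's actual codec:

```python
# Toy sketch of "send only changed pixels": diff the new frame
# against the previous one and transmit just the changed positions.
# Illustrates the general technique, not Teradici's actual codec.

def dirty_pixels(prev_frame, new_frame):
    """Frames are equal-length lists of pixel values. Returns the
    (index, value) pairs the host would actually put on the wire."""
    return [(i, new) for i, (old, new)
            in enumerate(zip(prev_frame, new_frame)) if old != new]

prev = [0, 0, 0, 0, 0, 0, 0, 0]
new = [0, 0, 9, 0, 0, 0, 7, 0]   # only two pixels changed

updates = dirty_pixels(prev, new)
print(updates)  # [(2, 9), (6, 7)] -- 2 pixels sent instead of 8
```

Since the host has already stripped out the redundancy this way, there's little left for a generic WAN accelerator's compression or caching to find in the pixel stream itself.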

One last thing: Wyse has had a zero client with a Teradici chip in it for some time now; it's called the P20.

I guess they thought the concept was good and wanted to expand it further...

// Linjo


By acquiring Teradici, VMware wouldn't necessarily become a H/W vendor. Instead, they would further demonstrate their commitment to the desktop market. Teradici would become the "remote protocol" division of VMware, offering its ASIC designs to H/W vendors wishing to license it.

I'm not a fan of PCoIP in its current state. However, an ASIC option, as I wrote in the previous post, is imminent. Desktop-optimized servers are right around the corner, and GPU-awareness in the hypervisor will soon be a reality. The video adapter is already being virtualized by the hypervisor, and therefore the bulk of the remote protocol work will be offloaded to the chips sooner than later. This would constitute an important milestone in the evolution of VDI.

Regarding LAN vs. WAN, it's not as important as a lot of people think. For that, we have to go back and revisit the catalyst that ignited the VDI spark. Although it looks like Terminal Services on the surface, VDI is strategic, not tactical, for most organizations. Terminal Server is a tactical solution in 99 percent of the environments in which it's been deployed. On the other hand, VDI is seen as a desktop replacement opportunity for VMware, and a major disruption to the status quo for Microsoft. VMware has estimated the VDI opportunity to be as much as 20x larger than the server virtualization opportunity. Otherwise, if it were only a variation of Terminal Server, they wouldn't have bothered. Likewise, Microsoft wouldn't have bothered rethinking their strategy and realigning their priorities had they not seen VDI as a significant paradigm shift. It was a big enough disruption to cause Citrix to reinvent itself by going out and acquiring XenServer.

If you agree with me that VDI is a long-term physical desktop displacement strategy, then you have to also agree that no organization is going to migrate its entire IT infrastructure to an external cloud. This is not going to happen anytime soon. Therefore, we're going to see significant momentum towards on-premise (internal) clouds. In this case, the desktops will remain in-house nearby the end users.

For remote access, we all know that bandwidth is getting cheaper and more plentiful by the day, and WAN accelerators capable of overcoming TCP/IP's lackluster performance over the WAN are becoming more commonplace. Eventually, these acceleration solutions will be implemented in software, or maybe even in ASIC, providing a better overall experience over the WAN. Once we get there, moving the desktop to the external cloud, as well as DaaS, could become more palatable. We're not there yet.

Remote users aren't going to give up their PCs and laptops anytime soon. Therefore, local computing is here to stay, albeit it may very well include more manageability features by means of client hypervisors and other desktop virtualization approaches from the likes of RingCube and others.

There will never be one silver bullet to this very complex problem. The real solution will have to entail multiple approaches, including all of the above, as well as good old TS, app virtualization, etc.


In the ASIC war, I don’t think Teradici stands a chance against MS. Even if VMware backs them. Of course that totally assumes MS can deliver something that works. I bet Wyse are busy working on a RemoteFX client as I type. I don’t think Cisco wants to pick a fight in that arena with MS. It would shut down their UCS business as even more proprietary than it is today, and let’s not forget $$$$. Surely the UCS guys must be looking for opportunities to be relevant in Desktop Virtualization, and to do that they must be neutral to the leaders. I think there is a false assumption that Cisco is in bed with VMware. All Cisco cares about is selling networking gear and now infrastructure hardware. They will blow anybody to do that, just like Wyse, who are everybody's girlfriend at every conference with a supporting announcement and then backstabbing at the next conference with the competitors. Slimy company.

I also think in terms of ASIC the graphics cards guys have to step up to the plate. Even then, I have to ask myself is this the right model. So I really want graphics cards in my data center servers and then pay to cool them? Client side redirection seems a far smarter approach if the CPU and graphics card is there (note death to thin clients again). I agree with the point the hypervisor has to evolve to support desktop workloads. Today too much myopic thinking that a hypervisor as is today can do everything. NOT TRUE.

Oh and to Vmware employee who wants to accelerate PCSoIP at the application layer. Please pay back my investment in WAN accelerators, and explain to me how POSoIP is going to accelerate all the other traffic on the wire? Oh yeah, real world the thing you people never understand.

@edgeseeker. I agree that for most people today, VDI is seen as a desktop replacement strategy. TS has a history of tactical and cheap and therefore those folks will stay that way at best growing the use of TS. That however means VDI in a datacenter close to the application infrastructure. I agree with you, in many cases not moving to the public resources any time soon, otherwise you just add app latency. Moving your apps will take years, unless you adopt a co-lo strategy so you don’t have to build your own datacenters. I think this is the most real cloud scenario in the next 5 years. However the way desktops are consumed internally will become much more service orientated. I think only smart companies are going to do this in the next 2-3 years. The average Joe company with dumb labor admin, very low skilled server virt guy etc will do everything to stay status quo and never be able to solve the cost equation, because they don’t understand the value and no desire to change. Weak people and followers. This will mean that even in 10 years, time, most people will still use fat PC in a distributed manner, but they for sure will not be efficient. The growth will come in new models, VDI, Web, Client side hypervisors, even growth in TS. All of it will lead to better central mgmt to reduce the cost of the PC which is way too high and cumbersome today. Most people will take 10 years to wake up to do anything...... While others will be taking their organizations to the next level. So yes it’s strategic not a pc upgrade costs analysis which is what the average idiot admin thinks and therefore fails to sell to the CIO.


@Linjo, Thanks for the correction. This article is so long I actually referred to View 4 coming out in November in one spot and June in another. D'oh! (But I fixed it now.)

When you talk about PCoIP being designed to handle high latency, I just 100% disagree with you. PCoIP was designed for the LAN, and it wasn't until a year ago that they made some changes to allow it to work better on the WAN. (www.brianmadden.com/.../teradici-releases-pc-over-ip-for-the-wan.aspx) Now that said, I DO agree with your statement that you can buy more bandwidth but you can't buy less latency. However, that actually goes against what Teradici's WAN changes from May 2009 did (previous link), because those only focused on bandwidth. And honestly, the fact that PCoIP is 100% host-based means they will never have a good WAN solution. The best hybrid approaches to the WAN today include offloading some things to the client, but PCoIP can't do that, since it's UDP-based and there's no guarantee the offloaded data will make it there (esp. over a WAN). Of course, for pure host-rendered pixels, per-packet reliability doesn't matter, so UDP is fine there.
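That last point, why UDP is fine for host-rendered pixels but not for one-shot data, can be sketched with a toy simulation (purely illustrative, nothing to do with Teradici's actual code): every new frame supersedes the last, so a client just keeps the newest update that arrives, and lost packets are healed by the next refresh rather than by retransmission.

```python
import random

random.seed(42)  # make the lossy channel repeatable

def stream_frames(frames, loss_rate):
    """Deliver frame updates over an unreliable (UDP-like) channel.

    A lost packet is simply dropped: the client keeps whatever frame
    it last received, and any newer frame supersedes a lost one.
    """
    client_view = None
    for frame in frames:
        if random.random() > loss_rate:  # this packet survived the trip
            client_view = frame
    return client_view

# 100 screen updates through 20% packet loss: as long as updates keep
# flowing, the client converges on a recent frame, so retransmitting
# stale pixels (as TCP would) buys almost nothing.
print(stream_frames(range(100), loss_rate=0.2))
```

Contrast that with a print job or a USB transfer: there is no "next frame" to supersede a lost chunk, which is exactly why loss guarantees matter for those channels.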

Re: the WAN accelerator not being able to do a good job with PCoIP since it's 100% host-based: you're only making that point about the pixels. But what about audio, USB, printing, etc.? If the WAN accelerator could get inside the protocol, it could, for instance, lower the priority of printing traffic, or prioritize certain sessions over others. Is it ideal for everyone? No. But the fact that PCoIP encryption cannot be disabled means that no one gets this option, even if they want it. I don't know why VMware chose not to let users disable encryption. Maybe it's because the older PCoIP clients couldn't handle it? Maybe it's because that would expose the secret sauce to the world?

Finally, yes, I know the Wyse P20. That's the device I was referring to when I said that Wyse has a hardware PCoIP client that costs $450. So you have the hardware zero client for PCoIP at $450, and the software zero client for HDX at $330. Why should customers pay a $120 "tax" just to use a PCoIP thin client? I assume that Wyse will release a software zero client for PCoIP too, and the price had better be closer to $330, or customers won't want to pay. But if the price is cut, that's going to cut into Teradici's profits, which is why they're in this whole pickle in general.


Very informative article.


You mention that you see GPU awareness in the hypervisor becoming a commodity, and you also mention that local computing is here to stay.

What happens if local computing becomes the majority of VDI deployments in the future? Thanks to type-1 client hypervisors, server-based GPU awareness would still matter, because when the VDI instance *is* run on the server it will still have access to the GPU, but the feature wouldn't be as important as it's being made out to be right now.

I see a GPU-aware server hypervisor as an evolution of the virtualized Blade PC. Only good for VDI executed as SBC.



The GPU is important not just to accelerate the UI delivery, but also because many GPU-aware apps just won't run in the absence of a GPU. Therefore, GPUs are important no matter where the desktop will execute.



I understand that GPU-aware hypervisors are required for GPU-aware apps to utilize the GPU; however, my main concern was the actual importance of GPU-aware server hypervisors vs. GPU-aware client hypervisors.

It's kind of a moot point with VDI right now because the vast majority of instances are executed on the server, so it's important in today's VDI model.

But in the future, for the mass deployment of VDI, *if* the majority of implementations are going to be executed on the client, then the *current* marketing of GPU on the server side is just hype IMO.

The only way I'd prefer a GPU on the server side over the client is if the client hardware couldn't do the job, and both as it is right now and the way it will be in the future, it most definitely can. Unless I replace desktops with thin clients.

Just my opinion.



You're assuming that the majority of future VDI implementations will be client-based. Why do you assume that? There are just as many use cases favoring server-hosted desktops as there are cases favoring client-hosted desktops. If you think that client-hosted desktops will be more practical, think again. Who wants to wait an hour to replicate a desktop image down to the client device? And once it's replicated, what incentive does the user have to resync the image with the server-based one? Also, doesn't running the desktop image on the client mean that you have to assume that all the apps will run in disconnected mode without requiring a backend?

A server-side GPU isn't just to satisfy the requirements of GPU-aware apps. Even the remote protocol itself can make use of a GPU by offloading the JPEG compression task to it. This is a significant workload that, once offloaded to GPUs, can positively impact server density.
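The density argument is easy to see with back-of-envelope numbers (every figure below is an illustrative assumption, not measured PCoIP data): if display compression eats a noticeable slice of each VM's CPU budget, moving it to a GPU frees that slice for more VMs.

```python
# Back-of-envelope server-density estimate. Every number here is an
# illustrative assumption, not a measured PCoIP figure.
CORES = 16                   # physical cores in the host
APP_SHARE_PER_VM = 0.5       # cores a desktop VM needs for its apps
ENCODE_SHARE_PER_VM = 0.125  # extra cores spent compressing its display

def density(encode_on_gpu):
    """How many VMs fit, depending on where display encoding runs."""
    per_vm = APP_SHARE_PER_VM + (0 if encode_on_gpu else ENCODE_SHARE_PER_VM)
    return int(CORES / per_vm)

print(density(encode_on_gpu=False))  # 25 VMs while the CPU also encodes
print(density(encode_on_gpu=True))   # 32 VMs once encoding is offloaded
```

The absolute numbers are made up, but the shape of the result holds: any fixed per-VM encoding cost removed from the CPU translates directly into more desktops per host.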

As I said, tomorrow's VDI model is not a sledgehammer. Rather, it's a tool set encompassing server-hosted desktops, client-hosted desktops, hypervisor-less app and desktop virtualization from the likes of RingCube and InstallFree, and possibly other approaches that no one has cared to conceive of yet.    



I'm thinking the same thing.

When the following was true, SBC was a lot more attractive:

-PCs were very expensive

-SBC provided better/cheaper control/management

Now that the CapEx cost of PCs/laptops is low, if a cHv (client hypervisor) gives me centrally managed distributed computing, that's going to be tough to beat.

It's not that I expect XenApp/TS/RDS to become obsolete, but I think the motivation to deal with the limitations evaporates. Anywhere it is not a perfect fit, it might become hard to justify if the cHv achieves its potential.



Since when does CapEx supersede OpEx in such decisions? The cost of PCs/laptops has always been "relatively" low compared to the operational costs. That's always been the SBC industry's MO. If you look at any report from Gartner and others dating back to the mid-90s concerning this matter, it's clear that PC OpEx has always ranged from 8x to 15x the CapEx.
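Plugging those multiples into the arithmetic (with a hypothetical $800 PC price, since the exact figure hardly matters) shows why CapEx barely moves the needle:

```python
# Illustrative TCO arithmetic: with OpEx at 8x-15x the hardware price,
# even a much cheaper PC barely changes total cost of ownership,
# which is why SBC pitches target OpEx rather than CapEx.
capex = 800  # hypothetical PC purchase price
for opex_multiple in (8, 15):
    total = capex + capex * opex_multiple
    share = capex / total
    print(f"OpEx at {opex_multiple}x: total ${total}, "
          f"hardware is {share:.0%} of TCO")
```

At those ratios the hardware is only about 6-11% of lifetime cost, so halving the PC's sticker price shaves just a few percent off TCO.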


It's the apps, stupid! If the application infrastructure sits far away in a data center, you are going to have application latency and traffic. People keep talking about bandwidth requirements for VDI, but ask yourself how much application traffic you send over the wire. Stick all that in the data center, why not? I have found that I send less traffic over the wire as a result and get better app execution performance. Granted, I still have issues with bursty protocol traffic thanks to multimedia, which needs QoS innovation ASAP.

All a CHV is going to do is let you execute laptop-type or endpoint-type use cases. It will not help your application infra. It will certainly help with central mgmt, if and when it's built. VDI allows you to bring your apps closer to your desktop and do more. It's amazing that people still don't get it and think XC etc. are the world's solution. NO, they solve something different, namely multiple machines on a device, patching problems, lost data and devices, etc. They don't make apps go faster...

Now back to GPUs. It makes limited sense to me to use a GPU on the server from a scale point of view. If I have clients with GPUs, I bet I have more of them, and hence redirecting processing to those GPUs makes a lot more sense to me and is far more scalable. To be fair, I agree that if you are connecting from diverse clients with varying GPU horsepower, server side may be better, but looking forward, how true is that, unless you are locked into thin/zero clients? It is also a lot more expensive for me to cool hot GPUs in the datacenter.



GPU redirection is not always possible. Many applications do not fully make use of published APIs. Ask ThinAnywhere and Citrix, and they'll tell you that their GPU support is confined to a handful of apps. A server-side GPU is a lot more important than you think. If GPUs are standard equipment in today's physical PCs, we should expect no less from VDI-based PCs.



That's a fair point. But with things like DirectX, that will become less important over time as GDI fades. There will be use cases for both, for sure, so I'm not dismissing it. I'm just skeptical about how long it will take to become real. If it moves into a hypervisor function like RemoteFX, with the lock-in that implies, I think it will take even longer :-(


@edgeseeker + appdetective

Thanks for the diplomatic approach of telling it how it is.

You're right: if it takes an hour to replicate the image to the desktop, then what's the point of returning it to the datacenter? You won't need to.

I actually disagree that XC doesn't help apps. XC is a hypervisor along with Hyper-V, XenServer, vSphere, etc. However, it's a hypervisor optimized for desktops/laptops (aka the client). In the traditional PC world, apps run on the client; apps are made for client OSes running on client hardware. What RemoteFX is doing is turning your server into a desktop/server hybrid. It's like a modern Blade PC without the proprietary hardware.

The new Dell OptiPlex is already on the HCL for XC, and when XC is fully released, its goal is to be pre-installed on as many models as possible at the manufacturer level. Over time, if it's successful, which I don't doubt, it will become ubiquitous.

The reason I think distributing the computing to the endpoints will remain dominant, rather than centralizing computing in the datacenter, is the dramatically higher cost of datacenter resources compared to the cost of the endpoints.

Don't get me wrong, I am all revved up for SBC because it provides something of pure value that can't be achieved any other way. In the future it will provide central management, fault tolerance, and disaster recovery like nothing we have seen for traditional PCs. But IMO it's not going to be the main computing environment for desktops or apps.

SBC will always strongly complement CBC (Client-Based Computing), for cases where CBC is not desired or even capable.

Please feel free to dispute my opinion; I like hearing from others who may have more technical insight.