Does IPeak hold the secret to making PC-over-IP good on the WAN?

One of the vendors that Gabe and I met with at Synergy last week was IPeak Networks, a company with solutions to deal with packet loss over WAN environments. IPeak's main business today is in the telepresence and video conferencing markets, although they realized a few months ago that their stuff could potentially work for VDI and remote desktop environments as well.

The current IPeak product is a pair of hardware devices you put on each side of your WAN. They're completely transparent to the network (no IP address or anything) and they learn of each other's presence by marking unused areas of the TCP packet (similar to the way that many WAN optimization products find each other). Then they watch the traffic and figure out how many packets are being retransmitted due to loss.

Then when the IPeak things kick in, they actually slice the TCP stream into chunks and embed each chunk into multiple packets. That way if a single packet is lost, the receiving IPeak can reconstruct it on the fly from the other data it did receive. (In other words, it's like RAID for TCP packets.) Obviously there's a tradeoff between redundancy and efficiency, but the idea is that in situations where packet loss really hurts (like remote display protocols), you might be happy to trade a bit of extra bandwidth (the redundant data) for a big reduction in effective packet loss.

And that's really the key: You're purposefully using a bit more bandwidth to make up for the packet loss. (Although one could argue that a retransmit due to a lost packet is already just another form of additional bandwidth consumption since the same data is sent twice.)
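To make the "RAID for TCP packets" idea concrete, here's a toy sketch of simple XOR parity coding. IPeak hasn't published its actual algorithm, so the function names and the one-parity-packet-per-group scheme here are my own illustration, not their implementation:

```python
# Toy XOR-parity forward error correction, in the spirit of the
# "RAID for TCP" idea: for every k data packets, send one extra
# parity packet. Any single lost packet in the group can then be
# rebuilt at the receiver without a retransmit.

def make_group(payloads):
    """Given k equal-length payloads, return the k data packets plus one parity packet."""
    parity = bytes(len(payloads[0]))
    for p in payloads:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return list(payloads) + [parity]

def recover(survivors):
    """XOR together everything that arrived to reconstruct the one missing payload."""
    missing = bytes(len(survivors[0]))
    for p in survivors:
        missing = bytes(a ^ b for a, b in zip(missing, p))
    return missing

group = make_group([b"AAAA", b"BBBB", b"CCCC"])
# Suppose packet 1 ("BBBB") is lost; the receiver still has
# packets 0 and 2 plus the parity packet.
survivors = [group[0], group[2], group[3]]
print(recover(survivors))  # b'BBBB'
```

Note the bandwidth tradeoff the article describes: with groups of k packets, the overhead is a fixed 1/k extra bandwidth, versus the unpredictable retransmit-plus-latency cost of recovering the loss through TCP.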

I specifically called out PC-over-IP in the title of this article because it's the protocol most notorious (especially when compared to HDX) for not doing well over the WAN. (Although that could be mostly due to the fact that it's UDP-based.) Claudio Rodrigues (Microsoft MVP, Citrix CTP, and BriForum presenter) recently started working with IPeak, and he recorded a YouTube video demo of the Win32 VMware View client connecting to a View 4 environment from his hotel in San Francisco to the ESX server in Ottawa. The IPeak device has a management interface that reports actual real-world packet loss (in addition to the "effective" packet loss after IPeak corrects it), so this video shows an actual WAN connection over the real Internet. (i.e. no faking it with a WAN simulator to make the demo look good.)

And here's a YouTube video showing IPeak's effect on HDX over the WAN (via XenApp 6 on 2008 R2):

How real is packet loss today?

The demo video showed around 3% packet loss. I don't have access to any data that would show how common that is. On the one hand, I'd like to think that packet loss is a thing of the past. On the other, IPeak has a great business selling $25k devices to companies who just spent $250k on telepresence solutions, so the problem is real enough that people are paying IPeak to solve it in other areas. And it seems like what they're doing should work in our space too.

Quite frankly I think this is one of the more ingenious things I've seen in a while, although I'm not a network guy, so maybe this kind of stuff is common already?

These IPeak folks definitely deserve an award for creativity. I love the concept of applying a RAID-like system to network packets, and I can think of several scenarios where I'd gladly give up some bandwidth in exchange for fewer dropped packets.

I'm hoping that as IPeak comes to market in the virtual desktop space, they release a software version of their client that could run on an end user's device or snap into the Citrix Receiver software.

What do you think? Is packet loss a real problem in the desktop virtualization world that we need something like IPeak to solve? Would you exchange bandwidth for reliability?

Join the conversation



Please dig into Claudio's configuration, as I don't believe the performance degradation is primarily due to the artificial insertion of 3% packet loss between his end user device in San Francisco and his data center in Ottawa. The primary reason his session is performing the way it is in his demo is the use of PPTP with his RRAS VPN server. The artifacting that I observed is completely consistent with environments where I've seen similar use of a TCP-based VPN connection for the PCoIP connection. I've asked Claudio to recreate the demo leveraging an L2TP connection instead, and after several e-mails he finally relented and made a casual note in his blog and video demo that he's willing to do this, but there's no timeframe for when.

We can debate the use of PCoIP over the WAN at a different time but I do want to call out exactly what was claimed in the article he posted as well as his tweets versus what he showed . . .  the claim directly from his site states "how well does PCoIP perform over the real world WAN? Not that well as expected. And here is the living proof of that."  How can he claim that his demo is real world evidence when he had to artificially insert 3% packet loss into the demo?  Wouldn't real world just be more appropriately demonstrated by simply connecting to the desktop from San Francisco without the additional packet loss?  How much more real can you make real?  I'm not sure about you but I work out of my home based office and I don't purposely stick a server just before my cable modem that purposefully inserts additional packet loss between my office and the internet.

What I will say Claudio has effectively demonstrated is "how well does PCoIP perform over the WAN when inserting an additional 3% packet loss while tunneling a UDP based display protocol across a TCP based VPN connection? Not that well as expected. And here is the living proof of that."  And in this case the iPeak solution is very effective at giving a much better end user experience.


I'm the CTO of IPeak Networks and can tell you that packet loss is real, it is ubiquitous, and very unpredictable.  There are very high loss regions like Asia and Europe, but there is nowhere that you can escape from packet loss.  You never know when network conditions are going to change and cause serious degradation in VDI and remote desktop performance and pretty much every other application you may be using over the WAN.  That's why we developed our hardware, software and virtual appliance solutions to dramatically reduce the impact of loss for ALL UDP and TCP applications, including PCoIP, ICA and RDP, regardless of the transport network being used.  This isn't just a small improvement - we typically reduce loss by a factor of 20x-50x and we add ZERO latency.  If VDI is going to go global, packet loss needs to be addressed.

We have seen a very small amount of packet loss absolutely devastate the quality of video conferencing, TelePresence and all of the typical remote desktop protocols, even over a very simple network topology.

We would be happy to redo any of these tests that we demonstrate over any form of network topology. The real problem isn't the choice of WAN or VDI protocol, it is packet loss.  Packet loss ALWAYS hurts and IPeak is a very easy solution to integrate into an overall VDI / remote desktop strategy.

Packet loss is typically avoided by paying for private WANs (e.g. MPLS) but this is a very expensive solution.  With IPeak in place, we have found that customers can offload some of their applications from the expensive network to the Internet and thereby improve the performance of all of their applications and defer MPLS expansion costs.

IPeak's software and virtual appliance solutions are currently in a limited beta program for the VDI and remote desktop market and we expect to open it up in August.  Fully baked, truly plug-and-play hardware solutions are available today, starting at $990.


One thing I do want to make sure everyone understands is we are not claiming there is high packet loss everywhere, all the time, 24/7. What we know for sure, based on the information our boxes exchange at hundreds of customer sites worldwide on a daily basis, is loss is definitely there, at different degrees at different times and locations. Simple as that. We have that data.

The point of our technology is indeed to act as a protection layer that will kick in when needed. If you have no loss 98% of the time, great. I can guarantee you your users will always remember and blame you for that 2% of the time where loss was there and performance was not great. They will definitely not call you to say it is all good 98% of the time.

And of course keep in mind we help in several other cases, not VDI only. VoIP, Video Conferencing, etc.

Regardless, instead of relying on what we are saying or on what Brian saw during Synergy, simply try our stuff and see for yourselves how effective it is. As mentioned we have a BETA program in place, we do have small appliances for trials and so on. Real stuff, not vaporware.

CR (@ IPeakNetworks)


Matthew, I do not think Russel is debating whether or not your product works and helps protocols overcome packet loss, but more of a concern that you are showing PCoIP in your demo in a scenario where it looks like the only way to effectively get PCoIP to perform is with your product.  Since PCoIP is a UDP-based protocol, any customer who would be using this would leverage a firewall that is UDP friendly, not something that has to do a conversion on every packet.  The competition between PCoIP and ICA is ongoing, and whenever a vendor does not test equitably (Miercom, IPeak, etc.) this provides inaccurate and misleading information to customers who trust vendors and analysts to provide them with guidance.  I am sure your products do a great job where packet loss is an issue, and as you said there are many environments where packet loss does not impact the network, but if your firewall does not allow UDP to pass cleanly, I am certain you will have issues.  I have tested protocols globally for many years and recently ran PCoIP from Europe to the west coast with great success for a class I was holding with no issues; actually, the participants were amazed when they learned of the network topology and where the servers were hosted.  Bottom line: whenever you test a protocol you need to understand how it works and that the tools you use to test are suited for the environment.  If you use a free WAN emulator to induce packet loss or latency into an environment, it is not real world and will most likely result in inaccurate results.  I think running the test again would be the fairest thing to do, and leveraging the suggestions Russel made, I would be very curious to see the outcomes.  Cheers


@matthew - I'm not denying that packet loss exists.  My main concern with the article as originally posted on Claudio's blog was the lack of detail around what exactly was being demonstrated and then backed up with tweets such as:

"Everyone that saw PCoIP over the WAN using our solution, dropped their jaws on the floor. Seriously we make PCoIP usable over the WAN." - May 12th, 6:55 PM

"Make sure you guys watch how PCoIP performs over the real world WAN. You know, the WAN where loss exists. Not that 'unicorn WAN'." - May 15th, 8:35 AM

"The post is live now at 720p video also available. Resuming: PCoIP sucks over the WAN but we fix it." - May 15th, 8:36 AM

"@appdetective well making something that is unusable over the WAN pretty close to ICA I would call significant. That is what we do." - May 17th, 10:13 AM

Based on my understanding of the iPeak solution I'm confident that it can improve network performance for many application & presentation layer protocols like PCoIP, ICA, Telepresence (video, audio), et al.  But if your own special technology advisor is going to claim that "PCoIP sucks over the WAN" then full disclosure of what he's doing to show this needs to be included as well.  That way a true dialog can occur regarding the solution as a whole based on facts.  There is no mention in any of these posts, and no mention was made in the video until my prodding for architectural details, that other configuration issues may have been causing the degradation in performance that was shown.

I've made specific recommendations on how changes can be made to the configuration that should result in immediate performance increases for the display.  Technologies that improve performance of poorly configured architectures don't have much value in the long term.  I'm sure iPeak CAN improve performance of a properly architected deployment though and that's what I would like you to show.  If you would like to discuss this offline Claudio has my contact info including e-mail and phone numbers.


@winviewguru I do understand what you are saying, but basically what about customers that rely on a TCP-based VPN solution on a daily basis? Are you suggesting they should change it to a UDP-based one because of PCoIP? Like accommodating your existing production environment because one protocol used by one vendor does not work properly under certain conditions? I think it is usually the other way around: new products I bring in must work/play nice with what I have, and not the other way around.

What I am saying at the end is this: from the comments posted it seems like VMware is really trying to say customers should be aware that if PCoIP is used and they do have a TCP-based VPN, their solution may not perform as well as the competition, making it less flexible. Is this correct?

And yes, we will test it again with the same loss but using L2TP when time allows us and I will be the first one pointing the results and tweeting about it. :-)


@crod - "I think it is usually the other way around, new products I bring in must work/play nice with what I have and not the other way around."

How did you pry yourself away from IPX/SPX to leveraging TCP/IP on the LAN?  Did you upgrade to Snow Leopard?  I'm glad you don't have plans on deploying Windows 7.  Why?  Because all of these solutions introduced issues with forward compatibility for older technologies.

Every vendor has a best practice on how to deploy their solution.  Don't ignore these practices and use that as the case for saying the product is poor in those situations.


@RusselWilkinson I do understand, Russel. Is there an official document from VMware pointing out that particular deployment scenario where PCoIP may face performance issues over a PPTP VPN? I just could not find anything. As mentioned, we will definitely run more tests to show how everyone performs under a new set of conditions and the impact we have in that case. Stay tuned!


Change of subject, but my question is: does iPeak offer some of the same functions provided by other WAN optimization solutions (de-duplication and enhanced compression)? Could iPeak replace other WAN optimization options, or does it only address retransmits?


@ljames18 We only tackle the packet loss problem. But given the way we do it, nothing prevents our solution from working in conjunction with devices like Riverbed, Branch Repeaters, NetScalers, and so on. Would be awesome to see our technology embedded in these, though. :-)

CR @ IPeakNetworks


@RusselWilkinson  I think that the best way to address your concerns would be for IPeak to bring its solution to your lab.  We could then test how IPeak improves the performance of PCoIP over the WAN with multiple real-world scenarios in mind (metro, cross-country, global, different regions of the world, etc.) and with different configuration parameters.  We could then jointly publish the results.  Just let us know when to drop by.


Just to share from my experience: when it comes to video and VoIP (and for that matter other time-sensitive protocols), quality is also ultra-sensitive to jitter, in addition to packet loss. So even in regions where packet loss is practically zero, jitter may still be a major factor.

A good QoS setup, plus WAN optimization devices that apply long-term dictionary-based compression and improved congestion control algorithms, may reduce congestion variability and will help mute related sources of jitter and packet loss.
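To make the jitter point concrete, here's a minimal sketch of the interarrival-jitter estimator that RTP receivers use (RFC 3550, section 6.4.1). The send/receive timestamps are made up for illustration:

```python
# RFC 3550 interarrival jitter: a smoothed estimate of how much the
# one-way transit time varies from packet to packet. Times in ms.

def interarrival_jitter(send_times, recv_times):
    jitter = 0.0
    prev_transit = recv_times[0] - send_times[0]
    for s, r in zip(send_times[1:], recv_times[1:]):
        transit = r - s
        # Exponential smoothing with gain 1/16, per the RFC.
        jitter += (abs(transit - prev_transit) - jitter) / 16
        prev_transit = transit
    return jitter

# Packets sent every 20 ms; network delay alternates between 50 and 70 ms,
# so average loss could be zero while jitter is still substantial.
send = [i * 20 for i in range(10)]
recv = [s + (50 if i % 2 else 70) for i, s in enumerate(send)]
print(f"jitter estimate: {interarrival_jitter(send, recv):.1f} ms")
```

A constant delay (however large) yields zero jitter; it's the packet-to-packet variation that display and voice protocols have to buffer against.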

When it comes to RTT latency and packet loss, a good annual report to read is the ICFA-SCIC report at SLAC:

The report contains a lot of information about current stats measured globally.

When interpreting the stats, here's a quote of guidance from that doc:


• At losses of 4-6% or more video-conferencing becomes irritating and non-native language speakers are unable to communicate effectively. The occurrence of long delays of 4 seconds (such as may be caused by timeouts in recovering from packet loss) or more at a frequency of 4-5% or more is also irritating for interactive activities such as telnet and X windows. Conventional wisdom among TCP researchers holds that a loss rate of 5% has a significant adverse effect on TCP performance, because it will greatly limit the size of the congestion window and hence the transfer rate, while 3% is often substantially less serious (Vern Paxson). A random loss of 2.5% will result in Voice Over Internet Protocols (VOIP) becoming slightly annoying every 30 seconds or so. A more realistic burst loss pattern will result in VOIP distortion going from not annoying to slightly annoying when the loss goes from 0 to 1%. Since TCP throughput for the standard (Reno based) TCP stack goes as 1/(RTT*sqrt(loss)) (see M. Mathis, J. Semke, J. Mahdavi, T. Ott, "The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm", Computer Communication Review, volume 27, number 3, pp. 67-82, July 1997), it is important to keep losses low for achieving high throughput.

• For RTTs, studies in the late 1970s and early 1980s showed that one needs < 400ms for high productivity interactive use. VOIP requires a RTT of < 250ms or it is hard for the listener to know when to speak.
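The Mathis relation quoted above is easy to plug numbers into. This back-of-envelope sketch drops the constant factor and uses illustrative values (1460-byte MSS, 80 ms RTT), not measurements from any of the demos discussed here:

```python
import math

# Simplified Mathis et al. model: TCP (Reno) throughput is roughly
# proportional to MSS / (RTT * sqrt(loss)). Constant factor omitted,
# so treat the absolute numbers as rough upper bounds.

def mathis_throughput_mbps(mss_bytes, rtt_s, loss):
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss)) / 1e6

for loss in (0.0001, 0.01, 0.03):
    mbps = mathis_throughput_mbps(1460, 0.080, loss)
    print(f"{loss:.2%} loss -> ~{mbps:.2f} Mbit/s")
```

Going from 0.01% loss to 3% loss cuts the achievable rate by a factor of sqrt(300), roughly 17x, which is why even "modest" loss numbers matter so much for TCP-based sessions.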



BTW, if you don't want to go through the annual report (or wait for it) active stats and queries can be done here:


I think a lot of the real questions are being ignored. I like Claudio, straight shooter. I have not tested the product for full disclosure, but here are my concerns.

In practical terms, most of the WAN cases that really impact user performance are internal WANs for the desktop, where packet loss will be a lot lower.

For mobile users coming in over the Internet, especially in some regions, packet loss is more of an issue. Sure, iPeak could help here; however, forward the clock 10 years and I wonder how much this will still matter...

There is always compromise. The extra bits they put in the packet stream increase bandwidth. That is something that needs to be measured and published.

Loss characteristics need to be understood. Is the loss random, due to noise like a microwave network, or is the loss introduced due to congestion which is more likely to be bursty in nature? Even if they do things to help deal with this, one would need to better understand the additional latency introduced.
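The random-versus-bursty distinction raised above is commonly captured with the two-state Gilbert-Elliott model: a "good" state with no loss and a "bad" (congested) state where loss is likely. Here's a minimal simulation sketch; all parameters are made up for illustration:

```python
import random

# Two-state Gilbert-Elliott burst-loss model. With these illustrative
# parameters the chain spends ~2% of its time in the bad state
# (p_gb / (p_gb + p_bg)) with a mean burst length of 1/p_bg = 4 packets,
# giving an average loss rate near 1% -- but concentrated in bursts,
# which is far harder on FEC schemes than the same loss spread randomly.

def gilbert_losses(n, p_gb=0.005, p_bg=0.25, loss_bad=0.5, seed=1):
    """Return n booleans (True = packet lost) with bursty loss."""
    rng = random.Random(seed)  # fixed seed so runs are reproducible
    bad = False
    losses = []
    for _ in range(n):
        # Stay bad with prob 1 - p_bg, or flip good -> bad with prob p_gb.
        bad = rng.random() < (1 - p_bg if bad else p_gb)
        losses.append(bad and rng.random() < loss_bad)
    return losses

losses = gilbert_losses(100_000)
print(f"average loss rate: {sum(losses) / len(losses):.2%}")
```

A fixed parity budget sized for the average rate can still be overwhelmed during a burst, which is why understanding the loss pattern (and any added latency from interleaving across bursts) matters, not just the headline percentage.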

So in conclusion, I am sure this can help in some specific use cases on certain network types. It is in no way shape or form going to make POSoIP work on a WAN. UDP only is still a flawed architecture, and bandwidth consumption is still high even when POSoIP pares back and gives you blurry images. It can’t traverse networks securely etc. POSoIP is a half assed solution and as my English friends would say, “TWATS out there still fall for it”, amazing.....


@appdetective Like you, I hate BS and for that reason I go straight to the point and to facts.

So first of all, it amazes me to see how many people out there think the world is 'Europe' or 'US'. These are the people that call themselves 'Enterprise Architects with experience deploying global environments/networks'. BS. If you never deployed anything in remote places like the Amazon, Africa, remote locations in Australia, etc, like I did, do not call yourself that. Of course on a 'global' network connecting CA to Europe, there will be nowhere near the loss you will see from the Amazon to Europe or from Tanzania to Europe.

I have NEVER said loss is a huge problem everywhere. No. As the Stanford report linked above pointed out, loss is worse in certain places of the world for sure (and here we are talking average loss). If we go down the bursty loss route, well that is everywhere pretty much, unless you have a pretty damn good and pricey SLA for an MPLS.

Sure, with unlimited money I can have a 1Gbps link with no loss and almost no latency. That is not always the case in the real world (which is what most of our customers in Asia learned, where MPLS is very expensive; by loosening their SLAs they can pay a lot less and get the same service once our IPQ boxes are on the link).

Another typical example is wireless connections, 3G or WiFi. Several things will cause problems, from interference to congestion. Also location plays a big role in this case. Your AT&T or Verizon connection guaranteed does NOT perform exactly the same across the USA. Same for providers in Europe or anywhere else. So there is way more to the problem than everyone is mentioning so far.

Regarding what will happen in 10 years, well if we think like that we should all stop working on anything that fixes problems that exist today or that will be there in the medium term. In 10 years, how much will many things matter? ICA, HDX, SBC? Windows for God's sake.

No one knows. We have an educated guess. That is all.

Our technology does help a problem that does exist today. Period. If you (meaning everyone that reads Brian Madden, that works on IT, etc) do have the problem or not is not the point. Again if you are working off the most connected, richest places in the world, certainly you do not face the same issues as someone in a remote location in South America. This does not mean companies should then give these people the finger and tell them to move to great US or Europe where the WAN is exactly like LAN just a little bit further apart. This is not the reality for a huge part of the world if you guys do not know that.

And again, on the relevance topic, just remember 'Claudio's Law', which says the time a PC takes to boot the latest and greatest OS and load the greatest and latest Office suite is constant. Yes, Windows 95 with Office 97 took the EXACT same time to boot as your kick-a$$ quad-core machine with 8GB and Windows 7 with Office 2010. So the bottom line is it does NOT matter how much more horsepower you have; applications will find a way to use it. Either efficiently or not (by using 500MB runtimes/frameworks).

Sure down the road in 10 years connectivity hopefully will be much better. At the same time there will be probably a VioletRay player that holds 10TB movies and people will be downloading these like crazy over the Internet3. And then you are back to the same issues we face today.

Again, we deal with a problem that is here today and will be there for a long time in several different markets. This does NOT mean it affects every single person here and that every single company should use it. It just means there is a huge market for it, and that it does help several things, from Video Conferencing to Remote Display Protocols when loss hits you.

Simple as that.

Finally, If anyone is skeptical, again, we have an OPEN Beta program in place and anyone can at anytime try our software and hardware appliances and put them to the test under your conditions/environment.

Cheers and my apologies for the long reply. :-)