I wrote about VMware’s View 4 announcement on Monday. Since then there have been a lot of questions raised. I’ve managed to get answers to some, and I’m working on getting answers to the rest. But I figured that for today I’d update everyone on what I’ve learned so far.
(By the way, I’m still working my way through the product for a full review of VMware’s software implementation of PCoIP. I hope to have that done in a few weeks. And the server I bought for that is a Dell with dual quad-core AMD 2.4GHz processors, 12GB RAM, and four disks for under $2,000! How crazy is that!?!)
What platforms does the software PCoIP client run on?
On Monday I wrote that the PCoIP software client was only available for Windows and that Mac and Linux support would come soon. It turns out that Linux support will come very soon, although it will only be for Linux thin client devices and available through VMware’s thin client partners. There won’t be a Linux software client available for download directly from VMware initially, although that’s something they’re working on too.
VMware said that vSphere 4 ups the density from 8 to 16 desktop VMs per core. Is this due to software improvements to ESX 4 itself, or to the fact that it’s running on faster hardware such as Intel’s Nehalem chips?
Here’s the response from Scott Davis, VMware’s desktop CTO:
It’s difficult to generate a single metric for VMs per core. Your mileage can vary based on the application mix and level of activity, as well as the hardware involved--processors, memory, and I/O--so there isn’t one generic number reflecting VMs per core. That said, we have seen a dramatic increase in desktop density with View 4.0, vSphere 4.0, and current-generation hardware. It’s the combination that delivers the overall benefit.

With prior-generation systems, vSphere 4.0 and View 4.0 averaged around 8 hosted VMs per core. The current generation of processors and servers are far more powerful and designed for virtualization, with many innovations with respect to both core densities and technology optimizations, including both processor virtualization assists and virtualization-oriented memory management hardware. These are not automatic benefits, i.e. you don’t get them by just running VI 3.5 on them. We made numerous optimizations in vSphere 4.0 to make use of these hardware advances. For example, with the Intel Nehalem class processors, we have support for EPT and large pages, and we optimized the way the VMM handles VM exits. These changes deliver benefits with the new generation of hardware and are not used on older hardware. With such newer hardware technologies and vSphere/View 4.0, we have observed 16 (and in some cases higher) VMs per core with knowledge worker load profiles.
As we discussed during our call on Friday [This is Scott referring to the call that I had with him on Friday], VMware also made numerous changes to algorithms in different aspects of vSphere not directly related to new hardware support, specifically for improving aspects of VDI scalability. Areas include VMFS file system, guest and hypervisor I/O drivers, network drivers, core scheduler, management control processes (hostd, vmx), etc. Generally these changes improve the user experience, the system administrator experience and scalability, and they reduce cost through greater consolidation of resources while maintaining the level of experience. Relevant metrics that we looked to improve included VM power on times, storage and memory consumption, VMs per cluster, VMs per memory and core, VMs per LUN, VMs simultaneously booted, etc. And these metrics are not just per VM/User, we measure and optimize for bulk operations. I reviewed many of these with you during Friday’s session.
Given all this, it’s not meaningful, nor would I be comfortable, to state a single VM-per-core metric for View 4/vSphere 4, nor do I consider it useful to try to attribute specific gains to hardware or software. System (total solution) performance, scalability, and cost are the important metrics, and View 4 with vSphere 4 delivers in all three dimensions.
Is ThinPrint or TCX supported when brokering a connection to a blade or TS session? If not, why not?
ThinPrint and TCX are not supported for blade or TS sessions. (Note they did not answer the “why not” portion.)
Any ETA on Win7 support (i.e. moving it out of “experimental” mode)?
Right now the View 4 Connection Server must be installed on Windows 2003, i.e. Win 2008 is not supported. When will this change?
What are the multi-monitor support options?
Up to four displays at 1920x1200 each, 32-bit color, “L”-shaped configurations, and auto-fit to clients.
Is there built-in SSL-VPN support?
No. VMware recommends the Cisco VPN soft client.
How is the software PCoIP different from the hardware PCoIP?
This is a big question that I’m trying to answer fully. Right now I can say that there are several differences. First of all, there are a few new “little” features, like dynamic audio quality adjustment in the software version.
But more importantly, VMware re-engineered quite a bit of the way the PCoIP protocol components run on the host. In the previous hardware-based implementations, the PCoIP input was literally the DVI cable output from the remote host workstation. It was just looking at a stream of pixels and nothing else. But VMware moved that processing into the VM, where it can access the GDI and the Windows graphics stack, which has allowed them to make host-side encoding modifications that were not available to Teradici previously. (The software version of PCoIP accesses all of this via its own virtual display adapter, just like RDP and ICA, which is obviously very different from the hardware version.) This is what allows VMware to tweak how PCoIP is encoded based on whether it’s looking at text, graphical applications, video, Flash, etc.
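To make the content-aware encoding idea concrete, here’s a minimal sketch in Python. The region types, codec names, and thresholds are all my own invention for illustration, not anything from VMware’s or Teradici’s actual implementation:

```python
from enum import Enum, auto

class Region(Enum):
    """Hypothetical classifications a host-side encoder might assign."""
    TEXT = auto()      # crisp edges, so lossless matters
    VIDEO = auto()     # fast-changing, so lossy is acceptable
    GRAPHICS = auto()  # static imagery that can refine over time

def choose_codec(region: Region, bandwidth_kbps: int) -> str:
    """Pick an encoding strategy for one screen region.

    Illustrative logic only; real PCoIP tuning is far more involved.
    """
    if region is Region.TEXT:
        return "lossless"           # text always stays sharp
    if region is Region.VIDEO:
        # Trade quality for frame rate when the pipe is thin.
        return "lossy-high" if bandwidth_kbps >= 2000 else "lossy-low"
    return "progressive"            # refine toward lossless as bandwidth allows

print(choose_codec(Region.TEXT, 500))    # lossless, regardless of bandwidth
print(choose_codec(Region.VIDEO, 1000))  # lossy-low on a constrained link
```

A pixel-stream-only encoder (the hardware case) can’t make these distinctions, because it never knows what kind of content a region holds, which is exactly the advantage of sitting inside the Windows graphics stack.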
So actually I think yesterday’s blog post by Juan Rivera (the guy at Citrix who leads HDX) is not entirely accurate. It seems he’s assuming the software version of PCoIP works exactly the same as the hardware version, which it doesn’t. (By the way, I think one of the reasons people assume this is that the hardware and software PCoIP clients are fully interoperable, so there’s an assumption that they all must be doing pretty much the same thing. But I think there’s a capabilities exchange that takes place, so if a software PCoIP host knows it’s talking to a hardware PCoIP client, it will be sure to send data in a format that client can decode. But it’s not like the protocol is identical for every connection scenario.) For example, VMware moved their multimedia redirection into PCoIP, so it’s definitely NOT 100% host-side rendering like Juan said.
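That capabilities-exchange idea can be sketched like this. This is a guess at the mechanism, with made-up feature names; VMware hasn’t published the actual handshake:

```python
def negotiate(host_caps: set[str], client_caps: set[str]) -> set[str]:
    """Use only the features both endpoints advertise."""
    return host_caps & client_caps

# Hypothetical feature sets: the software host knows extra tricks
# (e.g. multimedia redirection) that a hardware zero client may not decode.
soft_host = {"pixels", "mmr", "dynamic-audio"}
zero_client = {"pixels"}
soft_client = {"pixels", "mmr", "dynamic-audio"}

print(negotiate(soft_host, zero_client))  # falls back to the plain pixel stream
print(negotiate(soft_host, soft_client))  # full feature set stays available
```

This is why full interoperability between hardware and software endpoints doesn’t imply the wire protocol is identical in every scenario: the richer features simply drop out of the intersection when one side can’t decode them.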
Again, I haven’t had a chance to really bang on this yet, so I can’t get much more specific at this point. But I’ll be testing it soon.
Can View broker a PCoIP connection to a blade with a hardware PCoIP card?
From VMware’s Warren Ponder:
Yes, currently we will broker to blade PCs/rack workstations that have Tera1 host cards. This is mostly the sweet spot for the higher-end workstation market (designers, illustrators, EDA, etc.). The workstation market is not huge, but there is reasonable business there mixing blades and virtual desktops. Typically these users need a dedicated machine for their graphics work and a secondary machine for their productivity work. This allows them to switch between the two or run them in parallel.
There will need to be a firmware change on the host card to work with the soft client. That will come approximately 90 days post GA. But all the client/broker work is done.
We maintain compatibility between the software and hardware PCoIP solutions via the following combinations:
- PCoIP zero client (i.e. hardware client) to VM
- Soft client to PCoIP physical host
- PCoIP zero client to PCoIP Physical Host
- Soft client to VM
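The four combinations above (plus the firmware caveat from Warren’s answer) can be captured in a trivial compatibility table. The endpoint names here are mine, not VMware’s:

```python
# Supported client/host pairings per VMware's list. The soft-client ->
# physical-host pairing additionally needs the Tera1 host-card firmware
# update, expected roughly 90 days post-GA.
SUPPORTED = {
    ("zero-client", "vm"),
    ("zero-client", "physical-host"),
    ("soft-client", "vm"),
    ("soft-client", "physical-host"),  # requires the firmware update
}

def is_supported(client: str, host: str) -> bool:
    """Check whether a PCoIP endpoint pairing is on the supported list."""
    return (client, host) in SUPPORTED

print(is_supported("soft-client", "vm"))  # True
```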